
Electro-Optical System

Analysis and Design


A Radiometry Perspective

Cornelius J. Willers

-----
Library of Congress Cataloging‐in‐Publication Data 
 
Willers, Cornelius J. 
  Electro‐optical system analysis and design: a radiometry perspective / Cornelius 
J Willers. 
       pages cm 
  Includes bibliographical references and index. 
  ISBN 978‐0‐8194‐9569‐3 
 1.  Electrooptics. 2.  Optical measurements. 3.  Electrooptical devices‐‐Design and 
construction.  I. Title.  
  TA1750.W55 2013 
  621.381ʹ045‐‐dc23 
2013002619 
                            
Published by 
 
SPIE—The International Society for Optical Engineering 
P.O. Box 10       
Bellingham, Washington  98227‐0010 USA 
Phone: +1 360 676 3290 
Fax: +1 360 647 1445 
Email:  [email protected] 
Web:  http://spie.org 
 
Copyright © 2013 Society of Photo‐Optical Instrumentation Engineers (SPIE) 
 
All rights reserved. No part of this publication may be reproduced or distributed 
in any form or by any means without written permission of the publisher. 
 
The content of this book reflects the work and thought of the author(s).  
Every effort has been made to publish reliable and accurate information herein,
but the publisher is not responsible for the validity of the information or for any 
outcomes resulting from reliance thereon. 
 
Cover image “Karoo Summer,” by Fiona Ewan Rowett (www.fionarowett.co.za), 
used with permission. 
 
Printed in the United States of America. 
First printing 
 

 
Preface

If you have an apple and I have an apple
and we exchange apples,
then you and I will still each have one apple.
But if you have an idea and I have an idea
and we exchange these ideas,
then each of us will have two ideas.
George Bernard Shaw

On Sharing

Teachers cross our paths in life. Some teachers have names, others leave
their marks anonymously. Among my teachers at the Optical Sciences
Center at the University of Arizona were James Palmer, Eustace Dereniak,
and Jack Gaskill. They freely shared their knowledge with their students.
Some teachers teach through the pages of their books, and here I have
to thank Bill Wolfe, George Zissis, and many more. Many years ago,
R. Barry Johnson presented a short course which influenced my career
most decisively.
The intent of this book is to share some of my experience, ac-
cumulated through years of practical radiometry: design, measurements,
modeling, and simulation of electro-optical systems. The material pre-
sented here builds upon the foundation laid at the Optical Sciences Center.
I had the opportunity to share this material in an academic environment
at graduate level in an engineering school, thereby clarifying key concepts.
Beyond the mathematics and dry theory lies a rich world full of subtle in-
sights, which I try to elucidate. May this book help you, the reader, grow
in insight and share with others.

Reductionism, Synthesis, and Design

The reductionist approach holds the view that an arbitrarily complex sys-
tem can be understood by reducing the system to many smaller systems


that can be understood. This view is based on the premise that the com-
plex system is the sum of its parts, and that by understanding the parts,
the whole can be understood. While the reductionist approach certainly
has weaknesses, it works well for the class of problems considered in this
book. The methodology followed here is to develop the theory concisely
for simple cases, building a toolset and a clear understanding of the
fundamentals.
The real world does not comprise loose parts and simple systems.
Once the preliminaries are out of the way, we proceed to consider more com-
plex concepts such as sensors, signatures, and simple systems comprising
sources, a medium, and a receiver. Using these concepts and the tools de-
veloped in this book, the reader should be able to design a system of any
complexity. Two concurrent themes appear throughout the book: frag-
menting a complex problem into simple building blocks, and synthesizing
(designing) complex systems from smaller elements. In any design pro-
cess, these two actions take place interactively, mutually supporting each
other. In this whirlpool of analysis and synthesis, uncontrolled external
factors (e.g., the atmosphere, noise) influence the final outcome. This is
where the academic theory finds engineering application in the real world.
This book aims to demonstrate how to proceed along this road.
Toward the end of the book, the focus shifts from a component-level
view to an integrated-system view, where the ‘system’ comprises a (sim-
ple or composite) source, an intervening medium, and a sensor. Many
real-world electro-optical applications require analysis and design at this
integrated-system level. Analysis and design, as a creative synthesis of
something new, cannot be easily taught other than by example. For this
purpose several case studies are presented. The case studies are brief and
only focus on single aspects of the various designs. Any real design
process would go into far more detail, beyond the scope of this
book.

General Comments

The purpose of this book is to enable the reader to find solutions to real-
world problems. The focus is on the application of radiometry in various
analysis and design scenarios. It is essential, however, to build on the foun-
dation of solid theoretical understanding, and gain insight beyond graphs,
tables, and equations. Therefore, this book does not attempt to provide an
extensive set of ready-to-use equations and data, but rather strives to pro-
vide insight into hidden subtleties in the field. The atmosphere provides
opportunity for a particularly rich set of intriguing observations.

The strict dictionary definition of ‘radiometry’ is the measurement
of optical flux. In this book, the term ‘radiometry’ is used in its wider
context to specifically cover the calculation of flux as well. This wider
definition is commonly used by practitioners in the field to cover all forms
of manipulation, including creation, measurement, calculation, modeling,
and simulation of optical flux. The focus of this book is not on radiometric
measurement but on the analysis and modeling of measured data, and the
design of electro-optical systems.
Antoine de Saint-Exupéry once wrote, “You know you’ve achieved
perfection in design, not when you have nothing more to add, but when
you have nothing more to take away.” The painful aspect of writing a book
is to decide what not to include. This book could contain more content on
radiometric measurement, emissivity measurement, properties of different
types of infrared detectors, or reference information on optical material
properties; however, these topics are already well covered by other excel-
lent books, much better than can be achieved in the limited scope of this
book.
The book provides a number of problems, some with worked solu-
tions. Problems in the early chapters tend to be smaller in scope,
whereas those in later chapters tend to be wider. The
more-advanced problems require numerical solutions. Although it is cer-
tainly possible to read the book without doing the advanced problems, the
reader is urged to spend time mastering the skills to do these calculations.
This investment will pay off handsomely in the future. Some of the prob-
lems require data not readily found in book format. The data packages
are identified (e.g., DP01) and are obtainable from the pyradi website (see
Section D.3.4).
To the uninitiated, the broader field of radiometry is dangerous terri-
tory, with high potential for errors and not-so-obvious pitfalls. Our work in
the design labs, on field measurement trials, and in the academic environ-
ment led to the development of a set of best practices, called the ‘Golden
Rules,’ which strive to minimize the risk of error. Some of these principles
come from James Palmer’s class, while most were stripes hard-earned in
battle. The readers are urged to study, use, and expand these best practices
in their daily work. Any feedback, on the golden rules or any other aspect
of the book, would be appreciated.
A book is seldom the work of one mind only; it is the result of a road
traveled with companions. Along this road are many contributors, both
direct and inadvertent. My sincere thanks to all who made their precious
time and resources available in this endeavor. My sincere thanks go to

Riana Willers for patience and support, as co-worker on our many projects
— her light footprints fall densely on every single page in this book: advis-
ing, scrutinizing every detail, debating symbols and sentences, editing text
and graphics, compiling the nomenclature and index, and finally, acting as
chapter contributor. Riana is indeed the ghost writer of this book! Fiona
Ewan Rowett for permission to use her exquisite “Karoo Summer” on the
front cover. The painting beautifully expresses not only the hot, semi-arid
Karoo plateau in South Africa, but also expresses radiated light and vi-
brant thermal energy, the subject of this book. My teachers at the Optical
Sciences Center who laid the early foundation for this work. Ricardo San-
tos and Fábio Alves for contributing to the chapter on infrared detector
theory and modeling. The pyradi team for contributing their time toward
building a toolkit of immense value to readers of this book. Derek Griffith
for the visual and near-infrared reflectance measurements. Hannes Calitz
for the spectral measurements, and Azwitamisi Mudau for the imaging
infrared measurements. Dr Munir Eldesouki from KACST for permis-
sion to use the Bunsen flame measured data in the book. The many col-
leagues, co-workers, and students at Kentron (now Denel Dynamics), the
CSIR, KACST, and the University of Pretoria for influencing some aspect
of the book. Scott McNeill and Tim Lamkins for patience and guiding me
through the publication process. Scott’s untiring patience in detailed cor-
rection deserves special mention. Eustace Dereniak for encouraging me
to submit the book for publication. Barbara Grant, Eustace Dereniak, and
an anonymous reviewer for greatly influencing the book in its final form.
Finally, Dirk Bezuidenhout and the CSIR for supporting the project so
generously in the final crucial months before publication.
Mark Twain wrote that he did not allow his schooling to get in the
way of his education. It is my wish that you, my esteemed reader, will
delve beyond these written words into the deeper insights. Someone else
said that the art of teaching is the art of assisting in discovery. May you
discover many rich insights through these pages.
Nelis Willers
Hartenbos
March 2013
Contents

Nomenclature xvii

Preface xxiii

1 Electro-Optical System Design 1


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The Principles of Systems Design . . . . . . . . . . . . . . . 2
1.2.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 The design process . . . . . . . . . . . . . . . . . . . . 2
1.2.3 Prerequisites for design . . . . . . . . . . . . . . . . . 3
1.2.4 Product development approaches . . . . . . . . . . . 4
1.2.5 Lifecycle phases . . . . . . . . . . . . . . . . . . . . . . 4
1.2.6 Parallel activities during development . . . . . . . . . 7
1.2.7 Specifications . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.8 Performance measures and figures of merit . . . . . 10
1.2.9 Value systems and design choices . . . . . . . . . . . 11
1.2.10 Assumptions during design . . . . . . . . . . . . . . . 11
1.2.11 The design process revisited . . . . . . . . . . . . . . 12
1.3 Electro-Optical Systems and System Design . . . . . . . . . 14
1.3.1 Definition of an electro-optical system . . . . . . . . . 14
1.3.2 Designing at the electro-optical-system level . . . . . 15
1.3.3 Electro-optical systems modeling and simulation . . 16
1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Introduction to Radiometry 19
2.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Radiometry Nomenclature . . . . . . . . . . . . . . . . . . . 23
2.3.1 Definition of quantities . . . . . . . . . . . . . . . . . 23
2.3.2 Nature of radiometric quantities . . . . . . . . . . . . 25
2.3.3 Spectral quantities . . . . . . . . . . . . . . . . . . . . 25
2.3.4 Material properties . . . . . . . . . . . . . . . . . . . . 27
2.4 Linear Angle . . . . . . . . . . . . . . . . . . . . . . . . . . . 27


2.5 Solid Angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


2.5.1 Geometric and projected solid angle . . . . . . . . . . 28
2.5.2 Geometric solid angle of a cone . . . . . . . . . . . . 29
2.5.3 Projected solid angle of a cone . . . . . . . . . . . . . 31
2.5.4 Geometric solid angle of a flat rectangular surface . 32
2.5.5 Projected solid angle of a flat rectangular surface . . 32
2.5.6 Approximation of solid angle . . . . . . . . . . . . . . 33
2.5.7 Projected area of a sphere . . . . . . . . . . . . . . . . 34
2.5.8 Projected solid angle of a sphere . . . . . . . . . . . . 35
2.6 Radiance and Flux Transfer . . . . . . . . . . . . . . . . . . 35
2.6.1 Conservation of radiance . . . . . . . . . . . . . . . . 35
2.6.2 Flux transfer through a lossless medium . . . . . . . 37
2.6.3 Flux transfer through a lossy medium . . . . . . . . . 38
2.6.4 Sources and receivers of arbitrary shape . . . . . . . 38
2.6.5 Multi-spectral flux transfer . . . . . . . . . . . . . . . 39
2.7 Lambertian Radiators and the Projected Solid Angle . . . 41
2.8 Spatial View Factor or Configuration Factor . . . . . . . . 43
2.9 Shape of the Radiator . . . . . . . . . . . . . . . . . . . . . . 44
2.9.1 A disk . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.9.2 A sphere . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.10 Photometry and Color . . . . . . . . . . . . . . . . . . . . . 45
2.10.1 Photometry units . . . . . . . . . . . . . . . . . . . . . 45
2.10.2 Eye spectral response . . . . . . . . . . . . . . . . . . 46
2.10.3 Conversion to photometric units . . . . . . . . . . . . 47
2.10.4 Brief introduction to color coordinates . . . . . . . . 48
2.10.5 Color-coordinate sensitivity to source spectrum . . . 49
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3 Sources 57
3.1 Planck Radiators . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.1.1 Planck’s radiation law . . . . . . . . . . . . . . . . . . 60
3.1.2 Wien’s displacement law . . . . . . . . . . . . . . . . 62
3.1.3 Stefan–Boltzmann law . . . . . . . . . . . . . . . . . . 63
3.1.4 Summation approximation of Planck’s law . . . . . . 64
3.1.5 Summary of Planck’s law . . . . . . . . . . . . . . . . 65
3.1.6 Thermal radiation from common objects . . . . . . . 65
3.2 Emissivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.2.1 Kirchhoff’s law . . . . . . . . . . . . . . . . . . . . . . 69
3.2.2 Flux transfer between a source and receiver . . . . . 70
3.2.3 Grey bodies and selective radiators . . . . . . . . . . 71
3.2.4 Radiation from low-emissivity surfaces . . . . . . . . 73
3.2.5 Emissivity of cavities . . . . . . . . . . . . . . . . . . . 74

3.3 Aperture Plate in front of a Blackbody . . . . . . . . . . . . 75


3.4 Directional Surface Reflectance . . . . . . . . . . . . . . . . 75
3.4.1 Roughness and scale . . . . . . . . . . . . . . . . . . . 76
3.4.2 Reflection geometry . . . . . . . . . . . . . . . . . . . 77
3.4.3 Reflection from optically smooth surfaces . . . . . . . 77
3.4.4 Fresnel reflectance . . . . . . . . . . . . . . . . . . . . 78
3.4.5 Bidirectional reflection distribution function . . . . . 80
3.5 Directional Emissivity . . . . . . . . . . . . . . . . . . . . . . 83
3.6 Directional Reflectance and Emissivity in Nature . . . . . . 85
3.7 The Sun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

4 Optical Media 97
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.2 Optical Mediums . . . . . . . . . . . . . . . . . . . . . . . . 98
4.2.1 Lossy mediums . . . . . . . . . . . . . . . . . . . . . . 98
4.2.2 Path radiance . . . . . . . . . . . . . . . . . . . . . . . 99
4.2.3 General law of contrast reduction . . . . . . . . . . . 102
4.2.4 Optical thickness . . . . . . . . . . . . . . . . . . . . . 103
4.2.5 Gas radiator sources . . . . . . . . . . . . . . . . . . . 103
4.3 Inhomogeneous Media and Discrete Ordinates . . . . . . . 104
4.4 Effective Transmittance . . . . . . . . . . . . . . . . . . . . . 105
4.5 Transmittance as Function of Range . . . . . . . . . . . . . . 108
4.6 The Atmosphere as Medium . . . . . . . . . . . . . . . . . . 108
4.6.1 Atmospheric composition and attenuation . . . . . . 108
4.6.2 Atmospheric molecular absorption . . . . . . . . . . 111
4.6.3 Atmospheric aerosols and scattering . . . . . . . . . . 112
4.6.4 Atmospheric transmittance windows . . . . . . . . . 116
4.6.5 Atmospheric path radiance . . . . . . . . . . . . . . . 118
4.6.6 Practical consequences of path radiance . . . . . . . . 120
4.6.7 Looking up at and looking down on the earth . . . . 121
4.6.8 Atmospheric water-vapor content . . . . . . . . . . . 121
4.6.9 Contrast transmittance in the atmosphere . . . . . . 124
4.6.10 Meteorological range and aerosol scattering . . . . . 127
4.7 Atmospheric Radiative Transfer Codes . . . . . . . . . . . . 129
4.7.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.7.2 Modtran™ . . . . . . . . . . . . . . . . . . . . . . . . 129
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

5 Optical Detectors 135


5.1 Historical Overview . . . . . . . . . . . . . . . . . . . . . . . 135

5.2 Overview of the Detection Process . . . . . . . . . . . . . . 136


5.2.1 Thermal detectors . . . . . . . . . . . . . . . . . . . . 136
5.2.2 Photon detectors . . . . . . . . . . . . . . . . . . . . . 138
5.2.3 Normalizing responsivity . . . . . . . . . . . . . . . . 140
5.2.4 Detector configurations . . . . . . . . . . . . . . . . . 140
5.3 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.3.1 Noise power spectral density . . . . . . . . . . . . . . 141
5.3.2 Johnson noise . . . . . . . . . . . . . . . . . . . . . . . 142
5.3.3 Shot noise . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.3.4 Generation–recombination noise . . . . . . . . . . . . 144
5.3.5 1/f noise . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.3.6 Temperature-fluctuation noise . . . . . . . . . . . . . 145
5.3.7 Interface electronics noise . . . . . . . . . . . . . . . . 146
5.3.8 Noise considerations in imaging systems . . . . . . . 146
5.3.9 Signal flux fluctuation noise . . . . . . . . . . . . . . . 146
5.3.10 Background flux fluctuation noise . . . . . . . . . . . 147
5.3.11 Detector noise equivalent power and detectivity . . . 147
5.3.12 Combining power spectral densities . . . . . . . . . . 149
5.3.13 Noise equivalent bandwidth . . . . . . . . . . . . . . 149
5.3.14 Time-bandwidth product . . . . . . . . . . . . . . . . 150
5.4 Thermal Detectors . . . . . . . . . . . . . . . . . . . . . . . . 151
5.4.1 Principle of operation . . . . . . . . . . . . . . . . . . 151
5.4.2 Thermal detector responsivity . . . . . . . . . . . . . 152
5.4.3 Resistive bolometer . . . . . . . . . . . . . . . . . . . . 155
5.4.4 Pyroelectric detector . . . . . . . . . . . . . . . . . . . 157
5.4.5 Thermoelectric detector . . . . . . . . . . . . . . . . . 159
5.4.6 Photon-noise-limited operation . . . . . . . . . . . . . 161
5.4.7 Temperature-fluctuation-noise-limited operation . . 163
5.5 Properties of Crystalline Materials . . . . . . . . . . . . . . 163
5.5.1 Crystalline structure . . . . . . . . . . . . . . . . . . . 164
5.5.2 Occupation of electrons in energy bands . . . . . . . 165
5.5.3 Electron density in energy bands . . . . . . . . . . . . 166
5.5.4 Semiconductor band structure . . . . . . . . . . . . . 169
5.5.5 Conductors, semiconductors, and insulators . . . . . 170
5.5.6 Intrinsic and extrinsic semiconductor materials . . . 171
5.5.7 Photon-electron interactions . . . . . . . . . . . . . . 174
5.5.8 Light absorption in semiconductors . . . . . . . . . . 176
5.5.9 Physical parameters for important semiconductors . 179
5.6 Overview of the Photon Detection Process . . . . . . . . . . 179
5.6.1 Photon detector operation . . . . . . . . . . . . . . . . 179
5.6.2 Carriers and current flow in semiconductor material 179
5.6.3 Photon absorption and majority/minority carriers . 180

5.6.4 Quantum efficiency . . . . . . . . . . . . . . . . . . . 181


5.7 Detector Cooling . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.8 Photoconductive Detectors . . . . . . . . . . . . . . . . . . . 187
5.8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 187
5.8.2 Photoconductive detector signal . . . . . . . . . . . . 187
5.8.3 Bias circuits for photoconductive detectors . . . . . . 189
5.8.4 Frequency response of photoconductive detectors . . 190
5.8.5 Noise in photoconductive detectors . . . . . . . . . . 191
5.9 Photovoltaic Detectors . . . . . . . . . . . . . . . . . . . . . 193
5.9.1 Photovoltaic detector operation . . . . . . . . . . . . . 193
5.9.2 Diode current–voltage relationship . . . . . . . . . . 196
5.9.3 Bias configurations for photovoltaic detectors . . . . 197
5.9.4 Frequency response of a photovoltaic detector . . . . 202
5.9.5 Noise in photovoltaic detectors . . . . . . . . . . . . . 203
5.9.6 Detector performance modeling . . . . . . . . . . . . 207
5.10 Impact of Detector Technology on Infrared Systems . . . . 210
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

6 Sensors 221
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
6.2 Anatomy of a Sensor . . . . . . . . . . . . . . . . . . . . . . 221
6.3 Introduction to Optics . . . . . . . . . . . . . . . . . . . . . . 223
6.3.1 Optical elements . . . . . . . . . . . . . . . . . . . . . 223
6.3.2 First-order ray tracing . . . . . . . . . . . . . . . . . . 225
6.3.3 Pupils, apertures, stops, and f-number . . . . . . . . . 226
6.3.4 Optical sensor spatial angles . . . . . . . . . . . . . . 230
6.3.5 Extended and point target objects . . . . . . . . . . . 232
6.3.6 Optical aberrations . . . . . . . . . . . . . . . . . . . . 232
6.3.7 Optical point spread function . . . . . . . . . . . . . . 235
6.3.8 Optical systems . . . . . . . . . . . . . . . . . . . . . . 236
6.3.9 Aspheric lenses . . . . . . . . . . . . . . . . . . . . . . 237
6.3.10 Radiometry of a collimator . . . . . . . . . . . . . . . 238
6.4 Spectral Filters . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6.5 A Simple Sensor Model . . . . . . . . . . . . . . . . . . . . . 240
6.6 Sensor Signal Calculations . . . . . . . . . . . . . . . . . . . 242
6.6.1 Detector signal . . . . . . . . . . . . . . . . . . . . . . 242
6.6.2 Source area variations . . . . . . . . . . . . . . . . . . 244
6.6.3 Complex sources . . . . . . . . . . . . . . . . . . . . . 245
6.7 Signal Noise Reference Planes . . . . . . . . . . . . . . . . . 245
6.8 Sensor Optical Throughput . . . . . . . . . . . . . . . . . . . 248
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250

7 Radiometry Techniques 255


7.1 Performance Measures . . . . . . . . . . . . . . . . . . . . . 255
7.1.1 Role of performance measures . . . . . . . . . . . . . 255
7.1.2 General definitions . . . . . . . . . . . . . . . . . . . . 256
7.1.3 Commonly used performance measures . . . . . . . 257
7.2 Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . 261
7.2.1 Solid angle spatial normalization . . . . . . . . . . . . 261
7.2.2 Effective value normalization . . . . . . . . . . . . . . 261
7.2.3 Peak normalization . . . . . . . . . . . . . . . . . . . . 262
7.2.4 Weighted mapping . . . . . . . . . . . . . . . . . . . . 263
7.3 Spectral Mismatch . . . . . . . . . . . . . . . . . . . . . . . . 264
7.4 Spectral Convolution . . . . . . . . . . . . . . . . . . . . . . 265
7.5 The Range Equation . . . . . . . . . . . . . . . . . . . . . . . 267
7.6 Pixel Irradiance in an Image . . . . . . . . . . . . . . . . . . 268
7.7 Difference Contrast . . . . . . . . . . . . . . . . . . . . . . . 271
7.8 Pulse Detection and False Alarm Rate . . . . . . . . . . . . 272
7.9 Validation Techniques . . . . . . . . . . . . . . . . . . . . . . 275
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276

8 Optical Signatures 279


8.1 Model for Optical Signatures . . . . . . . . . . . . . . . . . . 279
8.2 General Notes on Signatures . . . . . . . . . . . . . . . . . . 283
8.3 Reflection Signatures . . . . . . . . . . . . . . . . . . . . . . 284
8.4 Modeling Thermal Radiators . . . . . . . . . . . . . . . . . . 285
8.4.1 Emissivity estimation . . . . . . . . . . . . . . . . . . 287
8.4.2 Area estimation . . . . . . . . . . . . . . . . . . . . . . 288
8.4.3 Temperature estimation . . . . . . . . . . . . . . . . . 290
8.5 Measurement Data Analysis . . . . . . . . . . . . . . . . . . 292
8.6 Case Study: High-Temperature Flame Measurement . . . . 295
8.7 Case Study: Low-Emissivity Surface Measurement . . . . 295
8.8 Case Study: Cloud Modeling . . . . . . . . . . . . . . . . . 297
8.8.1 Measurements . . . . . . . . . . . . . . . . . . . . . . 297
8.8.2 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
8.8.3 Relative contributions to the cloud signature . . . . . 300
8.9 Case Study: Contrast Inversion/Temperature Cross-Over . 300
8.10 Case Study: Thermally Transparent Paints . . . . . . . . . . 301
8.11 Case Study: Sun-Glint . . . . . . . . . . . . . . . . . . . . . . 302
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304

9 Electro-Optical System Analysis 309


9.1 Case Study: Flame Sensor . . . . . . . . . . . . . . . . . . . 309

9.2 Case Study: Object Appearance in an Image . . . . . . . . . 311


9.3 Case Study: Solar Cell Analysis . . . . . . . . . . . . . . . . 315
9.3.1 Observations . . . . . . . . . . . . . . . . . . . . . . . 315
9.3.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 316
9.4 Case Study: Laser Rangefinder Range Equation . . . . . . . 321
9.4.1 Noise equivalent irradiance . . . . . . . . . . . . . . . 321
9.4.2 Signal irradiance . . . . . . . . . . . . . . . . . . . . . 322
9.4.3 Lambertian target reflectance . . . . . . . . . . . . . . 323
9.4.4 Lambertian targets against the sky . . . . . . . . . . . 324
9.4.5 Lambertian targets against terrain . . . . . . . . . . . 325
9.4.6 Detection range . . . . . . . . . . . . . . . . . . . . . . 326
9.4.7 Example calculation . . . . . . . . . . . . . . . . . . . 326
9.4.8 Specular reflective surfaces . . . . . . . . . . . . . . . 327
9.5 Case Study: Thermal Imaging Sensor Model . . . . . . . . 330
9.5.1 Electronic parameters . . . . . . . . . . . . . . . . . . 330
9.5.2 Noise expressed as D∗ . . . . . . . . . . . . . . . . . . 331
9.5.3 Noise in the entrance aperture . . . . . . . . . . . . . 331
9.5.4 Noise in the object plane . . . . . . . . . . . . . . . . 332
9.5.5 Example calculation . . . . . . . . . . . . . . . . . . . 333
9.6 Case Study: Atmosphere and Thermal Camera Sensitivity 334
9.7 Case Study: Infrared Sensor Radiometry . . . . . . . . . . . 337
9.7.1 Flux on the detector . . . . . . . . . . . . . . . . . . . 337
9.7.2 Focused optics . . . . . . . . . . . . . . . . . . . . . . 339
9.7.3 Out-of-focus optics . . . . . . . . . . . . . . . . . . . . 342
9.8 Case Study: Bunsen Burner Flame Characterization . . . . 344
9.8.1 Data analysis workflow . . . . . . . . . . . . . . . . . 345
9.8.2 Instrument calibration . . . . . . . . . . . . . . . . . . 346
9.8.3 Measurements . . . . . . . . . . . . . . . . . . . . . . 348
9.8.4 Imaging-camera radiance results . . . . . . . . . . . . 350
9.8.5 Imaging-camera flame-area results . . . . . . . . . . . 352
9.8.6 Flame dynamics . . . . . . . . . . . . . . . . . . . . . 353
9.8.7 Thermocouple flame temperature results . . . . . . . 354
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356

10 Golden Rules 365


10.1 Best Practices in Radiometric Calculation . . . . . . . . . . 365
10.2 Start from First Principles . . . . . . . . . . . . . . . . . . . 365
10.3 Understand Radiance, Area, and Solid Angle . . . . . . . . 366
10.4 Build Mathematical Models . . . . . . . . . . . . . . . . . . 366
10.5 Work in Base SI Units . . . . . . . . . . . . . . . . . . . . . . 367
10.6 Perform Dimensional Analysis . . . . . . . . . . . . . . . . 367
10.7 Draw Pictures . . . . . . . . . . . . . . . . . . . . . . . . . . 368

10.8 Understand the Role of π . . . . . . . . . . . . . . . . . . . . 371


10.9 Simplify Spatial Integrals . . . . . . . . . . . . . . . . . . . . 371
10.10 Graphically Plot Intermediate Results . . . . . . . . . . . . 372
10.11 Follow Proper Coding Practices . . . . . . . . . . . . . . . . 372
10.12 Verify and Validate . . . . . . . . . . . . . . . . . . . . . . . 372
10.13 Do It Right — the First Time! . . . . . . . . . . . . . . . . . 373
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

A Reference Information 375


Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383

B Infrared Scene Simulation 385


B.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
B.2 Simulation as Knowledge-Management Tool . . . . . . . . 386
B.3 Simulation Validation Framework . . . . . . . . . . . . . . 386
B.4 Optical Signature Rendering . . . . . . . . . . . . . . . . . . 387
B.4.1 Image rendering . . . . . . . . . . . . . . . . . . . . . 391
B.4.2 Rendering equation . . . . . . . . . . . . . . . . . . . 393
B.5 The Effects of Super-Sampling and Aliasing . . . . . . . . . 396
B.6 Solar Reflection, Sky Background, and Color Ratio . . . . . 398
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401

C Multidimensional Ray Tracing 403

D Techniques for Numerical Solution 407


D.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
D.2 The Requirement . . . . . . . . . . . . . . . . . . . . . . . . 407
D.3 Matlab® and Python™ as Calculators . . . . . . . . . . . . . 409
D.3.1 Matlab® . . . . . . . . . . . . . . . . . . . . . . . . . . 410
D.3.2 Numpy and Scipy . . . . . . . . . . . . . . . . . . . . 410
D.3.3 Matlab® and Python™ for radiometry calculations . 410
D.3.4 The pyradi toolkit . . . . . . . . . . . . . . . . . . . . 411
D.4 Helper Functions . . . . . . . . . . . . . . . . . . . . . . . . 411
D.4.1 Planck exitance functions . . . . . . . . . . . . . . . . 412
D.4.2 Spectral filter function . . . . . . . . . . . . . . . . . . 413
D.4.3 Spectral detector function . . . . . . . . . . . . . . . . 415
D.5 Fully Worked Examples . . . . . . . . . . . . . . . . . . . . . 417
D.5.1 Flame sensor in Matlab® . . . . . . . . . . . . . . . . 417
D.5.2 Flame detector in Python™ . . . . . . . . . . . . . . . 421
D.5.3 Object appearance in an image in Python™ . . . . . 424
D.5.4 Color-coordinate calculations in Python™ . . . . . . 430
D.5.5 Flame-area calculation in Matlab® . . . . . . . . . . . 434
D.5.6 The range equation solved in Python™ . . . . . . . . 435

D.5.7 Pulse detection and false alarm rate calculation . . . 436


D.5.8 Spatial integral of a flat plate in Matlab® . . . . . . . 437
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440

E Solutions to Selected Problems 441


E.1 Solid Angle Definition . . . . . . . . . . . . . . . . . . . . . 441
E.2 Solid Angle Approximation . . . . . . . . . . . . . . . . . . 441
E.3 Solid Angle Application (Problem 2.4) . . . . . . . . . . . . 448
E.4 Flux Transfer Application . . . . . . . . . . . . . . . . . . . 448
E.5 Simple Detector System (Problem 6.2) . . . . . . . . . . . . 450
E.6 InSb Detector Observing a Cloud (Problem 8.2) . . . . . . . 451
E.7 Sensor Optimization (Problem 9.1) . . . . . . . . . . . . . . 459

F Additional Reading and Credits 471


F.1 Additional Reading . . . . . . . . . . . . . . . . . . . . . . . 471
F.2 Credits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472

Index 477
Chapter 1
Electro-Optical System Design

There are two ways of constructing a design.
One way is to make it so simple that there are obviously no deficiencies.
Another way is to make it so complicated that there are no obvious deficiencies.
Sir Charles Antony Richard Hoare

1.1 Introduction

Optical flux has a source and, for the applications considered in this book,
also a destination (sometimes called a receiver or absorber). Having a
source and a destination, it must also have a channel, path, or medium.
The approach in this book is to consider all three components interacting
with the flux. The presence of more than one component implies that the
flux can be seen to operate in a system context, with elements of the system
including at least a source, a medium, and a receiver. Accepting the notion
of an electro-optical system, the system can be subjected to actions such as
analysis, design, and testing.
The fundamental approach taken in this book is that an electro-optical
system should be considered as a system with cause-and-effect implica-
tions. Although the components in the system may not interact in a physi-
cal or causal manner, the performance of the system can be expressed as a
set of relationships. In these relationships the system’s performance leads
to interdependencies between parameters of the various components. For
example, the maximum range performance of a laser rangefinder depends
on laser power, atmospheric transmittance, and sensor noise, all of which
require tradeoff analysis to optimize the system. Hence, notwithstanding
the autonomy of each component, from a system perspective, the design
process induces a synthetic parameter interdependence between the vari-
ous components in the system.
The premise for this approach asserts that electro-optical systems can
be optimized by trading off parameters between different components of


the system. Such a capability provides freedom and power to optimize the
system by appropriate design choices. Returning to the laser rangefinder,
the cost and complexity of laser power can be traded against the cost and
complexity of noise in the receiver: the selection of the appropriate low-
noise design may ease the burden of higher-power laser technology.
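
To make this tradeoff concrete, consider the minimal Python sketch below
(Python is the numerical tool used in Appendix D). It solves a deliberately
simplified range equation for two hypothetical design options; the model and
all numerical values are illustrative assumptions, not the design method of
any real rangefinder.

    import numpy as np

    def max_range(P_laser, NEP, rho=0.2, D=0.05, gamma=0.3, snr_req=5.0):
        # Simplified range equation for a beam filling a Lambertian target:
        # P_rx = P_laser * rho * exp(-2*gamma*R) * D^2 / (4 R^2), gamma in 1/km.
        R = np.linspace(10.0, 20e3, 200000)  # candidate ranges [m]
        P_rx = P_laser * rho * np.exp(-2 * gamma * R / 1e3) * D**2 / (4 * R**2)
        ok = P_rx / NEP >= snr_req  # received power falls monotonically with R
        return R[ok][-1] if ok.any() else 0.0

    # Option A: high peak laser power with an ordinary receiver.
    # Option B: a quarter of the power with a four-times-quieter receiver.
    # Both reach the same maximum range: power and noise trade off directly.
    print(max_range(P_laser=4e6, NEP=1.0e-9))
    print(max_range(P_laser=1e6, NEP=0.25e-9))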
This chapter provides an overview of the generic design process, there-
by providing a basis for electro-optical system design. The basic philoso-
phy outlined here has application in practically all design situations. The
ideas presented come from system engineering principles, practical expe-
rience, and common sense. This chapter structures the activities that most
designers use intuitively. Although it is true that simple systems can be
handled intuitively, the design of complex systems requires stricter disci-
pline. The thoughtless execution of a procedure can be dangerous, but a
well-structured approach provides a better basis for a sound design.

1.2 The Principles of Systems Design

1.2.1 Definitions

A system is a collection of items required to perform a task in order to
achieve a goal. All lower-level items that may influence the goal, however
remote, should be included as part of the system. Thus, when consider-
ing the system as a whole, all influences and effects should be accounted
for, and the design can therefore be optimal within the constraints of that
particular system.
System engineering is the entire management process required to bring
about and maintain a complex system. In this chapter, particular emphasis
is placed on specification practices and design analysis, as these lie at the
very core of electro-optical system design.
A subsystem or system segment is a small system in itself, usually with
a common goal, common technology, or cause-and-effect theme. There is
a hierarchy of levels (such as system, subsystem, and subsubsystem) to
signify levels of sub-ness. The word ‘system’ is used to indicate something
consisting of a number of smaller items and therefore applies to all levels.
Where necessary, the terms ‘subsystem’ or ‘greater system’ are used to
denote level.

1.2.2 The design process

Solution design can be defined as the synthesis of smaller entities con-
tributing to the whole of a solution. Design is an art, and a brief definition

[Figure 1.1 here: a loop diagram in which the predicted performance of an
initial design is compared against the goals; while not complete, the design
is iterated.]

Figure 1.1 The basic design process.

cannot capture the subtleties inherent in the design process or the require-
ments for a good design. However, it may be enlightening to attempt a
definition. The definition will be given in two phases: a short version in
this section, and a more-complete version in Section 1.2.11.
Any design should start with a clear goal to be achieved and an initial
design or design concept. The design goal can be translated into perfor-
mance measures (see Section 1.2.8) by predicting performance in calcu-
lation, simulation, or by building the item. Once a system is built, the
measured performance is compared to the design goals. If the measured
performance compares acceptably with the goals, the design is completed.
If the goals are not met by the design, the design details are iterated to find
a better design, mostly by improving certain aspects of the current design.
In certain cases the performance may not meet the system requirements
but is acceptable to the customer, in which case the user requirement and
system specifications are altered. This process is summarized in Figure 1.1.

1.2.3 Prerequisites for design

There are certain prerequisites that have to be satisfied before any design
work can start. This may sound ludicrous, but these obvious prerequisites
are often not met! The most important are:

1. There must be a concisely defined need. The developer must often
help the client express the user need in practical system requirements.
In turn, the client must also clearly understand the content of
the developer’s offer.

2. There must be adequate, approved resources in the form of labor, fund-
ing, schedule, and infrastructure.

3. There must be a knowledge base and a capability to design, develop,
test, and produce the system.

4. There must be a well-established technological base, including availabil-
ity and capability, to support the development.

5. There must be a clear expression of mutual expectations between the
client and developer. These expectations cover the deliverable prod-
uct, the scope and nature of mutual support, the definition of contract
completion, and delivery.

1.2.4 Product development approaches

Two different approaches are used in developing one-off versus mass-
produced products. For mass-produced products, the primary goal during
development is the compilation of build and test documentation suitable for
use by the production line. The hardware products built during the devel-
opment phases are mainly used to verify the design and documentation.
An example of this situation is the volume production of products such
as vehicles. For one-off product development, the requirement for docu-
mentation may be just as strict, but the product is proof of design and, at
the same time, also the final deliverable product. An example of a one-off
product is a large telescope, where each instance of the product is essen-
tially unique.

1.2.5 Lifecycle phases

The lifecycle of a complex system can be divided into several distinct
phases, each with a unique goal. It is difficult to isolate any one phase
at any one instant because the phases flow into each other. If the formal
system-engineering procedures are strictly applied, each phase culminates
in a design review where the results are reviewed in terms of the goals for
that phase. The decisions taken during the design review may lead to a
change in strategy or system definition.
Not all of the phases shown here are always present. The need for
a development phase depends on the complexity of the system and the
maturity of similar systems. For example, a rework of an earlier design
does not require many of the initial phases.
The various product lifecycle phases shown in Figure 1.2 can be sum-
marized as follows:

[Figure 1.2 here: a flow diagram of the lifecycle phases (user requirement,
concept study, definition study, hardware development, industrialization,
qualification, production, operational service, support and maintenance,
upgrade or retrofit, and disposal), with the parallel activities of product
evaluation under operational conditions, modeling and simulation, and
various analyses running throughout the lifecycle.]

Figure 1.2 Development phases for complex systems.

Concept study: The concept study is a high-level study into different con-
cepts that may be employed to achieve goals. Extensive use is made
of experienced, knowledgeable people to express opinions and do
feasibility studies. These studies are concerned with functionality
and not with implementation details. At the end of this phase the
concepts, building blocks, and basic technologies required in the sys-
tem are identified.
Definition study: During the definition study, the overall system is di-
vided into subsystems or system segments. The subsystem func-
tional requirements are then further allocated to hardware or soft-
ware development requirements by trading off the required perfor-
mance of the various items in the subsystem. In large projects, a
formalized technical delegation structure must be developed. The
structure depends on the work contents and available labor. The fu-
ture development team will grow around this core of technical lead-
ers. The primary aim is to develop design structure and quantify
requirement specifications for the various subsystems and develop-
ment items.
Hardware development, prototype: The aim of the prototype phase is to
confirm that the hardware concept will indeed work. The hard-
ware/software constructed at this time is not suitable for the final
application, but it should perform all functions required (possibly at
reduced performance). Considerable field tests are performed during
this phase. The work during this phase confirms the design approach
at a conceptual level. Furthermore, valuable hardware experience is
gained in the construction of the prototype.
Hardware development, experimental model: The hardware constructed
in the experimental-model phase should be suitable for the final ap-
plication, albeit with low confidence. All functions should be per-
formed, and the size and packaging should be according to specifi-
cation. Formal documentation-change control procedures are intro-
duced to keep track of changes. The output from this phase com-
prises the design data pack, hardware, and software, designed and
built to meet all specifications. In some cases the specifications are
not yet met, and there may be some hardware and software function-
ality not yet finalized.
Hardware development, advanced model: Most of the design issues are
cleared up toward the end of the advanced-model phase. The design
documentation, test methods, and specifications are finalized. Un-
less required for specification conformity, the design will not change
beyond this phase. During this phase the industrialization process is
far advanced, but hardware might still be built in the development
laboratories.
Industrialization: In the industrialization phase, the production processes
are finalized by production specialists to ease future production. The
design is only modified if necessary and then with approval from all
concerned. Although this phase falls late in the development time-
line, the industrialization personnel are involved in the design from
the very beginning. In this phase, all hardware will be built on the
production line.
System acceptance or qualification: System acceptance is the final evalu-
ation testing where the system is evaluated against the requirement
specification. After acceptance and approval by the user, full-scale
production can begin.
Production, Operational Service Support, Maintenance, and Disposal:
These phases are part of the system lifecycle but are not covered un-
der the present discussion.

Design reviews take place at predetermined times during the prod-
uct lifecycle, most often at the end of each phase. The goals of these
design reviews are to ascertain the design quality and applicability, assess
performance risks, producibility, and several other aspects. There are for-
mal design reviews where representatives from all affected interest groups
partake to assess the impact of the design on their own areas. Informal
peer reviews are also very useful in that a quick answer can be obtained
while working on a problem. Even though design reviews are sometimes
regarded with apprehension and fear, they are very useful learning expe-
riences for a well-prepared designer.

1.2.6 Parallel activities during development

Recall that Figure 1.2 shows four parallel activities during product de-
velopment. The first activity, hardware development, was described in
Section 1.2.5. This hardware development activity is supported by equally
important but less-visible activities.
Product evaluation under operational conditions, e.g., during field tri-
als, is an important reality check because it provides real-world feedback of
the system’s performance. Deficiencies and weaknesses can be identified,
and limits of performance can be evaluated. Several operational-condition
tests are typically executed during the development of a complex system.
The nature of the operational tests shifts from initial exploration to evalu-
ation later in the design phases, leading up to the qualification acceptance
testing. Although laboratory tests are valuable, operational tests build
confidence in the system.
Modeling and simulation provide several benefits; for example, they can
provide a development environment for image-processing algorithms, pro-
vide the capability to evaluate the system under conditions not possible in
the real world, or evaluate different design options. It stands to reason that
the models and simulation must be comprehensive and validated, relative
to the task at hand. It is essential that the design in the simulation match
the design in the hardware — there must be only one design, just different
instances thereof.
Some system environments require ongoing analysis of elements in
the external environment. One such example, the development of missile
countermeasures, requires an ongoing analysis of the operation of missile
threats. The outputs from these analysis tasks are used to influence some
aspect of the design in order to best respond to the external environment.
To obtain maximum benefit from these four development activities,
there should be a constant flow of information. Each of the activities
should constantly re-evaluate its own position in the context of learning
in the other activities. For example, the modeling and simulation activ-
ity should endeavor to most-accurately reflect the hardware status, and
likewise, the hardware activity can learn from simulation experiments.

žŠ•’’ŒŠ’˜— ŠŒŒŽ™Š—ŒŽ Žœ —œŠ••Ž


œŽ› œ¢œŽ–
›Žšž’›Ž–Ž—


¢œŽ– šž ¢œŽ– Žœ

˜—
œ™ŽŒ’’ŒŠ’˜— ’›Ž ¢œŽ–

Š’
–Ž

Ž›
—œ

’—
—Ž›Š’˜— Žœ
‹› Ž

ž‹œ¢œŽ–

›Ž
œ™ŽŒ’’ŒŠ’˜— ž‹œ¢œŽ–œ

 Š
Š”

˜–™˜—Ž—

›
˜ 


Š
Žœ
—

˜–™˜—Ž—
œ™ŽŒ’’ŒŠ’˜— ˜–™˜—Ž—œ

Š—žŠŒž›Ž

Figure 1.3 The systems engineering V-chart for system decomposition and integration.

1.2.7 Specifications

1.2.7.1 Requirement allocation and integration

One of the central themes in system engineering is the concept of a spec-
ification hierarchy. High-level requirements are interpreted, and require-
ments are allocated to lower-level specifications. The divide-and-conquer
theory states that if the subsystems are designed to meet the lower-level
specifications, the integrated hardware should be able to meet the higher-
level specifications. Integration is the process of putting all items together
to form a system at a higher level. This concept is demonstrated in the sys-
tems engineering V-chart,1 shown here in Figure 1.3. Appropriate testing
is done at each level, in accordance with the requirement at that level.
At any level there are requirements (or needs) and real or perceived
restrictions that must be reconciled in the next-lower-level specifications.
Requirements include aspects such as application environment (temper-
ature, pressure, vibration, shock, and humidity), required functions, re-
quired performance, electrical and mechanical interfaces, and so forth. Re-
strictions include aspects such as system restrictions (size, weight, power,
shape), technological limitations, and resource limitations (funding and la-
bor). Restrictions also include requirements imposed by other subsystems,
acting as restrictions on this subsystem. The main objective of specifica-
tion allocation is to break down the large system specification into smaller,
independently definable but interrelated specifications expressible in hard-
ware items that can be laboratory tested and validated.

[Figure 1.4 here: a ladder of specification levels. At each level, requirements
and restrictions feed into a specification; specification allocation flows
downward from the system specification toward the hardware specifications,
and hardware integration flows back upward.]

Figure 1.4 A schematic representation of the requirements allocation process.

The hierarchical allocation from higher to lower levels stops at any
level where the specified item is a well-defined, self-contained item that
can be conveniently managed and tested. The allocated specification should
only contain critical requirements, allowing the designer freedom to ex-
plore alternative options during design. The hardware or software is then
developed to meet the specification. In order to evaluate the performance
of the total system, the individual hardware items are then integrated into
larger systems, upward through the hierarchical specification structure.
The hierarchical approach therefore allows the separation of needs and re-
quirements during the initial phases, and also the integration of hardware
in the advanced development phases.
The process of specification allocation and integration is never termi-
nated. Throughout all development phases the process continues, albeit
with different intensities. This continual process is due to the fact that
new knowledge of the system is accumulated, leading to changes in def-
inition and improved specifications. Sometimes new information reveals
severe restrictions in the system as defined at that time. Sometimes it may
be possible to change the system definition, but if the design is in an ad-
vanced stage, the changes may not be implemented.
The hierarchical allocation process is shown schematically in Figure 1.4.
Note how the specification allocation to lower levels takes place in the
downward direction, whereas hardware integration takes place from the
bottom upward. It often happens that items are operational at a lower
level, but during the integration process, the higher-level system does not
perform as expected. From the foregoing it is clear that the hierarchical
breakdown provides an ordered approach for the design and integration
of complex systems. However, the success of this approach depends totally
on the discipline and diligence of the people employing these methods.

1.2.7.2 Specification practices

A specification for a subsystem or hardware item should be a complete,
clear, and concise statement of the required functionality, minimum ac-
ceptable performance, and restrictions. A requirement that states that a
system must operate under ‘typical continental atmospheric conditions’ is
not a specification but a user need. The specification provides performance
measures that can be tested against. All requirements should be given in
measurable, laboratory-testable quantities. The specification should exist
in version-controlled documents for purposes of future reference and com-
munication. The specification should have sufficient information but not
too much detail or over-specification (unnecessary or overly tight require-
ments). Finally, the specification should be kept current with the project
development status. The practice of specification writing is well docu-
mented in system-engineering literature.1

1.2.8 Performance measures and figures of merit

A figure of merit (FOM) or technical performance measure (TPM) is a
means to express some aspect of the system’s performance. Whereas a
specification is a fixed contractual objective, the FOM/TPM provides mea-
sured or predicted information about the system. FOM/TPMs can be used
at all levels of design to optimize the design at that particular level. Typical
examples include signal-to-noise ratio (SNR) as a function of bandwidth,
detection range under different conditions, or modulation transfer func-
tion (MTF) as a function of aperture diameter.
During the initial stages of design, a number of FOM/TPM measures
are selected to act as indicators of system performance. The first build of
a complex system does not always perform to the final specification. In
such cases the development plan makes provision for poorer performance
in initial stages but reaches the full performance at the end of develop-
ment. The planned relative growth in performance is represented by al-
locating appropriate TPM values at different stages of development. Dur-
ing the various development stages, the designer measures the system’s
FOM/TPM measures and compares them against the planned values. If the
system performs below the planned level, action plans are then initiated
to rectify the situation.


A FOM is typically influenced by a number of different system param-
eters. As a design tool, the sensitivity of the FOM to system parameters
can be studied, and design trade-offs made.
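
As a minimal illustration of such a sensitivity study, the Python sketch
below perturbs each parameter of a hypothetical SNR figure of merit by 1%
and reports the resulting change in the FOM; the FOM expression and the
nominal values are assumptions chosen for demonstration only.

    import numpy as np

    def fom_snr(D_aperture, tau_atm, noise, bandwidth):
        # Hypothetical FOM: SNR grows with aperture area and transmittance,
        # and falls with detector noise and the square root of bandwidth.
        return D_aperture**2 * tau_atm / (noise * np.sqrt(bandwidth))

    nominal = dict(D_aperture=0.1, tau_atm=0.7, noise=1e-3, bandwidth=1e3)
    base = fom_snr(**nominal)
    for name, value in nominal.items():
        bumped = dict(nominal, **{name: 1.01 * value})      # +1% perturbation
        change = 100 * (fom_snr(**bumped) - base) / base    # % change in FOM
        print(f'{name:>10s}: {change:+5.2f}% FOM change per +1% in parameter')

The normalized sensitivities immediately show which parameters repay design
attention: in this assumed model the aperture diameter enters roughly twice
as strongly as transmittance or noise.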

1.2.9 Value systems and design choices

The designer is frequently faced with choices, and the best selection may
not be clear. In this regard a value system is of great help. A value system
is a hierarchical set of priorities and goals that helps the designer to make
difficult choices, by evaluating design options in terms of predetermined
priorities.
To compile the value system, list all of the important issues concerning
the user need, the hardware to be designed, and the design environment.
The aspects to be addressed should include the importance of development
time scale, product cost, performance, local content, reliability, maintain-
ability, upgradeability, etc. Any real value system is further complicated
by peripheral but important issues such as company policy and mission,
labor situation, profit motives, etc. The project value system should not be
contaminated by ulterior motives such as personal gain, particular inter-
ests or disinterests, and ‘old-boys’ understandings.
When faced with design choices, the designer can allocate points to
each option for each entry in the value system. The option that best satisfies
the high-priority values is then the desired option.
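
A minimal Python sketch of such a point-allocation exercise, with entirely
hypothetical criteria, weights, scores, and options:

    # Value system: (criterion, weight) pairs, highest priority first.
    value_system = [('performance', 5), ('development time scale', 4),
                    ('product cost', 3), ('maintainability', 2)]

    # Points (0 to 10) allocated by the designer to each option, per criterion.
    options = {'low-noise receiver': [8, 5, 4, 7],
               'higher-power laser': [7, 7, 3, 5]}

    for name, points in options.items():
        score = sum(weight * p for (_, weight), p in zip(value_system, points))
        print(f'{name}: weighted score {score}')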

1.2.10 Assumptions during design

When faced with problems in an examination, students are often exhorted to make assumptions and continue to find a solution. Assumptions are
part of our everyday life, and people who cannot make or accept assump-
tions are under severe stress. Our readiness to make assumptions numbs
our consideration about the potential consequences of these assumptions.
This is particularly true about technical assumptions because very few de-
signers check all of their assumptions. The problem with assumptions lies
in the fact that erroneous assumptions are only realized when it is too
late, and the cost implications are potentially high. Unchecked assump-
tions can cause inadequate performance, catastrophic failures, and high
redesign costs. Clearly, something must be done to prevent these situa-
tions.
The assumption confirmation procedure proposed here is very effective if applied with diligence and careful thought. The procedure is simple:

1. Write down all assumptions immediately after making them. Write down as many as you can think of; some may seem trivial now but may be important later. Collect all of the assumptions in a central database or document.

2. Prioritize the assumptions by three independent ratings (each on a three-point scale): severity or criticality, urgency, and risk. Add them up to derive the overall priority for that assumption (see the sketch after this list).

3. Review and confirm the top-priority assumptions, and ignore the low-
priority assumptions, as there is not enough time to do them all.

4. Take corrective actions if assumptions were shown to be incorrect.

5. Keep the complete list — never discard any assumptions, however triv-
ial.

6. Ask a colleague or a design-review panel to go over the complete list of the assumptions at regular intervals to confirm your judgment.
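A minimal Python sketch of the ranking in step 2, with hypothetical assumptions and ratings, could look as follows:

    # Hypothetical assumption register: (description, severity, urgency, risk),
    # each rating on a three-point scale.
    assumptions = [
        ('Detector response is linear over the signal range', 2, 1, 2),
        ('Atmospheric transmittance exceeds 0.6 on the test range', 3, 3, 2),
        ('Optics temperature is constant during a measurement', 1, 2, 1),
    ]

    # Overall priority is the sum of the three independent ratings;
    # review the top-priority assumptions first.
    for text, sev, urg, risk in sorted(assumptions, key=lambda a: -(a[1] + a[2] + a[3])):
        print(f'priority {sev + urg + risk}: {text}')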

1.2.11 The design process revisited

The previous sections describe some activities of the technological design process; integration of the various activities is depicted in Figure 1.5. On
first appearance the figure seems unwieldy, but a little study reveals that it
describes the intuitive process that takes place during design. Depending
on the complexity of the design, certain functions may require a formal
action, whereas others may take place in the mind of the designer without
any formal manifestation.
As an example, consider the compilation of a model. The designer
has a certain expectation of the required item performance, and changes
the design until the expectations are fulfilled. If the item is small enough
for the designer to maintain a mental model, there is no need to have the
model in formal mathematical form. In more-complex systems, and in
particular in electro-optical systems, the model can be quite complex, and
a mathematical or numerical implementation is required.
Note the symmetry between modeling activities in the upper half of
the diagram and hardware activities in the lower half of the diagram.
There is a duality between the modeling and hardware activities, where
modeling is used to derive a theoretical solution through analysis while
hardware construction finds a physical solution. The modeling activities
Figure 1.5 Actions and results during the design process. [Flow diagram; actions are indicated by solid boxes, and inputs or results by dashed boxes. The upper half shows modeling activities (interpret the user need, make assumptions, compile and update the model, detailed analysis, predicted performance), and the lower half shows hardware activities (functional design, specification, design choices, detailed design, build, test, and qualify the system), linked by assumption management, comparison, and corrective action.]

normally start before the hardware design activities, but they continue
while the design process takes place, as shown in Figure 1.2. A model is
used to derive the hardware design details, and hardware evaluation tests
are used to confirm and improve the model, in a never-ending cycle. The
parallel development of models and hardware therefore greatly increases
confidence in the system. This is especially true for systems where human
operators or free atmospheric effects are part of the system.
It should be clear that system design is not a simple one-dimensional
problem. There are many complex issues involved in an iterated design
approach, frequently requiring several experimental models before the fi-
nal design is approved. The design process normally stops when the de-
sign is certifiably demonstrated to meet the requirement. The designer is
normally not allowed to have the final say in the acceptance of the design,
but a review panel or advisors grant the final approval. It often happens that the design process continues into production if unexpected problems arise.
Figure 1.6 A generic electro-optical system. [Block diagram: an optical source and an optical receiver linked by the medium; each has its own support equipment and an interface to the greater system.]

1.3 Electro-Optical Systems and System Design

1.3.1 Definition of an electro-optical system

An electro-optical system is defined as a collection of items utilizing optical, physical, and electronic techniques to achieve some goal. Any physical
object or medium that affects the flow of optical energy, or the conversion
between optical and electronic energy, should be considered as part of
the electro-optical system. A generic model of an electro-optical system is
shown in Figure 1.6. The need for the separate identification of an electro-
optical system is driven by the fact that the true effect of an item at the
lowest level is often not fully appreciated. For instance, atmospheric tur-
bulence may be a major factor when determining electronic time constants
in a complex signal processor functionally far removed from the optical
units. It is only when the integrated electro-optical system is analyzed that
some subtle interdependencies can be handled in the design. Note that
the human user is often a critical part of the system, usually as part of the
receiver. This book enables the reader to perform electro-optical systems
design at this higher, integrated level.
Three major components are present:

The Source, comprising the interface with the wider-scope system-support equipment, optical energy source, optics, and packaging. It also includes interference and spurious background sources.

The Medium, comprising all effects that in any way influence the opti-
cal signal. This may include optical fibers, atmospheric attenuation,
dust, smoke, turbulence effects, etc.

The Sensor, comprising the packaging, optics, optical energy detector, support equipment, signal conditioning, functional interface with the rest of the system, and any part of the greater system that may be affected by items in this system. The human may be tasked to detect, recognize, or identify objects in a scene.

Table 1.1 Examples of typical electro-optical systems cast into the generic format. [Table text garbled in extraction; it compares three example systems (a heat-seeking missile, a fiber-optic link, and a free-space laser link) in terms of their source, medium, and sensor characteristics.]

Table 1.1 lists typical examples of electro-optical systems. Note how all of
the systems fit into the basic source–medium–sensor model.

1.3.2 Designing at the electro-optical-system level

The typical activities of a non-trivial electro-optical system design include work at the level of the individual components (source, medium, and sensor) but also at the integrated system level.
The source must be characterized either by measurement or design choice. The signature of sources emitting optical flux must be measured, modeled, and understood. Active sources such as lasers must be designed
to meet specific requirements. The requirement is that the properties of
the source, whatever its nature, must be well understood and quantified.
The medium must be designed to requirement or characterized if it is
not a design item (i.e., the atmosphere). It is important to quantify all key
properties of the medium that may affect the system performance.
The sensor design is normally more directly under the designer’s con-
trol than are the source and medium. The designer must select and op-
timize the optics, spectral filters, detectors, electronics, and packaging to
meet the requirements. In some instances the sensor design must integrate
tightly with other components, such as stabilized gimbal platforms, which
places severe restrictions on the optical design.
At the level of an integrated system, most design and analysis fo-
cuses on system performance. The interdependency between a compo-
nent’s properties and its effect on performance are studied and optimized.
The effect of the medium on system performance must be offset by ap-
propriate design choices in the source or sensor. These effects are only
quantifiable at the system level.
The advantages of designing at the integrated electro-optical system
level include the following: (1) User requirements can be satisfied by trad-
ing off the various items present in the system. For example, the source
output power can be reduced if the sensor can be made more sensitive.
(2) The effective performance of the electro-optical system can only be de-
termined in the context of the system because each item independently
cannot perform the system function. (3) Certain key decisions can only be
made at a system level because they influence all lower-level items. For
example, the pulse width of a laser pulse affects the laser design as well as
the sensor detector and electronics design.

1.3.3 Electro-optical systems modeling and simulation

The multi-disciplinary nature of electro-optical systems makes it very difficult to build a simple analytical model of the whole system. Comprehen-
sive modeling and large-scale simulation 2,3 are often done to create en-
vironments where system-level investigations and design can be executed
(see Appendix B). Maximum benefit can be obtained from such simulation
activities if they are tightly integrated with the hardware design process.
System-level modeling and simulation aids the designer as follows:

1. Optimal specification allocation can be achieved by analyzing the interdependencies between the various components in the system (see Section 1.2.7.1).

2. System performance can be predicted before hardware is built and for scenarios that may not be easily tested in real life.

3. Operational evaluation can be simulated prior to the real-world event, investigating and confirming the test design prior to deployment.

4. In a sufficiently accurate simulation, the simulation provides a development environment, e.g., for image-processing algorithm or mode-control development.

5. Parallel development of the simulation and hardware by separate teams provides mutual insight and peer review to both teams’ work.

6. The simulation provides an easy way to perform system-level tolerance analysis and error-budget analysis on allocated performance specifications, saving iterative hardware trials.

In large and complex systems, the system modeling and simulation effort can be as important to the end goals as the hardware design effort, and substantial time should be invested in this area.

1.4 Conclusion

The principles described in this chapter can significantly ease the design of
complex systems with increased confidence in the system performance. In
practice it is very seldom possible to apply all of the principles described
above due to labor or schedule constraints. It is still useful to consider the
complete process and apply certain aspects in a particular situation. An
analysis of this nature is never complete, but the basic approach outlined
here should be applicable to most electro-optical design situations.

Bibliography
[1] INCOSE, “Systems Engineering Handbook,” http://www.incose.org/ProductsPubs/products/sehandbook.aspx.

[2] DIRSIG Team, “Digital Imaging and Remote Sensing Image Generation
(DIRSIG),” http://www.dirsig.org/.

[3] Willers, M. S. and Willers, C. J., “Key considerations in infrared simulations of the missile-aircraft engagement,” Proc. SPIE 8543, 85430N (2012) [doi: 10.1117/12.974801].
Chapter 2
Introduction to Radiometry

He who loves practice without theory
is like the sailor who boards ship without a rudder and compass
and never knows where he may cast.
Leonardo da Vinci

2.1 Notation

In this book the difference operator ‘d’ is used to denote ‘a small quantity
of ...’. This ‘small quantity’ of one variable is almost always related to
a ‘small quantity’ of another variable in some physical dependency. For
example, irradiance is defined as E = dΦ/dA, which means that a small
amount of flux dΦ impinges on a small area dA, resulting in an irradiance
of E. ‘Small’ is defined as the extent or domain over which the quantity, or
any of its dependent quantities, does not vary significantly. Because any
finite-sized quantity varies over a finite-sized domain, the d operation is
only valid over an infinitely small domain dA = limΔA→0 ΔA.
The difference operator, written in the form of a differential such as
E = dΦ/dA, is not primarily meant to mean differentiation in the math-
ematical sense. Rather, it is used to indicate something that can be inte-
grated (or summed). In this book, differentiation is seldom done, whereas
integration is done on almost every page.
In practice, it is impossible to consider infinitely many, infinitely small
domains. Following the reductionist approach, any real system can, how-
ever, be assembled as the sum of a set of these small domains, by integration over the physical domain as in A = ∫ dA. Hence, the ‘small-quantity’
approach proves very useful to describe and understand the problem,
whereas the real-world solution can be obtained as the sum of a set of
such small quantities. In almost all of the cases in this book, it is implied
that such ‘small-quantity’ domains will be integrated (or summed) over
the (larger) problem domain.


Photon rates are measured in quanta per second. The ‘second’ is an SI unit, whereas quanta is a unitless count: the number of photons. Photon
rate therefore has units of [1/s] or [s−1 ]. This form tends to lose track
of the fact that the number of quanta per second is described. The book
may occasionally contain units of the form [q/s] to emphasize the photon
count. In this case, the ‘q’ is not a formal unit, it is merely a reminder of
‘counts.’ In dimensional analysis (Section 10.6) the ‘q’ is handled the same
as any other unit.
Unless otherwise stated, temperatures are in kelvin [K] (note that the
use of kelvin as temperature is without the word ‘degrees’).

2.2 Introduction

Electromagnetic radiation can be modeled as a number of different phenomena: rays, electromagnetic waves, wavefronts, or particles. All of these
models are mathematically related. The appropriate model to use depends
on the task at hand. Either the electromagnetic wave model (developed by
Maxwell) or the particle model (developed by Einstein) are used when
most appropriate. The electromagnetic spectrum is shown in Figure 2.1.
The electromagnetic wave energy is propagated as alternating, per-
pendicular, sinusoidal electric and magnetic fields [Figure 2.2(a)]. The
electric and magnetic fields are related by Maxwell’s equations 1 such that
a spatially varying electric field causes the magnetic field to change over
time (and vice versa). The interaction between the two fields results in the
propagation of the wave through space, away from the source. In the near
field, close to the source, the wave diverges spatially, and there is a 90-deg
phase shift between the electric and magnetic fields. In the far field, the
wave has negligible divergence, and the magnetic and electric fields are in
phase.
In terms of the wave model [Figure 2.2(c) and (d)], light can be con-
sidered a transverse wave propagating through a medium. The wave has a
wavelength equal to the distance over which the wave’s shape repeats. The
distance along a wave is also measured in phase, where the signal repeats
once with every 2π rad phase increase (phase angle is derived from the
rotating phasor model of a sinusoidal generator). The wave’s wavelength
λ in [m] is related to the frequency ν in [Hz], by its propagation speed c
in [m/s] and the medium’s index of refraction n by c = nλν. For light in
vacuum, the speed c is a universal physical constant.
The wavefront is the wave’s spatial distribution at a given wave phase
angle. An optical ray can be modeled as a vector such that it is perpen-
dicular to the wavefront. The wavefront and ray models are mostly used in aberration calculations during optical design and when laying out an optical system.

Figure 2.1 The electromagnetic spectrum. [Chart relating frequency, wavelength, Wien-law temperature, typical sources of energy, common band names (radio waves, microwaves, infrared, visible, ultraviolet, X rays, gamma rays, cosmic rays), and the physical size of one wavelength.]

Figure 2.2 Simple light models: (a) electromagnetic wave, (b) photon wave packet, (c) far-field, plane wave, and (d) near-field spherical wave. [The sketches show the perpendicular electric and magnetic fields and the direction of travel, an artist’s impression of a wave packet, planar wavefronts with parallel optical rays, and a spherical wavefront with diverging rays; phase increases by 2π per wavelength.]
The photon 2 is a massless elementary particle and acts as the force
carrier for the electromagnetic wave. Photon particles have discrete en-
ergy quanta proportional to the frequency of the electromagnetic energy,
Q = hν = hc/λ, where h is Planck’s constant. Attempting to fuse the par-
ticle model and the wave model, consider the photon as a spatially limited
electromagnetic disturbance (a wave packet) propagating through space.
In this book the photon is sometimes graphically illustrated with the sym-
bol shown in Figure 2.2(b). The figure also shows an artist’s impression of
the photon as a wave packet.
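As a quick numerical check of Q = hν = hc/λ, the short Python sketch below computes the frequency and energy of a 4-µm photon (the wavelength is an arbitrary example; the constants are the SI-defined values):

    h = 6.62607015e-34   # Planck's constant [J.s]
    c = 299792458.0      # speed of light in vacuum [m/s]

    wavelength = 4.0e-6               # example wavelength in [m]
    frequency = c / wavelength        # nu = c/lambda in [Hz] (vacuum, n = 1)
    energy = h * frequency            # photon energy Q = h*nu in [J]
    print(f'nu = {frequency:.3e} Hz, Q = {energy:.3e} J')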
Radiometry is the measurement and calculation of electromagnetic flux
transfer 4–10 for systems operating in the spectral region ranging from ul-
traviolet to microwaves. Indeed, these principles can be applied to elec-
tromagnetic radiation of any wavelength. This book only considers ray-
based 9 radiometry for incoherent radiation fields.

2.3 Radiometry Nomenclature

2.3.1 Definition of quantities

The nomenclature is based on the ISO/CIE and ANSI lighting standards 11–14 and applied in various texts. 4,7,9,10 Radiometric quantities are
denoted by single-character symbols (Roman or Greek letters), such as Q,
Φ, w, I, M, E, and L. These quantities are defined in Table A.1 and shown
in Figure 2.3 (expanded from Pinson 5 ) and Table 2.1. Radiometry nomen-
clature was revised during the 1960s and early 1970s with the result that
some of the older textbooks 15 are left using the older notation (P, W, N,
H).
Radiometric quantities can be defined in terms of three different but
related units: radiant power (watts), photon rates (quanta per second),
or photometric luminosity (lumen). Photometry is radiometry applied to
human visual perception. 5,14,16–18 The conversion from radiometric to pho-
tometric quantities is covered in more detail in Section 2.10. It is important
to realize that the underlying concepts are the same, irrespective of the nature of
the quantity. All of the derivations and examples presented in this book are
equally valid for radiant, photon, or photometric quantities.
Flux is the amount of optical power, a photon rate, or photometric
luminous flux, flowing between two surfaces. There is always a source area
and a receiving area, with the flux flowing between them. All quantities
of flux are denoted by the symbol Φ. The units are [W], [q/s], or [lm],
depending on the nature of the quantity.
Irradiance (areance) is the areal density of flux on the receiving surface
area, as shown in Figure 2.3(a). The flux flows inward onto the surface with
no regard to incoming angular density. All quantities of irradiance are
denoted by the symbol E. The units are [W/m2 ], [q/(s·m2 )], or [lm/m2 ],
depending on the nature of the quantity.
Exitance (areance) is the areal density of flux on the source surface area,
as shown in Figure 2.3(b). The flux flows outward from the surface with
no regard to angular density. The exitance leaving a surface can be due to
reflected light, transmitted light, emitted light, or any combination thereof.
All quantities of exitance are denoted by the symbol M. The units are
[W/m2 ], [q/(s·m2 )], or [lm/m2 ], depending on the nature of the quantity.
Intensity (pointance) is the density of flux over solid angle, as shown in
Figure 2.3(c). The flux flows outward from the source with no regard for
surface area. Intensity is denoted by the symbol I. The human perception
of a point source (e.g., a star at long range) ‘brightness’ is an intensity
measurement. The units are [W/sr], [q/(s·sr)], or [lm/sr], depending on the nature of the quantity.

Figure 2.3 Graphic representation of radiometric quantities: (a) irradiance, (b) exitance, (c) intensity, and (d) radiance. [The panels show: (a) E = dΦ/dA, flux onto a surface area dA from the hemisphere; (b) M = dΦ/dA, flux from a surface area dA into the hemisphere; (c) I = dΦ/dω, flux from a point into a solid angle dω; and (d) L = d²Φ/(dA cos θ dω), flux through an area dA in space into a solid angle dω.]

Table 2.1 Nomenclature for radiometric quantities.

Quantity   | Radiant                        | Photon                                | Photometry
-----------|--------------------------------|---------------------------------------|------------------------------------------
Energy     | Radiant energy Qe, joule [J]   | Photon count Qp, Qq [q]               | Luminous energy Qν, lumen-second [lm·s]
Flux       | Radiant flux Φe, watt [W]      | Photon flux Φp, Φq, photon rate [q/s] | Luminous flux Φν, lumen [lm]
Exitance   | Radiant exitance Me [W/m²]     | Photon exitance Mp, Mq [q/(s·m²)]     | Luminous exitance Mν [lm/m²]
Intensity  | Radiant intensity Ie [W/sr]    | Photon intensity Ip, Iq [q/(s·sr)]    | Luminous intensity Iν, candela = cd = [lm/sr]
Radiance   | Radiance Le [W/(m²·sr)]        | Photon radiance Lp, Lq [q/(s·m²·sr)]  | Luminance Lν, nit = nt = [lm/(m²·sr)]
Irradiance | Radiant irradiance Ee [W/m²]   | Photon irradiance Ep, Eq [q/(s·m²)]   | Illuminance Eν, lux = lx = [lm/m²]
Subscript  | e                              | p or q                                | ν
Radiance (sterance) is the density of flux per unit source surface area
and unit solid angle, as shown in Figure 2.3(d). Radiance is a property
of the electromagnetic field irrespective of spatial location (in a lossless
medium). For a radiating surface, the radiance may comprise transmitted
light, reflected light, emitted light, or any combination thereof. The field’s
radiance applies anywhere in space, also on the receiving surface (see Sec-
tion 2.6.1). The source must have a nonzero size. All radiance quantities
are denoted by the symbol L. The human perception of ‘brightness’ of a
large surface can be likened to a radiance experience (beware of the nonlin-
ear response in the eye, however). The units are [W/(m2 ·sr)], [q/(s·m2 ·sr)],
or [lm/(m2 ·sr)], depending on the nature of the quantity.
When it is necessary to indicate the nature of the interaction of the flux
with a physical entity, this may be done by appending subscripts. The flux
may be incident Φi , reflected Φr , absorbed Φa , transmitted Φt , or emitted
(exitant) Φm . This notation is not standard practice and should be duly
documented when used.

2.3.2 Nature of radiometric quantities

The unit of the quantity is denoted by subscripting the quantity symbol by either an e (radiant energy units [W]), p, q (photon rate units [q/s]), or
ν (photometric units [lm]), as indicated in Table 2.1. If the nature of the
quantity is clear from the context of usage, the subscript is not used, i.e.,
if all calculations are performed in photon rate quantities, no subscript is
used because the dimensional units will indicate [q/s]. It is advisable to
use the subscripts whenever the context could be misleading. In this book,
if no subscripts are used, and the context is not clear, radiant units [W] are
assumed. Each of these quantities is briefly described in Table 2.1.

2.3.3 Spectral quantities

Three spectral domains are commonly used: wavelength λ in [m], frequency ν in [Hz], and wavenumber ν̃ in [cm⁻¹] (the number of waves that
will fit into a 1-cm length). Spectral quantities indicate an amount of the
quantity within a small spectral width dλ around the value of λ: it is a
spectral density. Spectral density quantity symbols are subscripted with a
λ or ν, i.e., Lλ or Lν . The dimensional units of a spectral density quantity
are indicated as [(µm)−1 ] or [(cm−1 )−1 ], i.e., [W/(m2 ·sr·µm)].
The relationship between the wavelength and wavenumber spectral
domains is ν̃ = 10⁴/λ, where λ is in units of µm. The conversion of a spectral density quantity such as [W/(m²·sr·cm⁻¹)] requires the derivative, dν̃ = −10⁴ dλ/λ² = −ν̃² dλ/10⁴. The derivative relationship converts between the spectral widths, and hence the spectral densities, in the two respective domains. The conversion from a wavelength spectral density quantity to a wavenumber spectral density quantity is dLν̃ = dLλ λ²/10⁴ = dLλ 10⁴/ν̃².
The relationships between wavelength, wavenumber, and the derivative Δν̃/Δλ ≈ dν̃/dλ are shown in Figure 2.4. The top graph clearly illustrates how a constant spectral width in the wavelength domain translates to a nonconstant spectral width in the wavenumber domain.

Figure 2.4 The relationship between (a) wavelength and wavenumber, and (b) dν̃/dλ versus wavelength. [Panel (a): wavenumber in cm⁻¹ versus wavelength in µm, showing equal wavelength intervals Δλ mapping to unequal wavenumber intervals Δν̃; panel (b): Δν̃/Δλ in cm⁻¹/µm versus wavelength.]
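These conversions are easily scripted. The following Python sketch, using an arbitrary flat spectrum purely for illustration, converts a wavelength grid to wavenumber and rescales the spectral density by λ²/10⁴:

    import numpy as np

    wl = np.array([3.0, 4.0, 5.0, 10.0])   # wavelength in [um]
    wn = 1.0e4 / wl                        # wavenumber in [cm-1]

    L_wl = np.ones_like(wl)                # flat L_lambda in [W/(m2.sr.um)], illustrative
    L_wn = L_wl * wl**2 / 1.0e4            # L_nu in [W/(m2.sr.cm-1)]
    print(wn)
    print(L_wn)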
Spectral quantities denote the amount in a small spectral width dλ
around a wavelength λ. It follows that the total quantity over a spec-
tral range can be determined by integration (summation) over the spectral
range of interest:

L = ∫_{λ₁}^{λ₂} Lλ dλ.    (2.1)
The above integral satisfies the requirements of dimensional analysis (see Chapter 10) because the units of Lλ are [W/(m²·sr·µm)], whereas dλ has the units of [µm], and L has units of [W/(m²·sr)].
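Numerically, Equation (2.1) is a simple summation. A short Python sketch, using an illustrative Gaussian-shaped spectral radiance (not a physical source model), is:

    import numpy as np

    wl = np.linspace(3.0, 5.0, 501)                # wavelength in [um]
    L_wl = np.exp(-0.5 * ((wl - 4.0) / 0.3)**2)    # illustrative L_lambda [W/(m2.sr.um)]

    L = np.trapz(L_wl, wl)                         # Eq. (2.1): radiance in [W/(m2.sr)]
    print(f'band radiance = {L:.4f} W/(m2.sr)')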

2.3.4 Material properties

The flux incident on an object (gas, liquid, or solid) can be reflected, ab-
sorbed, or transmitted so that 19

Φincident = Φabsorbed + Φtransmitted + Φreflected , (2.2)

and, therefore

1 = α + τ + ρ, (2.3)

where α represents the absorbed fraction, τ the transmitted fraction, and ρ the reflected fraction. Material properties are described in more detail in other publications. 4,20–22
Some texts advocate the terminology where the suffix –ance is used
for the characteristics of a specific sample (such as ‘transmittance of this
filter’), and the suffix –ivity is used for the intrinsic properties of (pure)
materials (such as ‘reflectivity of gold’). This terminology is not standard-
ized 20 and is not strictly followed in this book.

2.4 Linear Angle

Consider the two different linear angles ∠ABC or ∠ABD in Figure 2.5. Consider the angle ∠ABC: project the points A and C onto the circle to obtain the points A′ and C′. Likewise, project the points A and D onto the circle to obtain the points A′ and D′. Hence, the directions from B to points A, C, and D, irrespective of distance, are projected onto a circle with radius r. From the definition of linear angle, the equal value of both these angles is given by θ = s/r in radians, where s is the arc length between the projected points A′ and C′ or D′.
The algorithm for linear angle measurement is therefore: (1) project
the directions of A, C, and D as seen from B onto a circle centered on B,
(2) measure the arc length between the projected points, and (3) divide the
arc length by the circle’s radius.
For a full revolution, the arc length is the circumference of a circle
or 2πr, hence the linear angle of one revolution is 2π rad. A right angle
(90 deg) has an arc length of 2πr/4, leading to a linear angle of π/2 rad.
Figure 2.5 Linear angle. [Points C and D at different distances from B project onto a circle of radius r centered on B; θ = s/r, with s the arc length between the projected points.]

2.5 Solid Angle

2.5.1 Geometric and projected solid angle

The concept of linear angle can be extended to three dimensions by extending the linear angle algorithm to consider area (instead of arc length)
and a sphere’s radius squared (instead of the radius of a circle). Hence,
solid angle is defined as the projected area A of a surface divided by the
square of the distance R to the surface, ω = A cos θ/R2 . Linear and solid
angles are related: the first is a two-dimensional angle (n = 2) defined in
terms of length (linear in Euclidian distance, n − 1 = 1) and the second is
a three-dimensional angle (n = 3) defined in terms of area (quadratic in
Euclidian distance, n − 1 = 2). There are two definitions of solid angle in
radiometry, with a critical difference in meaning and implication: geomet-
ric solid angle and projected solid angle. The construction of these angles
is shown in Figure 2.6.
The geometric solid angle ω of any arbitrary surface P from the refer-
ence point is given by

ω = ∫_{P} d²P cos θ₁ / R²,    (2.4)

where d²P cos θ₁ is the projected surface area of the surface P in the direction of the reference point, and R is the distance from d²P to the reference point. The integral is independent of the viewing direction (θ₀, α₀) from
Figure 2.6 Solid angle definitions: (a) geometric solid angle, and (b) projected solid angle. [In (a) an elemental area d²P at direction (θ₁, α₁) and distance R is seen from a reference point with no weighting; in (b) it is seen from a reference area dA₀, whose projection weights the contribution by cos θ₀.]

the reference point. Hence, a given area at a given distance will always
have the same geometric solid angle irrespective of the direction of the
area.
The projected solid angle Ω of any arbitrary surface P from the refer-
ence area dA₀ is given by

Ω = ∫_{P} d²P cos θ₀ cos θ₁ / R²,    (2.5)

where d²P cos θ₁ is the projected surface area of the surface P in the direction of the reference area, and R is the distance from d²P to the reference area. The integral depends on the viewing direction (θ₀, α₀) from the reference area, by the projected area (dA₀ cos θ₀) of dA₀ in the direction of d²P. Hence, a given area at a given distance will always have a different
projected solid angle in different directions. The calculation of solid angles
can be seen as a form of spatial normalization.

2.5.2 Geometric solid angle of a cone

Consider the left side of Figure 2.7, where a hemisphere with radius r is
constructed around the origin of the coordinate system. The center of the
hemisphere is located at the origin. The geometric solid angle ω, sub-
tended at the origin, of any arbitrary surface P is given by
ω = P′/r²,    (2.6)

where P′ is the projection of the surface P onto a sphere of radius r, as shown in Figure 2.7. The dimensional unit of the solid angle is [m²/m²],
Figure 2.7 Geometric solid angle ω and projected solid angle Ω of a cone. [A hemisphere of radius r is centered on the reference point (left) or on the reference area dA₀ (right); an elemental patch d²P′ = r dθ × h dα lies at polar angle θ and azimuth α, and the right-hand construction weights the patch by cos θ relative to the normal vector of dA₀.]

indicated with [sr], where the numerator is area, and the denominator is
radius squared.
Note that the geometric solid angle is independent of the direction of
the true area P; only the projected area P′ is relevant.
For a cone with half-apex angle of Θ (the full-apex angle is therefore
2Θ), the geometric solid angle can be derived as shown below. A small
portion of the projected surface area d²P′ can be written as

d²P′ = r dθ × h dα = r² sin θ dθ dα,    (2.7)

where h = r sin θ is the radius of the circle of constant θ on the hemisphere.

Apply Equation (2.6) and integrate over 0 ≤ θ ≤ Θ and 0 ≤ α ≤ 2π rad to obtain

ω = ∫_{0}^{2π} ∫_{0}^{Θ} r² sin θ dθ dα / r²
  = 2π ∫_{0}^{Θ} sin θ dθ
  = 2π(1 − cos Θ)
  = 4π sin²(Θ/2).    (2.8)

For Θ = π/2 rad, the cone covers a full hemisphere, and the solid
angle is 2π sr; for Θ = π rad, the full sphere has a solid angle of 4π sr.

2.5.3 Projected solid angle of a cone

The projected solid angle is calculated relative to a small surface dA0 lo-
cated at the origin (not a single point as for the geometric solid angle). The
small surface dA0 has a vector normal to the surface, as shown on the right
side of Figure 2.7.
The projected area P′ is weighted with a factor cos θ, where θ is the angle between the normal vector of the small surface dA₀ and the direction to the projected area P′.
The contribution of the projected area P′ to the solid angle therefore depends on the direction of the projected area P′, relative to the normal vector of the area dA₀. Restated, the direction of the projected area P′ significantly determines its contribution to the solid angle. The term ‘projected solid angle’ stems from the fact that the projection of the small area dA₀ at the origin weights the contribution of the projected area P′. The
reason for doing this is described in Section 2.6.
The projected solid angle is calculated by considering a small area on
the sphere as follows:
d²P′ = r dθ × h dα = r² sin θ dθ dα.    (2.9)

Weigh the surface area with a factor cos θ and integrate over 0 ≤ θ ≤ Θ and 0 ≤ α ≤ 2π rad to obtain

Ω = ∫_{0}^{2π} ∫_{0}^{Θ} cos θ r² sin θ dθ dα / r²    (2.10)
  = 2π ∫_{0}^{Θ} cos θ sin θ dθ
  = π sin² Θ.    (2.11)

Therefore, for a cone with half-apex angle of Θ, the projected solid angle is given by

Ω = π sin² Θ.    (2.12)

For Θ = π/2 rad, the cone covers a full hemisphere, but the projected
solid angle is only π sr, one-half of the geometric solid angle. This dif-
ference is due to the fact that the area d²P′ is weighted by the cosine of θ.
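The closed-form results of Equations (2.8) and (2.12) can be verified by brute-force integration; the following Python sketch does so for a full hemisphere (Θ = π/2 rad):

    import numpy as np

    Theta = np.pi / 2                  # half-apex angle: full hemisphere
    th = np.linspace(0.0, Theta, 100001)

    # Closed forms, Eqs. (2.8) and (2.12), against numerical integration.
    print(4*np.pi*np.sin(Theta/2)**2, 2*np.pi*np.trapz(np.sin(th), th))            # both 2*pi
    print(np.pi*np.sin(Theta)**2,     2*np.pi*np.trapz(np.cos(th)*np.sin(th), th)) # both pi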
For cone half-apex angles Θ < 0.2 rad, the geometric solid angle and
projected solid angle are numerically similar because cos θ is near unity
and the projection effect is negligible. Figure 2.8 shows the geometrical and projected solid angle magnitude for a cone.

Figure 2.8 Comparison of geometrical and projected solid angle of a cone. [Solid angle in sr versus half-apex angle in rad: the geometrical solid angle reaches 2π and the projected solid angle reaches π at a half-apex angle of π/2.]

2.5.4 Geometric solid angle of a flat rectangular surface

The geometric solid angle of a rectangular flat surface, with dimensions W and D, as seen from a reference point centered above the surface (see
Figure 2.9), is determined by the integral of the projected area of a small
elemental area cos θ dd dw across the full size of the surface:

ωs = ∫_{W} ∫_{D} dw dd cos θ / R²
   = ∫_{W} ∫_{D} dw dd cos³ θ / H²
   = ∫_{W} ∫_{D} (dw dd / H²) (H/R)³
   = ∫_{W} ∫_{D} (dw dd / H²) (H / √(w² + d² + H²))³,    (2.13)
where H is the reference point height above the surface. The integral is
performed along the W and D dimensions with increments of dw and dd.
The slant range between the reference point and the elemental area dd × dw
is R = H/ cos θ.

2.5.5 Projected solid angle of a flat rectangular surface

The projected solid angle of a rectangular flat surface, as seen from a ref-
erence area centered above the surface and parallel to the surface, is de-
termined by the integral of the projected area of a small elemental area
Figure 2.9 Solid angle of a centered flat surface. [A reference point at height H above the center of a W × D surface; an elemental area dw × dd lies at angle θ from the vertical.]

cos θ dd dw, weighted by an additional cos θ across the full size of the sur-
face:

Ωs = ∫_{W} ∫_{D} dw dd cos² θ / R²
   = ∫_{W} ∫_{D} dw dd cos⁴ θ / H²
   = ∫_{W} ∫_{D} (dw dd / H²) (H/R)⁴
   = ∫_{W} ∫_{D} (dw dd / H²) (H / √(w² + d² + H²))⁴.    (2.14)

Note the cos⁴ θ term in the integral. At large θ, i.e., when two large surfaces are located relatively close together, the contribution of the extreme areas of the surfaces reduces considerably compared to the central areas. In optical systems, the cos⁴ θ effect results in lower image flux toward large field angles — the image flux is not constant across the whole image field.

2.5.6 Approximation of solid angle

Consider a flat plate with dimensions W and D, tilted at an angle θ with respect to the view direction. It is tempting to simplify the calculation of
Figure 2.10 Projected area of a sphere. [A sphere S of radius r viewed from a point P at distance H from its center; the visible cap subtends a cone of half-apex angle θ with sin θ = r/H, and a disk of radius r at the sphere’s center subtends a half-apex angle α with sin α = r/√(H² + r²).]

the flat plate’s solid angle by considering θ constant, such that

ω = ∫_{W} ∫_{D} cos θ dw dd / R²
  = cos θ W D / R².    (2.15)

This approximation may be acceptable only if R² is much larger than the projected area, such that cos θ approaches a constant value over the full projected area. If the value of cos θ varies appreciably across the projected area, its effect must be included in the integral, as shown in the previous section. Section E.2 investigates the accuracy of the approximation.
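The following Python sketch evaluates Equations (2.13) and (2.14) numerically for an example geometry (the plate size and height are arbitrary) and compares the result with the approximation of Equation (2.15) at normal incidence, W D/H²:

    import numpy as np

    W, D, H = 1.0, 1.0, 2.0                    # example plate size and height [m]
    w = np.linspace(-W/2, W/2, 400)
    d = np.linspace(-D/2, D/2, 400)
    ww, dd = np.meshgrid(w, d)
    R2 = ww**2 + dd**2 + H**2                  # slant range squared
    cos_t = H / np.sqrt(R2)

    dA = (w[1] - w[0]) * (d[1] - d[0])
    omega = np.sum(cos_t / R2) * dA            # Eq. (2.13): geometric solid angle
    Omega = np.sum(cos_t**2 / R2) * dA         # Eq. (2.14): projected solid angle
    print(omega, Omega, W*D/H**2)              # Eq. (2.15) approximation at theta = 0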

2.5.7 Projected area of a sphere

It is commonly assumed that the projected area of a sphere is a circle with the same diameter as the sphere. This is only true when the sphere is observed from infinity. At any closer range, the projected area of the sphere is determined by how much of the sphere is visible. Figure 2.10 shows that the projected area at close range is given by πR², whereas the area of a disk is given by πr², where R < r near the sphere. The solid angle of the sphere at close range can indeed be larger than the solid angle calculated from a disk at the center of the sphere, as shown in the next section.

2.5.8 Projected solid angle of a sphere

When observing sphere S from reference point P in Figure 2.10, only a portion of the sphere is visible. With P closer to S, less of the sphere is visible. As the distance between P and S approaches infinity, a full hemisphere is observed.
The projected solid angle of a cone is given by Ω = π sin² θ, where θ is the half-apex angle. From Figure 2.10 it is evident that for the sphere, the solid angle of the cone defined by the visible portion of the sphere (with half-apex angle θ) is given by

Ωs = π sin²(arcsin(r/H)) = π r²/H².    (2.16)

The solid angle of the cone, defined by the disk at the center of the
sphere with normal vector pointed to P and with half-apex angle α, is
given by

Ωd = π sin²(arcsin(r/√(H² + r²))) = π r²/(H² + r²).    (2.17)

The ratio of sphere solid angle to disk solid angle is then

Ωs/Ωd = (r² + H²)/H².    (2.18)
This ratio is always greater than one, hence the solid angle when viewing
a sphere from a distance is always greater than the solid angle of a disk
at the center of the sphere (both referenced from P ). For increasing H the
ratio approaches unity, but at close range the ratio increases.
For the smallest value of H (H = r) the ratio is two. At this location, the surface of the sphere, the projected solid angle of the sphere is π sr because the sphere appears locally as an infinitely large, almost-planar surface. At the same location, the disk-projected solid angle is π sin²(π/4) = π/2 sr.
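A few lines of Python illustrate Equations (2.16) to (2.18) for an example sphere of unit radius at several ranges:

    import numpy as np

    r = 1.0                                        # sphere radius in [m]
    for H in [1.0, 2.0, 10.0, 100.0]:              # range from sphere center in [m]
        Omega_s = np.pi * r**2 / H**2              # Eq. (2.16): visible cap of sphere
        Omega_d = np.pi * r**2 / (H**2 + r**2)     # Eq. (2.17): disk at sphere center
        print(f'H = {H:6.1f}: ratio = {Omega_s/Omega_d:.4f}')   # Eq. (2.18)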

2.6 Radiance and Flux Transfer

2.6.1 Conservation of radiance

The conservation of radiance 9 is an important, fundamental concept in the understanding of radiometry. In this section, the ‘small area’ (derivative)
notation is used for flux, area, and solid angle, with the understanding
that it applies to small elemental-source and receiver areas. The general
solution for arbitrary source and receiver areas is addressed in Section 6.6.
Figure 2.11 Geometrical construction for radiative flux between two elemental areas. [Elemental areas dA₀ and dA₁ separated by distance R₀₁, with θ₀ and θ₁ the angles between each surface normal and the line of sight.]

Radiance is conserved for flux propagation through a lossless optical medium. 7 Consider the construction in Figure 2.11: two elemental areas
dA0 and dA1 are separated by a distance R01 , with the angles between
the normal vector of each surface and the line of sight given by θ0 and
θ1 . A total flux of Φ is flowing through both the surfaces. Then, from the
definition of radiance in Section 2.3, radiance values at both surfaces are
L₀ = d²Φ / (dA₀ cos θ₀ dΩ₁)    (2.19)

and

L₁ = d²Φ / (dA₁ cos θ₁ dΩ₀).    (2.20)

From the definition of solid angle,


dΩ₁ = dA₁ cos θ₁ / R₀₁²    (2.21)

and

dΩ₀ = dA₀ cos θ₀ / R₀₁².    (2.22)
The same flux flows through both surfaces, and hence the flux in Equa-
tions (2.19) and (2.20) is the same. After mathematical manipulation, it
follows that L0 = L1 . It is important to note that there are no restrictions
on the location of either dA0 or dA1 ; it follows that radiance is spatially in-
variant in any plane in the system (provided that the field is not affected by
an object or medium losses). Radiance is a property of the electromagnetic
field itself; it will be affected by geometrical constructs in its surroundings
but is not dependent on such constructs.
As light propagates through mediums with different refractive indices n such as air, water, glass, etc., the entity called basic radiance, 6,7,9 defined by L/n², is invariant. It can be shown that for light propagating from a medium with refractive index n₁ to a medium with refractive index n₂, the basic radiance is conserved:

L₁/n₁² = L₂/n₂².    (2.23)
Most mediums are not lossless; a medium may attenuate the propagating
flux by removing flux from the beam through scattering or absorption,
or the medium may add flux to the beam through scattering or thermal
exitance. The effects of a lossy medium are discussed in more detail in
Section 4.2.

2.6.2 Flux transfer through a lossless medium

A lossless medium is defined as a medium with no losses between the source and the receiver, such as a complete vacuum. This implies that no absorption, scattering, or any other attenuating mechanism is present in the medium.
Combining Equations (2.19) and (2.21) yields
L = d²Φ R₀₁² / (dA₀ cos θ₀ dA₁ cos θ₁),    (2.24)

or

d²Φ = L dA₀ cos θ₀ dA₁ cos θ₁ / R₀₁².    (2.25)

For a lossless medium, the flux flowing between the source and re-
ceiver is given by the product of the (invariant) radiance and the projected
areas of the source and receiver, divided by the square of the distance
between the areas. Note that on the right of the equation, there is only
one radiometric quantity, L; the remaining quantities are all geometric
(nonradiometric) quantities. Radiometry is therefore as much a study of
geometry as it is of optical flux. Equation (2.25) pertains to the flux flow-
ing through the two surfaces; it does not yet include the effects of source
emissivity or receiver absorption (see Section 3.2).
Equation (2.25) can be used to derive all radiometric quantities as fol-
lows:
Irradiance is derived as (note the cos θ₁ term)

E = dΦ/dA₁ = L dA₀ cos θ₀ cos θ₁ / R₀₁² = L Ω₀ cos θ₁.    (2.26)
Intensity is derived as

I = dΦ R₀₁²/dA₁ = L dA₀ cos θ₀ cos θ₁.    (2.27)

Exitance is derived as (note the cos θ₀ term)

M = dΦ/dA₀ = L dA₁ cos θ₁ cos θ₀ / R₀₁² = L Ω₁ cos θ₀.    (2.28)

A Lambertian source radiates into the full hemisphere with projected solid angle π, so that, for Lambertian sources, Equation (2.28) reduces to M = L π (see Section 2.7).
See Equation (10.6) for a discussion on manipulating the dimensional
units of these equations.
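A minimal Python sketch of Equations (2.25) and (2.26), with arbitrary example values for the radiance and geometry, is:

    import numpy as np

    L = 10.0                                   # radiance in [W/(m2.sr)]
    dA0 = dA1 = 1.0e-4                         # elemental areas in [m2]
    theta0, theta1 = np.deg2rad(30.0), 0.0     # surface-normal angles in [rad]
    R01 = 5.0                                  # separation in [m]

    dPhi = L * dA0*np.cos(theta0) * dA1*np.cos(theta1) / R01**2   # Eq. (2.25), [W]
    E = dPhi / dA1                                                # Eq. (2.26), [W/m2]
    print(f'flux = {dPhi:.3e} W, irradiance = {E:.3e} W/m2')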

2.6.3 Flux transfer through a lossy medium

Denote the medium transmittance by τ₀₁ to indicate that it is the transmittance between location 0 and location 1. The total flux flowing from the source through the receiver area is given by

d²Φ₁ = L₀ dA₀ cos θ₀ dA₁ cos θ₁ τ₀₁ / R₀₁²,    (2.29)

where Φ1 is the flux at dA1 , L0 is the radiance at dA0 , and τ01 is the medium
transmittance between the two elemental areas. In this case radiance is not
invariant because of the medium loss. This simple model is accurate for
most cases where the path radiance contribution is negligible compared to
the flux in the field radiance.

2.6.4 Sources and receivers of arbitrary shape

Equation (2.25) calculates the flux flowing between two infinitely small
areas. The flux flowing between two arbitrary shapes can be calculated
by integrating Equation (2.25) over the source surface and the receiving
surface. In the general case, the radiance L cannot be assumed constant
over A0 , introducing the spatial radiance distribution L(dA0 ) as a factor
into the spatial integral. 9 Likewise, the medium transmittance between
any two areas dA0 and dA1 varies with the spatial locations of dA0 and
dA1 — hence τ01 (dA0 , dA1 ) should also be included in the spatial integral.
The integral can be performed over any arbitrary shape, as shown
in Figure 2.12, supporting the solution to complex geometries. Clearly
Figure 2.12 Radiative flux between areas of arbitrary shape. [Arbitrary areas A₀ and A₁ with elemental areas dA₀ and dA₁ at distance R₀₁ and angles θ₀ and θ₁, linked by the medium transmittance τ₀₁.]

matters such as obscuration and occlusion should be considered when performing this integral:

Φ = ∫_{A₀} ∫_{A₁} L(dA₀) dA₀ cos θ₀ dA₁ cos θ₁ τ₀₁(dA₀, dA₁) / R₀₁².    (2.30)

2.6.5 Multi-spectral flux transfer

The optical power leaving a source undergoes a succession of scaling or ‘spectral filtering’ processes as the flux propagates through the system, as shown in Figure 2.13. This filtering varies with wavelength. Exam-
as shown in Figure 2.13. This filtering varies with wavelength. Exam-
ples of such filters are source emissivity, atmospheric transmittance, opti-
cal filter transmittance, and detector responsivity. The multi-spectral filter
approach described here is conceptually simple but fundamental to the
calculation of radiometric flux. Consider the flow of flux from the source
to the sensor:

1. The most fundamental description of a thermal radiator source state is given by the temperature of the radiating surface and a mathematical function, called Planck’s law. At a given temperature, Planck’s law sets an absolute limit to the source radiance (see Section 3.1).

2. The spectral emissivity of the source acts as a filter by limiting the source radiance. Emissivity can be expressed as a spectral variable between zero and unity (see Section 3.2).

3. The spectral transmittance of the medium or atmosphere acts as a spectral filter (see Chapter 4).
Figure 2.13 Describing the electro-optical system as a thermal source and a series of spectral filters. [Multi-spectral flux propagates from the source to the sensor: a thermal radiator with spectral radiance Lλ is filtered in succession by the source emissivity, the atmospheric transmittance τaλ, the sensor-filter transmittance τsλ, and the detector spectral response; flux is calculated at a single wavelength and then added across all wavelengths.]

4. In some cases the source radiance is reflected from a surface, such as sunlight reflected from the surface of an object. The spectral nature of the reflectance can be considered as a spectral filter (see Section 3.4).

5. The spectral transmittance of the optics/filter in the sensor acts as a filter (see Section 6.4).

6. The detector’s spectral response can be interpreted as a spectral filter (see Chapter 5).

7. The detector converts the optical flux to an electrical signal by the scalar
value of its responsivity (see Chapter 5).

Extend Equation (2.29) for multi-spectral calculations by noting that over a spectral width dλ the radiance is given by L = Lλ dλ:

d³Φλ = L₀λ dA₀ cos θ₀ dA₁ cos θ₁ τ₀₁ dλ / R₀₁²,    (2.31)
where d³Φλ is the total flux in [W] or [q/s] flowing in a spectral width dλ at wavelength λ, from a radiator with radiance L₀λ with units [W/(m²·sr·µm)] and projected surface area dA₀ cos θ₀, through a receiver with projected surface area dA₁ cos θ₁ at a distance R₀₁, with a transmittance of τ₀₁ between the two surfaces. The transmittance τ₀₁ now includes all of the spectral variables in the path between the source and the receiver.
To determine the total flux flowing from elemental area dA0 through
dA1 over a wide spectral width, divide the wide spectral band into a large
number N of narrow widths Δλ at wavelengths λn and add the flux for all
of these narrow bandwidths together as follows:
d²Φ = Σ_{n=0}^{N} L₀λₙ dA₀ cos θ₀ dA₁ cos θ₁ τ₀₁λₙ Δλ / R₀₁².    (2.32)

By the Riemann–Stieltjes theorem in reverse, if now Δλ → 0 and N → ∞, the summation becomes the integral

d²Φ = ∫_{λ₁}^{λ₂} L₀λ dA₀ cos θ₀ dA₁ cos θ₁ τ₀₁λ dλ / R₀₁².    (2.33)

Equation (2.33) describes the total flux at all wavelengths in the spec-
tral range λ1 to λ2 passing through the system. This equation is developed
further in Chapter 7.
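The filtering chain and the band integral of Equation (2.33) translate directly into code. The Python sketch below uses illustrative stand-in spectra (a Gaussian-shaped source radiance and simple emissivity, atmosphere, and filter shapes), not measured data, and assumes normal incidence:

    import numpy as np

    wl = np.linspace(3.0, 5.0, 501)                     # wavelength in [um]

    L_src = np.exp(-0.5 * ((wl - 4.0) / 0.6)**2)        # stand-in source L_lambda
    emis = 0.9 * np.ones_like(wl)                       # stand-in source emissivity
    tau_atmo = np.exp(-0.2 * (wl - 3.0))                # stand-in medium transmittance
    tau_filt = np.where((wl > 3.5) & (wl < 4.5), 0.8, 0.0)  # stand-in sensor filter

    dA0 = dA1 = 1.0e-4                                  # areas in [m2]
    R01 = 100.0                                         # range in [m]

    L_eff = L_src * emis * tau_atmo * tau_filt          # filtered spectral radiance
    Phi = (dA0 * dA1 / R01**2) * np.trapz(L_eff, wl)    # Eq. (2.33): in-band flux [W]
    print(f'in-band flux = {Phi:.3e} W')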

2.7 Lambertian Radiators and the Projected Solid Angle

“A Lambertian source is, by definition, one whose radiance is completely independent of viewing angle.” 7 Many (but not all) rough and natural sur-
faces produce radiation whose radiance is approximately independent of
the angle of observation. These surfaces generally have a rough texture
at microscopic scales. Planck-law blackbody radiators are also Lambertian sources (see Section 3.1). Any Lambertian radiator is completely de-
scribed by its scalar radiance magnitude only, with no angular dependence
in radiance.
The relationship between the exitance and radiance for such a Lam-
bertian surface can be easily derived. If the flux radiated from a Lamber-
tian surface Φ [W] is known, it is a simple matter to calculate the exitance
M = Φ/A [W/m2 ], where A is the radiating surface area. Because radi-
ance has units [W/(m2 ·sr)], it may appear tempting to divide the exitance
by the total hemispherical solid angle to determine the radiance.
The correct relationship between exitance and radiance for a Lamber-
tian source is derived by starting with Equation (2.25):
d²Φ = L dA₀ cos θ₀ dA₁ cos θ₁ / R₀₁².    (2.34)

This equation describes the flux flowing between two small areas. The
source radiates power in all directions, and in order to determine the
power radiated from the source into the hemisphere, one must integrate
the receiver area dA1 cos θ1 over the whole hemisphere. In order to per-
form the integration, construct a hemispherical dome with radius r and
its center at the elemental source area dA0 , as shown in Figure 2.7. The
projected receiver area can be written as

dA1 cos θ1 = r dθ0 × h dα. (2.35)

Assume a constant radiance over the small elemental source area and integrate over the complete hemisphere to obtain

Φ_H = ∫_{0}^{2π} ∫_{0}^{π/2} L dA₀ cos θ₀ r² sin θ₀ dθ₀ dα / r²    (2.36)
    = L A₀ ∫_{0}^{2π} ∫_{0}^{π/2} cos θ₀ sin θ₀ dθ₀ dα
    = L A₀ 2π ∫_{0}^{π/2} cos θ₀ sin θ₀ dθ₀
    = L A₀ π,    (2.37)

then

M = Φ/A₀ = Lπ.    (2.38)
A0
This result indicates that the exitance of a Lambertian radiator is related
to radiance by the projected solid angle [Equation (2.12)] of π sr, not the
geometric solid angle of 2π sr. Why? In Equation (2.36) there is a cos θ0
term describing the projected area of the source. A flat Lambertian source
therefore radiates with a cosine distribution and not isotropically in all direc-
tions. The projected solid angle is effectively calculated by weighting the
projected area of the source with the cos θ0 term; see Section 2.5.3.
The above derivation indicates that one should always use the projected solid
angle instead of the geometric solid angle when dealing with Lambertian sources.
In the event that the solid angle under consideration is less than the full
hemisphere, use the equation presented in Section 2.5.3. Rather than mem-
orizing rules, it is better to perform the calculation from first principles —
it is easier to remember Equation (2.25) than to remember a multitude of
rules.
Consider a Lambertian source as shown in Figure 2.14(a). As the area
dA is rotated, the projected area along the line to the observer decreases
as dA cos θ. For a small source with area dA, when viewed at an arbitrary
angle θ, the intensity varies as I = L dA cos θ. However, in Figure 2.14(b),
Figure 2.14 Lambertian sources: (a) finite-size tilted surface, and (b) infinite-size tilted surface behind a finite-size aperture. [In (a) a tilted area dA presents a projected area dA cos θ; in (b) an infinitely large source S behind an aperture dA always presents an area dA.]

the area S is infinitely large compared with the window opening dA. In
this case the observer always sees an area dA, independent of the angle
by which the source is rotated. In the case of Figure 2.14(b), the observed
intensity is given by I = L dA.

2.8 Spatial View Factor or Configuration Factor

It is evident from Equation (2.25) that the amount of flux transfer between
two surfaces dA0 and dA1 has a geometrical term and a radiometric term.
Equation (2.30) shows that the geometry can be calculated over areas A0
and A1 . A flux transfer calculation is therefore as much a geometrical cal-
culation as it is a radiance calculation. On condition that (1) the source
spatial radiance is uniform, (2) there are no medium losses, and (3) the
receiving spatial area is uniform, the radiance term can be mathemati-
cally separated from the purely geometrical term. The radiometric term
is a function of the field only (irrespective of space), whereas the geomet-
ric term is spatial geometry only (irrespective of radiance field consider-
ations). The calculation of view factors can be seen as a form of spatial
normalization.
The efficient transfer of heat is of prime importance when designing
furnaces. The heat-transfer community developed a detailed mathematical
concept for spatial geometric integrals, called the configuration factor, view
factor, diffuse shape factor, or angle factor. 23 Tables of configuration factor
values are precalculated for given geometrical configurations.
Assuming diffuse Lambertian surfaces for A0 and A1 , the view factor
is the portion of all of the flux (in the hemisphere, hs) leaving A0 that
passes through A₁ and is given by

F_{A₀→A₁} = [∫_{A₀} ∫_{A₁} L₀ cos θ₀ dA₀ cos θ₁ dA₁ / R₀₁²] / [∫_{A₀} L₀ dA₀ (∫_{hs} cos θ₀ cos θ₁ dA₁ / R₀₁²)]
          = [∫_{A₀} ∫_{A₁} L₀ cos θ₀ dA₀ cos θ₁ dA₁ / R₀₁²] / [π ∫_{A₀} L₀ dA₀],    (2.39)

where ∫_{hs} cos θ₀ cos θ₁ dA₁/R₀₁² = π sr is the solid angle integral yielding the projected solid angle of the hemisphere centered at dA₀. The view factor has units of [sr/sr]. In Equation (2.39) the radiance field L is allowed to vary across the area A₀, but if it is constant, L(A₀) = c, it follows that the view factor for two small areas dA₀ and dA₁ is given by 23

F_{dA₀→dA₁} = cos θ₀ cos θ₁ dA₁ / (π R₀₁²).    (2.40)
With the view factor known (probably from precalculated tables), the flux transfer from surface A0 to A1 can be determined by

$$ \Phi_{A_0 \to A_1} = \pi L_0 A_0 F_{A_0 \to A_1}. \tag{2.41} $$
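As an illustration of Equations (2.40) and (2.41), the following minimal Python sketch (not from the book's code listings; all numerical values are hypothetical) computes the view factor between two small coaxial areas and the resulting flux transfer:

```python
import numpy as np

# View factor between two small, parallel, coaxial areas, Equation (2.40),
# and the flux transfer from dA0 to dA1, Equation (2.41).
# All numerical values below are hypothetical illustration values.
L0 = 100.0       # source radiance [W/(m2.sr)]
dA0 = 1e-4       # source area [m2]
dA1 = 1e-4       # receiver area [m2]
R01 = 1.0        # distance between the areas [m]
theta0 = theta1 = 0.0   # both areas face each other squarely

# Equation (2.40): view factor for two small areas
F01 = np.cos(theta0) * np.cos(theta1) * dA1 / (np.pi * R01 ** 2)

# Equation (2.41): flux transferred from dA0 to dA1
phi = np.pi * L0 * dA0 * F01
print(F01, phi)   # ~3.18e-05 sr/sr, ~1e-06 W (equals L0*dA0*dA1/R01**2)
```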

2.9 Shape of the Radiator

This section shows that the three-dimensional surface profile of a Lambertian radiator does not affect the ratio of radiance to exitance. Only a disk and a sphere are considered here but, in principle, two radiators emitting the same flux but with different three-dimensional profiles will have the same ratio, provided that the projected areas of the two shapes are the same.

2.9.1 A disk

Consider a disk uniformly radiating a total flux of Φ into the hemisphere. From the definitions of radiance and exitance, it follows that

$$ L = \frac{\Phi}{\int dA_0 \cos\theta_0\, d\Omega_1} \tag{2.42} $$

and

$$ M = \frac{\Phi}{\int dA_0}. \tag{2.43} $$

Hence,

$$ \frac{M}{L} = \frac{\int dA_0 \cos\theta_0\, d\Omega_1}{\int dA_0} = \frac{A_0 \int \cos\theta_0\, d\Omega_1}{A_0} = \int \cos\theta_0\, d\Omega_1, \tag{2.44} $$
where θ0 is the angle of dΩ1 with respect to the disk normal vector. In the construction of Figure 2.7, $d\Omega_1 = d^2P/r^2 = d\theta \sin\theta\, d\alpha$. Integrate over 0 ≤ θ ≤ π/2 rad and 0 ≤ α ≤ 2π rad to obtain

$$ \frac{M}{L} = \int_0^{2\pi}\!\!\int_0^{\pi/2} \cos\theta \sin\theta\, d\theta\, d\alpha = 2\pi \int_0^{\pi/2} \sin\theta \cos\theta\, d\theta = \pi \sin^2(\pi/2) = \pi, \tag{2.45} $$

which is just another way to derive Equation (2.38).

2.9.2 A sphere

Consider a sphere with radius r uniformly radiating a total flux of Φ into a full sphere. This amount of flux is equally divided between the visible side of the sphere and the ‘dark side’ of the sphere. The radiance must be calculated only from the visible side of the sphere. Again using the definitions of radiance and exitance, it follows that

$$ \frac{M}{L} = \frac{\int dA_0 \cos\theta_0\, d\Omega_1}{\int dA_0} = \frac{2\pi r^2 \int d\Omega_1}{4\pi r^2} = \frac{1}{2}\int d\Omega_1. \tag{2.46} $$

In the construction of Figure 2.7, $d\Omega_1 = d^2P/r^2 = d\theta \sin\theta\, d\alpha$. Integrate over 0 ≤ θ ≤ π/2 rad (only a hemisphere is visible to contribute to radiance) and 0 ≤ α ≤ 2π rad to obtain

$$ \frac{M}{L} = \frac{1}{2} \int_0^{2\pi}\!\!\int_0^{\pi/2} \sin\theta\, d\theta\, d\alpha = \pi \int_0^{\pi/2} \sin\theta\, d\theta = 2\pi \sin^2(\pi/4) = \pi. \tag{2.47} $$

Considering that at any moment only half of a sphere is visible, this agrees with the ratio of radiance to exitance derived in the previous section.
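The π result of Equations (2.45) and (2.47) is easily confirmed numerically; the following minimal sketch (an illustration, not part of the book's code listings) integrates the cosine-weighted solid angle over the hemisphere:

```python
import numpy as np
from scipy import integrate

# Integrate cos(theta) dOmega = cos(theta) sin(theta) dtheta dalpha over
# the hemisphere; the result should equal pi, as in Equation (2.45).
ratio, err = integrate.dblquad(
    lambda theta, alpha: np.cos(theta) * np.sin(theta),
    0.0, 2.0 * np.pi,     # alpha: azimuth range
    0.0, np.pi / 2.0)     # theta: zenith range
print(ratio, np.pi)       # both ~3.141592653589793
```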

2.10 Photometry and Color

2.10.1 Photometry units

Photometric quantities are not specified in spectral variables (e.g., Planck’s law) but rather as normalized variables in terms of a standard source and a standard eye spectral response. The standard source is a blackbody radiator at a temperature of 2042 K (the solidifying point of platinum), with a luminance of 6 × 10⁵ candela/m². The standard eye response is described in more detail in the next section. Luminous intensity is expressed in candela.

2.10.2 Eye spectral response

The normalized spectral response of the eye is called the relative luminous
efficiency. The exact shape of the relative luminous efficiency depends on
the light level. The two extremes of relative luminous efficiency are known
as photopic (high luminance levels) and scotopic (low luminance levels)
luminous efficiencies. Unless otherwise specified, photometric values are
normally specified for the photopic spectral response.
If the luminance exceeds about 3 lm/(m2 ·sr), the eye is light-adapted,
and the cones in the retina are operating. Under light-adapted conditions,
the eye’s spectral response is called photopic. In photopic vision, the eye
has color discrimination and acute foveal vision. The standard photopic
luminous efficiency is shown in Figure 2.15 and Table A.4. The spectral
shape of photopic luminous efficiency is defined in tabular form 24,25 but
can be roughly approximated by

$$ V_\lambda = 1.019 \exp\left(-285.51(\lambda - 0.5591)^2\right). \tag{2.48} $$

If the luminance is less than 3×10−5 lm/(m2 ·sr), the eye is dark-
adapted. The cones are no longer sensitive, and the rods sense the light.
Under dark-adapted conditions, the eye’s spectral response is called sco-
topic. Under scotopic vision, the eye is not sensitive to color and has no
foveal vision. The standard scotopic luminous efficiency is shown in Fig-
ure 2.15 and Table A.4. The spectral shape of scotopic luminous efficiency
is defined in tabular form 24,25 but can be roughly approximated by

$$ V'_\lambda = 0.99234 \exp\left(-321.1(\lambda - 0.5028)^2\right). \tag{2.49} $$

Equations (2.48) and (2.49) are not accurate at the extreme wavelength
limits of the spectral bands. These approximations should be used with
care if the source has significant amounts of flux at the wavelength lim-
its. An example of one such case is the eye viewing a blackbody at low
temperatures.
At luminance levels between 3×10−5 lm/(m2 ·sr) and 3 lm/(m2 ·sr),
the spectral response is somewhere between the photopic and scotopic,
referred to as mesopic vision.
Figure 2.15 Relative luminous efficiency Vλ of the human eye (photopic and scotopic).
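The two approximations are simple to evaluate; the following minimal sketch (wavelength in µm) compares Equations (2.48) and (2.49), bearing in mind the band-edge caution above:

```python
import numpy as np

# Photopic and scotopic luminous-efficiency approximations,
# Equations (2.48) and (2.49); wavelength wl is in [um].
wl = np.linspace(0.38, 0.75, 371)
v_photopic = 1.019 * np.exp(-285.51 * (wl - 0.5591) ** 2)
v_scotopic = 0.99234 * np.exp(-321.1 * (wl - 0.5028) ** 2)
print(wl[np.argmax(v_photopic)])   # peak near 0.559 um
print(wl[np.argmax(v_scotopic)])   # peak near 0.503 um
```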

2.10.3 Conversion to photometric units

The conversion from radiometric units (watts) to photometric units (lumens) is easily performed if the flux’s spectral properties are known. This section considers radiance and luminance, but the reader should note that any radiometric quantity can be used, e.g., flux or irradiance.
The photopic luminance of a source is defined as

$$ L_\nu = K_{\rm max} \int_0^\infty V_\lambda L_{e\lambda}\, d\lambda, \tag{2.50} $$

where $K_\lambda = K_{\rm max} V_\lambda$ is the spectral photopic efficacy, $V_\lambda$ is the photopic efficiency, $K_{\rm max} = 683$ lm/W is the maximum value of photopic efficacy, referenced to a 2042-K blackbody standard source, and $L_{e\lambda}$ is the source’s radiance.
Likewise, the scotopic luminance of a source is defined as

$$ L'_\nu = K'_{\rm max} \int_0^\infty V'_\lambda L_{e\lambda}\, d\lambda, \tag{2.51} $$

where $K'_\lambda = K'_{\rm max} V'_\lambda$ is the spectral scotopic efficacy, $V'_\lambda$ is the scotopic efficiency, $K'_{\rm max} = 1700$ lm/W is the maximum value of scotopic efficacy referenced to a 2042-K blackbody standard source, and $L_{e\lambda}$ is the source’s radiance.
Literature sometimes shows different values for $K_{\rm max}$ and $K'_{\rm max}$ because the values changed every time a new standard source was instituted or a modification was made to the efficiency curves.

Efficacy can be defined as a spectral variable, as in the previous equations, or over a wide spectral band. The total (wideband) luminous efficacy is given by the ratio of total luminance to total radiance:

$$ K = \frac{L_\nu}{L_e} \tag{2.52} $$
$$ = \frac{\int_0^\infty K_{\rm max} V_\lambda L_{e\lambda}\, d\lambda}{\int_0^\infty L_{e\lambda}\, d\lambda} \tag{2.53} $$
$$ = \frac{K_{\rm max} \int_0^\infty V_\lambda L_{e\lambda}\, d\lambda}{\int_0^\infty L_{e\lambda}\, d\lambda}. \tag{2.54} $$
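A minimal numerical sketch of Equations (2.50) and (2.54) follows (not the pyradi implementation; the flat spectrum Le and the Gaussian Vλ approximation of Equation (2.48) are illustrative assumptions only):

```python
import numpy as np

# Photopic luminance, Equation (2.50), and wideband luminous efficacy,
# Equation (2.54), by simple rectangular-rule spectral integration.
Kmax = 683.0                          # [lm/W] maximum photopic efficacy
wl = np.linspace(0.38, 0.75, 371)     # wavelength [um]
dwl = wl[1] - wl[0]
V = 1.019 * np.exp(-285.51 * (wl - 0.5591) ** 2)   # Equation (2.48)
Le = np.ones_like(wl)   # hypothetical flat spectral radiance [W/(m2.sr.um)]

Lv = Kmax * np.sum(V * Le) * dwl         # luminance [lm/(m2.sr)]
K = Kmax * np.sum(V * Le) / np.sum(Le)   # wideband efficacy [lm/W]
print(Lv, K)   # ~73 lm/(m2.sr) and ~196 lm/W for this flat spectrum
```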

2.10.4 Brief introduction to color coordinates

There is a multitude of color-space definitions, each optimized for different applications. The single example provided here only serves to illustrate the key concept that the calculation of color coordinates is essentially a radiometric calculation involving normalization with given spectral weights. See Section 7.2 for more examples of normalization.

One common color space is the CIE 1931 tristimulus values XYZ, or the xyY chromaticity color space. 24,26–28 To calculate the xyY color coordinates of a color given the spectrum, proceed as follows:

$$ X(T) = \int_0^\infty \bar{x}_\lambda L_\lambda\, d\lambda, \tag{2.55} $$
$$ Y(T) = \int_0^\infty \bar{y}_\lambda L_\lambda\, d\lambda, \ {\rm and} \tag{2.56} $$
$$ Z(T) = \int_0^\infty \bar{z}_\lambda L_\lambda\, d\lambda, \tag{2.57} $$

where $\bar{x}_\lambda$, $\bar{y}_\lambda$, and $\bar{z}_\lambda$ are the color-matching functions of the CIE standard colorimetric observer, as shown in Figure 2.16. The xyz chromaticity color coordinates can then be calculated by

$$ x = \frac{X}{X+Y+Z}, \tag{2.58} $$
$$ y = \frac{Y}{X+Y+Z}, \ {\rm and} \tag{2.59} $$
$$ z = \frac{Z}{X+Y+Z} = 1 - x - y, \tag{2.60} $$
where x and y define the color coordinates in the xy chromaticity diagram
(Figure A.1). Valid color coordinates are all inside the closed curve. The
U- or dome-shaped part of the perimeter describes the monochromatic
rainbow colors, calculated from the above equations with Lλ = 1. The
color of a Planck radiator (see Section 3.1) can be calculated from the above
Figure 2.16 CIE standard observer color-matching functions.

equations and Equation (3.1), with $L_\lambda = M_{e\lambda}(T)/\pi$. This is known as the Planckian locus. 27 The coordinates of a few selected monochromatic wavelengths and the Planckian locus are shown in Figure A.1.
The intrinsic color of a surface is not necessarily the color observed in
reflected light; the color coordinates of reflected light are affected by the
spectral illumination onto the surface. Although there may be a surface
with a perfect white color (reflecting all light equally), there is no perfect
white illuminating source. 29 Hence, for different illuminating sources, the
color white has many different color coordinate representations, known
as ‘white points’. 30,31 It was noted 29 that Planck radiators appear almost
equally white at all temperatures above 2000 K.
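The calculation in Equations (2.55)–(2.60) reduces to three weighted spectral integrals and two ratios. The sketch below illustrates the mechanics only: the single-lobe Gaussian stand-ins for the color-matching functions are rough assumptions (the real CIE functions are tabulated, e.g., in the pyradi data set or at cvrl.org), and the flat spectrum is hypothetical:

```python
import numpy as np

# Tristimulus values and xy chromaticity coordinates from a spectrum.
# The Gaussian 'colour-matching functions' below are crude stand-ins for
# the tabulated CIE data and serve only to show the normalization.
wl = np.linspace(0.38, 0.75, 371)                        # wavelength [um]
xbar = 1.06 * np.exp(-0.5 * ((wl - 0.600) / 0.038) ** 2)
ybar = 1.00 * np.exp(-0.5 * ((wl - 0.555) / 0.045) ** 2)
zbar = 1.78 * np.exp(-0.5 * ((wl - 0.447) / 0.028) ** 2)
Le = np.ones_like(wl)                                    # hypothetical spectrum

X = np.sum(xbar * Le)   # Equation (2.55); common dwl factor cancels below
Y = np.sum(ybar * Le)   # Equation (2.56)
Z = np.sum(zbar * Le)   # Equation (2.57)
x = X / (X + Y + Z)     # Equation (2.58)
y = Y / (X + Y + Z)     # Equation (2.59)
print(x, y, 1 - x - y)  # z follows from Equation (2.60)
```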

2.10.5 Color-coordinate sensitivity to source spectrum

Color is an elusive property — different people perceive color differently, and the apparent color of an object depends on the illuminance spectrum. This section explores these subtleties as an application of normalizing and radiometric concepts rather than the human perception of color. The data for this analysis are available in the pyradi 25 data set.
Four sources are considered: the first light source is a ‘daylight’ flu-
orescent light source, the second source is the sun modeled as a thermal
radiator at 5900 K, the third source is an incandescent light globe at a tem-
perature of 2850 K, and the fourth source is a low-pressure sodium lamp.
The sources’ normalized radiances are shown in Figure 2.17.
The samples illuminated by the sources are a red tomato, lettuce, a yel-
low prune, a dark-green leaf, a blue Nitrile (latex-like) surgical glove, and
standard white printing paper. Figure 2.18 shows the spectral reflectance
Figure 2.17 Normalized source radiance (fluorescent; sun, 5900 K; incandescent, 2850 K; sodium lamp).

Figure 2.18 Reflectance of selected objects in the visual band.

Figure 2.19 Sample color coordinates under different illumination sources (Fl - fluorescent; Sun - sun, 5900 K; Inc - lamp, 2850 K; Na - sodium lamp).

of the samples. These diffuse reflection spectra were measured with a spectroradiometer, illuminating the sample with a bright light at short distance. The fruit samples all demonstrated considerable light propagation deeper into the fruit. The blue glove was located on top of a Spectralon white reference (note the considerable ‘white’ reflectance beyond 0.55 µm).
The color coordinates of the samples, when illuminated with the var-
ious sources, were calculated using Equations (2.57) and (2.60), using the
code in Section D.5.4. The results are shown in Figure 2.19. It is evident
that the fluorescent source is a reasonable match to daylight illumination;
however, the spectral peaks in the fluorescent radiance do result in small
shifts of the color coordinates relative to the sun as reference. The samples’
color coordinates shifted considerably toward the orange–yellow under in-
candescent illumination. Finally, when illuminated by the sodium lamp,
all samples had virtually the same color, 589 nm — the wavelength of the
near-monochromatic illumination.

Bibliography

[1] Wikipedia, “Electromagnetic radiation,” http://en.wikipedia.org/wiki/Electromagnetic_radiation.
[2] Wikipedia, “Photon,” http://en.wikipedia.org/wiki/Photon.
[3] Casagrande, M., “Birth of a Photon,” http://casagrandetext.blogspot.com/2011/01/birth-of-photon.html.
[4] Wolfe, W. L. and Zissis, G., The Infrared Handbook, Office of Naval Research, US Navy, Infrared Information and Analysis Center, Environmental Research Institute of Michigan (1978).
[5] Pinson, L. J., Electro-Optics, John Wiley & Sons, New York (1985) [ISBN 0-471-88142-2].
[6] Wyatt, C. L., Radiometric Calibration: Theory and Methods, Academic Press, New York (1978).
[7] Boyd, R. W., Radiometry and the Detection of Optical Radiation, John Wiley & Sons, New York (1983).
[8] Bayley, F. J., Owen, J. M., and Turner, A. B., Heat Transfer, Nelson Publishers, London (1972).
[9] Nicodemus, F. E., NIST Self-Study Manual on Optical Radiation Measurements, NIST, Washington, D.C. (1976).
[10] Wyatt, C. L., Radiometric System Design, Macmillan Publishing Company, New York (1987).
[11] “International Lighting Vocabulary,” Tech. Rep. CIE No. 17 (E-1.1), Bureau Centrale de la CIE (1970).
[12] “International Lighting Vocabulary,” Tech. Rep. 17.4-1987, Bureau Centrale de la CIE (1987).
[13] “American National Standard Nomenclature and Definitions for Illuminating Engineering,” Tech. Rep. ANSI Z7.1-1967, Illuminating Engineering Society (1967).
[14] “American National Standard Nomenclature and Definitions for Illuminating Engineering,” Tech. Rep. ANSI/IES RP-16-1980, Illuminating Engineering Society (1981).
[15] Hudson, R. D., Infrared System Engineering, Wiley-Interscience, New York (1969).
[16] RCA Corporation, RCA Electro-Optics Handbook, no. 11 in EOH, Burle (1974).
[17] Edwards, I., “The Nomenclature of Radiometry and Photometry,” Laser & Optronics Magazine 8(8), 37–42 (August 1989).
[18] Roberts, D. A., “Radiometry & Photometry: Lab Notes on Units,” Photonics Spectra Magazine 4, 59–63 (April 1987).
[19] Dresselhaus, M. S., “Solid State Physics (Four Parts),” http://web.mit.edu/afs/athena/course/6/6.732/www/texts.html.
[20] Palmer, J. M. and Grant, B. G., The Art of Radiometry, SPIE Press, Bellingham, WA (2009) [doi: 10.1117/3.798237].
[21] Accetta, J. S. and Shumaker, D. L., Eds., The Infrared and Electro-Optical Systems Handbook (8 Volumes), ERIM and SPIE Press, Bellingham, WA (1993).
[22] Palik, E. D., Ed., Handbook of Optical Constants of Solids, Academic Press, San Diego, CA (1998).
[23] Modest, M. F., Radiative Heat Transfer, Academic Press, San Diego, CA (2003).
[24] Colour & Vision Research Laboratory, “Colour and Vision Database,” http://www.cvrl.org/index.htm.
[25] Pyradi team, “Pyradi data,” https://code.google.com/p/pyradi/source/browse.
[26] Schanda, J., Ed., Colorimetry: Understanding the CIE System, Wiley-Interscience, New York (2007).
[27] Wikipedia, “Planckian locus,” http://en.wikipedia.org/wiki/Planckian_locus.
[28] Wikipedia, “CIE 1931 color space,” http://en.wikipedia.org/wiki/CIE_1931_color_space.
[29] Kirk, R., “Standard Colour Spaces,” Technical Note FL-TL-TN-0139-StdColourSpaces, FilmLight Digital Film Technology (2007).
[30] Wikipedia, “Standard illuminant,” http://en.wikipedia.org/wiki/Standard_illuminant.
[31] Wikipedia, “White point,” http://en.wikipedia.org/wiki/White_point.

Problems

2.1 When light interacts with an optical medium on a macroscopic level, there are three potential effects. Describe the three effects that a medium can have on a beam of light and show the mathematical relationship between these effects. [2]

2.2 Given the two equations $4\pi \sin^2(\theta/2)$ and $\pi \sin^2\theta$, explain what they mean and which is which. [2]
Calculate the geometric solid angle and projected solid angle for
the following half-apex angles: θ ∈ {0, 0.01, 0.1, 1, π/2} rad. Com-
pare these values in a table and explain why they are different.
[2]
2.3 A room has floor dimensions of 5 m by 5 m. The roof is 3.5 m
above the floor. There is a sensor mounted exactly in the center of
the roof. Calculate both the geometric and projected solid angles
of the floor as seen from the location of the sensor. Draw a picture
of the room, showing all of the details. Derive a mathematical de-
scription of the solid angle. Calculate the solid angle numerically
(not analytically). [10]
2.4 Starting from first principles, derive the solid angle through the
optical aperture (opening) of a Cassegrain telescope, as seen from
the detector in the focal plane. Provide all steps of all mathematical derivations. The telescope is as follows: (figure: a Cassegrain telescope with primary mirror of diameter dp, secondary mirror of diameter ds, and the detector in the focal plane).

2.5 Describe, in graphical and mathematical terms, how the flux trans-
fer between two arbitrary objects can be calculated. [4]
2.6 A circular disk and sphere of radii r are located at a distance R
from an infinitely large wall, with uniform radiance L. The normal
vector from the disk is normal to the wall. Derive mathematical
formulations that describe the flux flowing from the wall through
the disk and sphere. [6]

(Figure: a disk and a sphere, each of radius r, at distance R from the wall; elemental areas dA0 on the wall and dA1 on the disk/sphere, with a tilted wall element dA′1 at angle θ0 along slant range R/cos θ0.)

2.7 Show mathematically that the irradiance from a Lambertian radiating sphere with radius r is the same as the irradiance of a perpendicularly viewed, Lambertian, radiating circular disk of the same radius (viewed from the same distance) when the distance approaches infinity R → ∞. To assist in the answer, construct a ‘measurement’ sphere around the receiving area and consider the projected areas onto this ‘measurement’ sphere from both the disk and the radiating sphere. [8]
Explain why the sun and moon seem to have a uniform radiance when viewed from earth, even though these bodies are (approximately) spherical. [2]
Perform an Internet search and confirm whether the above statement is true. Support your conclusion with your research findings. [2]
2.8 A sphere with a radius r is a distance R from an infinitely large
wall, with uniform radiance L. Compute the flux transfer at a
given wavelength λ between the sphere and the wall. Do not do
the calculation numerically; only show the solution in mathemati-
cal terms. [4]
Extend the solution above and show how the flux transferred in
the spectral band λ1 to λ2 can be calculated if the medium between
the sphere and wall has a transmittance of τλ . [2]
2.9 Considering the results in Figure 2.19, the color value (darkness)
of the sample does not seem to affect the color coordinates.

2.9.1 Although not shown in the graph, it could well be that a dark-
green leaf has the same color coordinates as an apparently lighter-
green leaf. Consider the concept that artists call the ‘value’ of a
color. To what extent are the color coordinates affected by the
value of the color? [2]
2.9.2 Review Equations (2.57) and (2.60) and consider the effect of ab-
solute irradiance on a sample on the color coordinate of that sam-
ple. Do the color coordinates change with respect to illumination
level? Support your answer with proper mathematical derivation.
To what extent will light leakage into the sample affect the mea-
surement of absolute spectral reflectance? What effect will this
have on the color coordinates of the sample? [4]

2.10 A cylindrical object has a diameter-to-length aspect ratio of 1:10. Derive a mathematical formulation of the object’s geometrical solid angle as a function of aspect angle, in the plane containing the rotational centerline of the object. You may assume the object distance to be more than 1000 times the cylinder’s diameter. Calculate and plot the solid angle at 5-deg intervals around the object. [10]
2.11 Derive a mathematical formulation for the solid angle of a sphere
with radius r as seen from a distance d measured from the sphere’s
center. Plot the sphere’s solid angle for this distance d ranging
from 0 to 10r. [3]
Chapter 3
Sources

There could be no fairer destiny for any physical theory
than that it should point the way to a more comprehensive theory
in which it lives on as a limiting case.
Albert Einstein

3.1 Planck Radiators

The Planck radiation law is derived in detail in several references. 1–3 The
discussion presented here aims to convey key insights, rather than rigorous
mathematical terminology.
Physical matter (atoms and molecules) at nonzero absolute temperature (T > 0 K) emits and absorbs electromagnetic radiation. Under ther-
modynamic equilibrium, the incident electromagnetic field and the atoms
are in continual energy exchange, mutually sustaining each other’s energy
state by photon emission and absorption. If either the electromagnetic field
intensity or the object’s temperature should change, the energy exchange
will adjust until thermodynamic equilibrium is re-established.
Perfect thermodynamic equilibrium could exist in an enclosed cavity
with the homogenous walls at uniform temperature, such as inside a hol-
low ball [shown in Figure 3.1(a)]. As shown in the figure, there are two
sources of energy: the radiation field [indicated by (1)] and the cavity walls
[indicated by (2)]. At any single frequency ν the electromagnetic radiation
field is sustained by the wall photon emission [indicated by (3)] at a single
energy transition hν at atomic level. Likewise, atoms radiate photons at a
specific frequency corresponding with the energy transition in the atom,
hν. The wall’s kinetic temperature T is sustained by absorption of photons
from the electromagnetic field [indicated by (4)].
Thermodynamic equilibrium means that there is zero energy flow Q̇
and zero mass flow ṁ across the system’s boundary [indicated by (5)].
Hence, the only energy exchange is between the radiation field and the

Figure 3.1 Concepts behind Planck’s law and thermal radiation: (a) closed cavity, (b) initial non-equilibrium, and (c) permanent non-equilibrium.

cavity walls. Einstein showed 4 that the principle of detailed balance re-
quires that the processes of spontaneous emission, stimulated emission,
and photo-absorption [indicated by (3) and (4)] are in equilibrium. These
three processes ensure equilibrium at a single frequency for a given photon
but also collectively for all photons at all frequencies.
The permitted energy at each radiative and absorptive frequency must
be multiples of the photon’s energy hν, produced by the transition between
two energy states in the atoms or molecules. The relative numbers of the
allowed energy transitions are, in turn, related by the Boltzmann proba-
bility distribution p(n) = (1 − exp(− x)) exp(−nx), where x = hν/(kT ),
which is a function of temperature T. The Boltzmann probability distri-
bution provides the link between the wall temperature T and the photon
frequency (spectral) distribution.
The final concept in the thermal radiator discussion involves the reso-
nance modes (standing waves) supported in the cavity, also known as the
density of states. Density of states is also used in the derivation of the
electronic wave function in a crystal (Section 5.5.3) and thermal noise (Sec-
tion 5.3.2). Density of states is beyond the scope of the current discussion.
For more details see complete derivations. 1–3
Planck’s law applies to transition energy levels compliant with the
Boltzmann probability distribution and thus does not apply to lasers, LEDs,
fluorescence, or radioactivity (transition levels not compliant with Boltz-
mann probability distribution). Planck-law radiation is isotropic, spatially
homogeneous, unpolarized, and incoherent.
The derivation of Planck’s law required an enclosed cavity to retain equilibrium between the electromagnetic wave and the energy in the cav-
ity wall. If a closed system starts from disequilibrium, it will reach equi-
librium by exchange of electromagnetic radiation. Consider Figure 3.1(b)
with two infinitely large plates at initial temperatures T1 > T2 . The two
plates initially emit radiance L1 > L2 . If there is no energy change or mass
change in the system, there is a net energy flow only from plate 1 to plate
2 until T1 = T2 and L1 = L2 , when equilibrium is reached.
Most real-world sources are neither enclosed cavities with uniform
temperature nor closed systems with zero mass and energy flow. Fig-
ure 3.1(c) shows a plate radiating with radiance L1 while the plate tem-
perature is maintained at temperature T1 by an external heat source. The
energy supplied by the heat source balances the heat lost by radiation,
maintaining constant temperature. This is an example of a laboratory in-
strument called a blackbody simulator. If the surface emissivity (see Sec-
tion 3.2) is unity, the surface radiates with a radiance equal to the ideal
Planck-law radiator. In this case thermodynamic equilibrium is not re-
quired because the heat source maintains the surface temperature at the
appropriate temperature. High-performance laboratory blackbody simu-
lators also achieve high emissivity, as described in Section 3.2.5.
Under thermodynamic equilibrium, Planck’s law sets the upper limit
for an object’s radiation for a Boltzmann probability distribution compliant
radiator. A Planck-law radiator has an emissivity (see Section 3.2) of unity
and is known as a blackbody. Although the blackbody is a theoretical
concept, it forms the basis of a surprisingly good model for many real-
world objects.
Planck’s law can be written in several forms: the spectral variable can be wavelength λ (with units [m] or [µm]), wavenumber ν̃ (with units [cm⁻¹]), or optical frequency ν (with units [Hz]); and the exitance can be expressed in radiant (watts) or photon (quanta) terms. 5 Planck’s law gives the exitance as a function of temperature, evaluated at a given spectral value. The spectral value is a ‘parameter’ of Planck’s law, and the absolute or kinetic temperature of the object is the only free variable in the law.
In the description of Planck’s law, it is convenient to define the dimensionless variable x = hν/(kT) = hc/(λkT). Wavelength-domain spectral quantities have SI units such as [W/m³] (or [W/(m³·K)] for the temperature derivative), where two of the distance dimensions ([m²]) relate to area, and one distance dimension ([m]) relates to wavelength. It is also convenient to define a set of radiation constants c1 = 2πhc² and c2 = hc/k. The constant c1 has different numerical values depending on the chosen radiation units and the spectral domain variable. These and other physical constants are given in Table A.2.
3.1.1 Planck’s radiation law

3.1.1.1 Planck’s law in terms of wavelength

The spectral radiant exitance of a blackbody as a function of wavelength is given by 5

$$ M_{e\lambda}(T) = \frac{2\pi h c^2}{\lambda^5 \left(e^{hc/(\lambda k T)} - 1\right)} = \frac{c_{1e\lambda}}{\lambda^5 \left(e^{c_{2\lambda}/(\lambda T)} - 1\right)}, \tag{3.1} $$

where T is temperature in [K], and λ is wavelength in [m]. Exitance $M_{e\lambda}(T)$ is in units of [W/m³] or [W/(m²·µm)] (depending on the value of $c_{1e\lambda}$). This is a spectral exitance in watts per square meter, per wavelength interval. The values of $c_{1e\lambda}$ and $c_{2\lambda}$ are given in Table A.2.
The derivative with respect to temperature of spectral radiant exitance, for a given temperature T, as a function of wavelength λ, with $x = c_{2\lambda}/(\lambda T)$, is given by

$$ \frac{dM_{e\lambda}(T)}{dT} = \frac{2\pi h c^2\, x e^x}{\lambda^5 T (e^x - 1)^2} = \frac{c_{1e\lambda}\, x e^x}{\lambda^5 T (e^x - 1)^2}, \tag{3.2} $$

with units of [W/(m³·K)] or [W/(m²·µm·K)] (depending on the value of $c_{1e\lambda}$). This is a change in spectral exitance in watts per square meter, per wavelength interval, with temperature.
The spectral photon rate exitance at a temperature T as a function of wavelength λ is given by

$$ M_{q\lambda}(T) = \frac{2\pi c}{\lambda^4 \left(e^{hc/(\lambda k T)} - 1\right)} = \frac{c_{1q\lambda}}{\lambda^4 \left(e^{c_{2\lambda}/(\lambda T)} - 1\right)}, \tag{3.3} $$

with exitance in units of [q/(s·m³)] or [q/(s·m²·µm)] (depending on the value of $c_{1q\lambda}$). This is a spectral exitance in photons per second, per square meter, per wavelength interval.
The derivative with respect to temperature of spectral photon rate exitance, for a given temperature T, as a function of wavelength λ, with $x = c_{2\lambda}/(\lambda T)$, is given by

$$ \frac{dM_{q\lambda}(T)}{dT} = \frac{2\pi c\, x e^x}{T \lambda^4 (e^x - 1)^2} = \frac{c_{1q\lambda}\, x e^x}{T \lambda^4 (e^x - 1)^2}, \tag{3.4} $$

with units of [q/(s·m³·K)] or [q/(s·m²·µm·K)] (depending on the value of $c_{1q\lambda}$). This is a change in spectral exitance in photons per second, per square meter, per wavelength interval, with temperature.
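The following minimal sketch evaluates Equations (3.1) and (3.3) numerically (a simplified stand-in for the production implementations in Section D.4.1 and the pyradi toolset), with wavelength in [µm] and constants from Table A.2:

```python
import numpy as np

# Planck's law in the wavelength domain, Equations (3.1) and (3.3).
C1E = 3.74177e8    # c1e-lambda, for exitance in [W/(m2.um)]
C1Q = 1.88365e27   # c1q-lambda, for exitance in [q/(s.m2.um)]
C2 = 14387.8       # c2-lambda [um.K]

def planck_el(wl, T):
    """Spectral radiant exitance [W/(m2.um)]; wl in [um], T in [K]."""
    return C1E / (wl ** 5 * (np.exp(C2 / (wl * T)) - 1.0))

def planck_ql(wl, T):
    """Spectral photon rate exitance [q/(s.m2.um)]; wl in [um], T in [K]."""
    return C1Q / (wl ** 4 * (np.exp(C2 / (wl * T)) - 1.0))

print(planck_el(10.0, 300.0))   # ~31 W/(m2.um) near the 300-K peak
print(planck_ql(10.0, 300.0))   # ~1.6e21 q/(s.m2.um)
```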
3.1.1.2 Planck’s law in terms of wavenumber

The spectral radiant exitance as a function of wavenumber ν̃ of a blackbody at temperature T is given by

$$ M_{e\tilde\nu}(T) = \frac{2\pi h c^2 \tilde\nu^3}{e^{ch\tilde\nu/(kT)} - 1} = \frac{c_{1e\tilde\nu}\, \tilde\nu^3}{e^{c_{2\tilde\nu}\tilde\nu/T} - 1}, \tag{3.5} $$

with units [W/(m²·m⁻¹)] or [W/(m²·cm⁻¹)] (depending on the value of $c_{1e\tilde\nu}$). This is a spectral exitance in watts per square meter, per wavenumber interval.
The derivative with respect to temperature of spectral radiant exitance, for a given temperature T, as a function of wavenumber ν̃, with $x = c_{2\tilde\nu}\tilde\nu/T$, is given by

$$ \frac{dM_{e\tilde\nu}(T)}{dT} = \frac{2\pi h c^2 \tilde\nu^3\, x e^x}{T (e^x - 1)^2} = \frac{c_{1e\tilde\nu}\, \tilde\nu^3 x e^x}{T (e^x - 1)^2}, \tag{3.6} $$

with units [W/(m²·m⁻¹·K)] or [W/(m²·cm⁻¹·K)] (depending on the value of $c_{1e\tilde\nu}$). This is a change in spectral exitance in watts per square meter, per wavenumber interval, with temperature.
The spectral photon exitance as a function of wavenumber ν̃ of a blackbody at temperature T is given by

$$ M_{q\tilde\nu}(T) = \frac{2\pi c \tilde\nu^2}{e^{ch\tilde\nu/(kT)} - 1} = \frac{c_{1q\tilde\nu}\, \tilde\nu^2}{e^{c_{2\tilde\nu}\tilde\nu/T} - 1}, \tag{3.7} $$

with units [q/(s·m²·m⁻¹)] or [q/(s·m²·cm⁻¹)] (depending on the value of $c_{1q\tilde\nu}$). This is a spectral exitance in photons per second, per square meter, per wavenumber interval.
The derivative with respect to temperature of spectral photon exitance, for a given temperature T, as a function of wavenumber ν̃, with $x = c_{2\tilde\nu}\tilde\nu/T$, is given by

$$ \frac{dM_{q\tilde\nu}(T)}{dT} = \frac{2\pi c \tilde\nu^2\, x e^x}{T (e^x - 1)^2} = \frac{c_{1q\tilde\nu}\, \tilde\nu^2 x e^x}{T (e^x - 1)^2}, \tag{3.8} $$

with units [q/(s·m²·m⁻¹·K)] or [q/(s·m²·cm⁻¹·K)] (depending on the value of $c_{1q\tilde\nu}$). This is a change in spectral exitance in photons per second, per square meter, per wavenumber interval, with temperature.
3.1.1.3 Planck’s law in terms of frequency

The spectral radiant exitance as a function of frequency ν of a blackbody at temperature T is given by

$$ M_{e\nu}(T) = \frac{2\pi h \nu^3}{c^2 \left(e^{h\nu/(kT)} - 1\right)} = \frac{c_{1e\nu}\, \nu^3}{e^{c_{2\nu}\nu/T} - 1}, \tag{3.9} $$

with units [W/(m²·Hz)]. This is a spectral exitance in watts per square meter, per frequency interval.
The derivative with respect to temperature of radiant exitance, for a given temperature T, as a function of frequency ν, with $x = c_{2\nu}\nu/T$, is given by

$$ \frac{dM_{e\nu}(T)}{dT} = \frac{2\pi h \nu^3\, x e^x}{c^2 T \left(e^{h\nu/(kT)} - 1\right)^2} = \frac{c_{1e\nu}\, \nu^3 x e^x}{T (e^x - 1)^2}, \tag{3.10} $$

with units [W/(m²·Hz·K)]. This is a change in spectral exitance in watts per square meter, per frequency interval, with temperature.
The spectral photon rate exitance as a function of frequency ν of a blackbody at temperature T is given by

$$ M_{q\nu}(T) = \frac{2\pi \nu^2}{c^2 \left(e^{h\nu/(kT)} - 1\right)} = \frac{c_{1q\nu}\, \nu^2}{e^{c_{2\nu}\nu/T} - 1}, \tag{3.11} $$

with units [q/(s·m²·Hz)]. This is a spectral exitance in photons per second, per square meter, per frequency interval.
The derivative with respect to temperature of spectral photon exitance, for a given temperature T, as a function of frequency ν, with $x = c_{2\nu}\nu/T$, is given by

$$ \frac{dM_{q\nu}(T)}{dT} = \frac{2\pi \nu^2\, x e^x}{c^2 T (e^x - 1)^2} = \frac{c_{1q\nu}\, \nu^2 x e^x}{T (e^x - 1)^2}, \tag{3.12} $$

with units [q/(s·m²·Hz·K)]. This is a change in spectral exitance in photons per second, per square meter, per frequency interval, with temperature.

3.1.2 Wien’s displacement law

From Figure 3.2 it is clear that the Planck-law radiation curve has only one
maximum. The equation relating the blackbody temperature and the spec-
tral value at the peak exitance is known as Wien’s displacement law. The
spectral value (wavelength, frequency, wavenumber) where the maximum


exitance occurs is obtained by differentiating Planck’s law with respect to
the spectral variable, equating it to zero, and solving for the spectral vari-
able. 6 Note that the different maxima do not coincide.
The relationship between the blackbody temperature in [K] and the spectral variable ($\lambda_{me}$ in [µm], $\tilde\nu_{me}$ in [cm⁻¹], or $\nu_{me}$ in [Hz]) at which maximum radiant exitance occurs is given by

$$ \lambda_{me} = \frac{10^6 hc}{a_5 k T} = \frac{w_{e\lambda}}{T}, \tag{3.13} $$
$$ \tilde\nu_{me} = \frac{a_3 k T}{100 hc} = w_{e\tilde\nu}\, T, \tag{3.14} $$
and
$$ \nu_{me} = \frac{a_3 k T}{h} = w_{e\nu}\, T. \tag{3.15} $$

The relationship between the blackbody temperature in [K] and the spectral variable ($\lambda_{mq}$ in [µm], $\tilde\nu_{mq}$ in [cm⁻¹], or $\nu_{mq}$ in [Hz]) at which the maximum photon rate exitance occurs is given by

$$ \lambda_{mq} = \frac{10^6 hc}{a_4 k T} = \frac{w_{q\lambda}}{T}, \tag{3.16} $$
$$ \tilde\nu_{mq} = \frac{a_2 k T}{100 hc} = w_{q\tilde\nu}\, T, \tag{3.17} $$
and
$$ \nu_{mq} = \frac{a_2 k T}{h} = w_{q\nu}\, T. \tag{3.18} $$

3.1.3 Stefan–Boltzmann law

If Planck’s law is integrated over all wavelengths, the total radiant exitance from a blackbody is obtained: 3

$$ M_e(T) = \frac{2 k^4 \pi^5}{15 c^2 h^3}\, T^4 = \sigma_e T^4, \tag{3.19} $$

with exitance $M_e(T)$ in [W/m²], $\sigma_e$ the Stefan–Boltzmann constant in units of [W/(m²·K⁴)], and temperature T in [K]. Note that the Stefan–Boltzmann law does not consider energy balance between incident flux and radiated flux — it assumes the environment is at 0 K, with no incident flux.
The total photon rate exitance from a blackbody is 6

$$ M_q(T) = \frac{4 \zeta(3) k^3}{h^3 c^2}\, T^3 = \sigma_q T^3, \tag{3.20} $$

with exitance $M_q(T)$ in [q/(s·m²)], $\sigma_q$ the Stefan–Boltzmann constant in [q/(s·m²·K³)], and temperature T in [K].
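Wien's law and the Stefan–Boltzmann law reduce to one-line calculations; a minimal sketch using the constants of Table 3.1:

```python
# Wien's displacement law [Equation (3.13)] and the Stefan-Boltzmann
# law [Equations (3.19) and (3.20)], with constants from Table 3.1.
T = 300.0                       # [K]
lam_peak = 2897.77212 / T       # peak-exitance wavelength [um]
Me = 5.670373e-8 * T ** 4       # total radiant exitance [W/m2]
Mq = 1.5204606e15 * T ** 3      # total photon rate exitance [q/(s.m2)]
print(lam_peak, Me, Mq)         # ~9.66 um, ~459 W/m2, ~4.1e22 q/(s.m2)
```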

3.1.4 Summation approximation of Planck’s law

Planck’s law can be written in the form of an infinite sum. 5 Consider the integral of the Planck radiation law as a function of wavelength. Starting from first principles, where $c_1 = 2\pi h c^2$, $c_2 = hc/k$, and $x = c_2/(\lambda T)$,

$$ L = \int_{\lambda_1}^{\lambda_2} \frac{2 c^2 h\, d\lambda}{\lambda^5 (e^x - 1)} = \int_{\lambda_1}^{\lambda_2} \frac{c_1\, d\lambda}{\pi \lambda^5 (e^x - 1)}. \tag{3.21} $$

Change the integration variable $\lambda \to c_2/(xT)$ and $d\lambda \to -(c_2\, dx)/(T x^2)$:

$$ L = \frac{c_1}{\pi} \int_{x_2}^{x_1} \left(\frac{xT}{c_2}\right)^5 \frac{c_2\, dx}{T x^2 (e^x - 1)} = \frac{c_1 T^4}{\pi c_2^4} \int_{x_2}^{x_1} \frac{x^3\, dx}{e^x - 1} = \frac{c_1 T^4}{\pi c_2^4} \int_{x_2}^{x_1} x^3 \sum_{m=1}^{\infty} e^{-mx}\, dx = \frac{c_1 T^4}{\pi c_2^4} \sum_{m=1}^{\infty} \int_{x_2}^{x_1} x^3 e^{-mx}\, dx, \tag{3.22} $$

where $1/(e^x - 1)$ is expanded into the infinite sum through long division. Applying integral number 2.322.3 in Gradshteyn, 7

$$ L = \frac{c_1 T^4}{\pi c_2^4} \sum_{m=1}^{\infty} \left[ e^{-mx} \left( -\frac{x^3}{m} - \frac{3x^2}{m^2} - \frac{6x}{m^3} - \frac{6}{m^4} \right) \right]_{x_2}^{x_1} = \frac{c_1 T^4}{\pi c_2^4} \sum_{m=1}^{\infty} \left[ \frac{e^{-mx}}{m^4} \left( (xm)^3 + 3(xm)^2 + 6xm + 6 \right) \right]_{x_1}^{x_2}. \tag{3.23} $$

No approximations are made in the derivation, and the formula is therefore exact, provided that enough terms are used in the summation. Because each value of m results in ten free parameters, the number of free parameters increases rapidly for increasing m, limiting the useful application of Equation (3.23) unless the algorithm is coded on a computer.
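A minimal sketch of Equation (3.23) follows (not the book's listed code); it sums the series for an in-band radiance and checks the result against brute-force numerical integration of Equation (3.21):

```python
import numpy as np

C1 = 3.741771e-16    # c1 = 2*pi*h*c^2 [W.m2]
C2 = 1.438777e-2     # c2 = h*c/k [m.K]

def band_radiance(lam1, lam2, T, nterms=50):
    """In-band radiance [W/(m2.sr)] for lam1..lam2 [m], Equation (3.23)."""
    x1, x2 = C2 / (lam1 * T), C2 / (lam2 * T)   # lam1 < lam2 gives x1 > x2
    def f(x, m):
        return np.exp(-m * x) * ((m * x) ** 3 + 3 * (m * x) ** 2
                                 + 6 * m * x + 6) / m ** 4
    s = sum(f(x2, m) - f(x1, m) for m in range(1, nterms + 1))
    return C1 * T ** 4 * s / (np.pi * C2 ** 4)

# brute-force check: direct numerical integration of Equation (3.21)
lam = np.linspace(3e-6, 5e-6, 200001)
dlam = lam[1] - lam[0]
Lnum = np.sum(C1 / (np.pi * lam ** 5
                    * (np.exp(C2 / (lam * 300.0)) - 1.0))) * dlam
print(band_radiance(3e-6, 5e-6, 300.0), Lnum)   # both ~1.86 W/(m2.sr)
```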

3.1.5 Summary of Planck’s law

The Planck-law function and Planck-law temperature derivative are summarized in Table 3.1 and plotted for several temperatures in Figures 3.2
and 3.3. Note in Figure 3.2 the two important properties of blackbody
radiation: with an increase in temperature, the exitance increases rapidly
and the exitance peak shifts toward shorter wavelengths. Also note the
very large rate of photons emitted, even by an object at 0 ◦ C (273.15 K).
The constants in Tables 3.1 and A.2 were calculated from the Interna-
tional Council for Science: Committee on Data for Science and Technol-
ogy (CODATA) 8 constants as encoded in SciPy. 9 Python code implemen-
tations of the Planck-law equations are available in Section D.4.1 and in
the pyradi 10 toolset.

3.1.6 Thermal radiation from common objects

Figure 3.4 provides a practical view of blackbody radiation — assuming unity emissivity. The top graph shows the radiance for several common ob-
jects. The bottom graph shows the normalized cumulative radiance for the
same objects, where the cumulative spectral radiance is normalized by the
Stefan–Boltzmann equation. Note that the 50% cumulative radiance value
occurs at a surprisingly longer wavelength than the peak radiance. For
example, it is evident that a long-wave infrared camera captures approx-
imately 30% of the total radiance of a 300-K object even though the peak
radiance occurs at 10 µm. Also shown in the figure are commonly used
spectral band designations. These band designations come from trans-
parent ‘windows’ in the atmosphere (see Section 4.6.4) and the traditional
availability of optical detector materials, providing sensitivity in the var-
ious bands (see Chapter 5). These designations are not well defined or
standardized, and only serve as a rough indication of spectral band. The
acronyms are as follows: near-infrared (NIR, 0.75–1.4 µm), short-wave in-
frared (SWIR, 1.5–2.5 µm), medium-wave infrared (MWIR, 3–5 µm), and
long-wave infrared (LWIR, 8–12 µm).

3.2 Emissivity

This section covers the emissivity concept. The work started here is ex-
panded in Section 6.6, which considers the modeling of thermal radiators.
The Planck radiator (or blackbody) is a very convenient basis for modeling
a large class of sources. However, none of these sources behaves exactly
like a blackbody radiator. In order to use Planck’s law with real sources, it
Table 3.1 Planck’s law summary (constants also in Table A.2).

Wavelength domain: λ in [µm], T in [K]; $x = c_{2\lambda}/(\lambda T) = 14387.8/(\lambda T)$; $c_{1e\lambda} = 3.74177 \times 10^8$; $c_{1q\lambda} = 1.88365 \times 10^{27}$.
Wavenumber domain: ν̃ in [cm⁻¹], T in [K]; $x = c_{2\tilde\nu}\tilde\nu/T = 1.43878\, \tilde\nu/T$; $c_{1e\tilde\nu} = 3.74177 \times 10^{-8}$; $c_{1q\tilde\nu} = 1.88365 \times 10^{15}$.

Planck’s law:
$M_{e\lambda} = \dfrac{c_{1e\lambda}}{\lambda^5 (e^x - 1)}$ in [W/(m²·µm)];  $M_{e\tilde\nu} = \dfrac{c_{1e\tilde\nu}\, \tilde\nu^3}{e^x - 1}$ in [W/(m²·cm⁻¹)].
$M_{q\lambda} = \dfrac{c_{1q\lambda}}{\lambda^4 (e^x - 1)}$ in [q/(s·m²·µm)];  $M_{q\tilde\nu} = \dfrac{c_{1q\tilde\nu}\, \tilde\nu^2}{e^x - 1}$ in [q/(s·m²·cm⁻¹)].

Temperature derivative of Planck’s law:
$\dfrac{dM_{e\lambda}}{dT} = \dfrac{c_{1e\lambda}\, x e^x}{T \lambda^5 (e^x - 1)^2}$ in [W/(m²·µm·K)];  $\dfrac{dM_{e\tilde\nu}}{dT} = \dfrac{c_{1e\tilde\nu}\, \tilde\nu^3 x e^x}{T (e^x - 1)^2}$ in [W/(m²·cm⁻¹·K)].
$\dfrac{dM_{q\lambda}}{dT} = \dfrac{c_{1q\lambda}\, x e^x}{T \lambda^4 (e^x - 1)^2}$ in [q/(s·m²·µm·K)];  $\dfrac{dM_{q\tilde\nu}}{dT} = \dfrac{c_{1q\tilde\nu}\, \tilde\nu^2 x e^x}{T (e^x - 1)^2}$ in [q/(s·m²·cm⁻¹·K)].

Wien’s displacement law:
$\lambda_{me} = 2897.77212/T$;  $\tilde\nu_{me} = 1.960998438\, T$.
$\lambda_{mq} = 3669.7031/T$;  $\tilde\nu_{mq} = 1.107624256\, T$.

Stefan–Boltzmann law:
$M_e(T) = 5.670373 \times 10^{-8}\, T^4$ in [W/m²];  $M_q(T) = 1.5204606 \times 10^{15}\, T^3$ in [q/(s·m²)].
Figure 3.2 Planck’s law and Wien’s displacement law for various temperatures (photon and radiant exitance versus wavenumber and wavelength, 200 K to 6000 K).
Figure 3.3 Temperature derivative of Planck’s law for various temperatures (photon rate and radiant dM/dT versus wavenumber and wavelength, 200 K to 6000 K).
Figure 3.4 Summary blackbody curves of common objects (radiance and normalized cumulative radiance for sources from 250 K to 6000 K, with ultraviolet, visible, NIR, SWIR, MWIR, and LWIR band designations).

is convenient to define emissivity ε as the degree to which a thermal radiator approximates a blackbody:

$$ \varepsilon_\lambda = \frac{L_{{\rm object}\lambda}}{L_{bb\lambda}}. \tag{3.24} $$

Emissivity has many guises, to allow for spectral and directional pa-
rameters, that are necessary to describe emissivity in a more general sense.
Table 3.2 provides a short summary of the various definitions of emissiv-
ity. The source radiance is indicated by the subscript s , and the theoretical
blackbody radiance is indicated by the subscript bb . The source and black-
body temperatures are equal, Ts = Tbb . The two variables θ and ϕ denote
directional zenith and azimuth angles, respectively. A much more detailed
description is given in Palmer and Grant. 11 Directional emissivity and re-
flectance are investigated in more detail in Section 3.4.

3.2.1 Kirchhoff’s law

Kirchhoff’s law can be summarized as: 2,11–13 “For an object in thermodynamic equilibrium with its surroundings, the absorptivity α of an object is exactly equal to its emissivity ε, in each direction and at each wavelength: $\varepsilon_\lambda(T,\theta,\varphi) = \alpha_\lambda(T,\theta,\varphi)$.” This statement essentially means that a good radiator is also a good absorber.
The statement of equality between emissivity and absorptivity is not
unconditional — spectral and angular variations in emissivity may lead to
Table 3.2 Definitions of various forms of emissivity.

Spectral directional emissivity: $\varepsilon_\lambda(\theta,\varphi) = \dfrac{L_{\lambda s}(\theta,\varphi)}{L_{bb\lambda}(\theta,\varphi)}$

Directional total emissivity: $\varepsilon(\theta,\varphi) = \dfrac{\int_0^\infty L_{\lambda s}(\theta,\varphi)\, d\lambda}{\int_0^\infty L_{bb\lambda}(\theta,\varphi)\, d\lambda}$

Spectral hemispherical emissivity: $\varepsilon_\lambda = \dfrac{\int_0^{\pi/2}\!\int_0^{2\pi} L_{\lambda s}(\theta,\varphi) \cos\theta\, d\varphi\, d\theta}{\int_0^{\pi/2}\!\int_0^{2\pi} L_{bb\lambda}(\theta,\varphi) \cos\theta\, d\varphi\, d\theta}$

Hemispherical total emissivity: $\varepsilon = \dfrac{\int_0^\infty\!\int_0^{\pi/2}\!\int_0^{2\pi} L_{\lambda s}(\theta,\varphi) \cos\theta\, d\varphi\, d\theta\, d\lambda}{\int_0^\infty\!\int_0^{\pi/2}\!\int_0^{2\pi} L_{bb\lambda}(\theta,\varphi) \cos\theta\, d\varphi\, d\theta\, d\lambda}$

apparent violations of Kirchhoff’s law if not accounted for, as shown in Table 3.2.

In Section 2.3.4 it is stated that 1 = α + τ + ρ. For an opaque surface τ = 0, and it follows that α = ε = 1 − ρ, which implies that a good absorber/radiator has a low reflectivity. Because a blackbody has an emissivity of unity, the reflectance is zero, resulting in a surface with a black appearance (hence the name blackbody).

For a gaseous radiator ρ = 0, and it follows that α = ε = 1 − τ, which implies that a gas medium with low transmittance will radiate with high emissivity. In the extreme case where the transmittance is very low, e.g., 1%, the emissivity is very high, approaching a blackbody radiator.

3.2.2 Flux transfer between a source and receiver

Equation (2.31) describes the flux flowing in a radiance field, passing through both surfaces. As stated, this equation does not consider the origin of the flux nor the destination of the flux. Armed with the concepts of emissivity and absorption, now add a radiator element with emissivity ε0 behind dA0 and an absorber element with absorption α1 behind dA1, as shown in Figure 3.5. In this construction, the flux leaving from a radiator at the source, being absorbed by another absorber at the receiver, is given by

$$ d^2\Phi = \frac{L_{bb}\, \varepsilon_0\, dA_0 \cos\theta_0\, \alpha_1\, dA_1 \cos\theta_1}{R^2}. \tag{3.25} $$
Equation (3.25) defines only the flux from a small area dA0 being ab-
sorbed in dA1 . Note that there could be flux from many other small sources
also absorbed in the receiver dA1 .
Figure 3.5 Radiative flux between a source and receiver.

3.2.3 Grey bodies and selective radiators

A grey body radiator is a thermal radiator with a spectrally invariant emissivity less than unity. The grey body radiation at any wavelength is therefore a constant fraction of the blackbody radiation at the same temperature.
In practice, no physical object is a true grey body as defined above. But
if the emissivity is reasonably constant over the spectral range of interest,
the object is referred to as a grey body radiator over that spectral band. Ex-
amples of grey bodies include most natural objects, with emissivity mostly
ranging from 0.5 to 0.99 in the 8–12-µm spectral range.
A selective radiator is a radiator with spectrally variant emissivity.
The spectral emissivity varies slowly or very abruptly in the spectral range
of interest. Examples of selective radiators include molecular gas emission
lines, the wavelengths of which are related to the differences in the energy
states in the gas and are thus characteristic of the gas composition. These
lines can be very narrow and are usually found in clusters.
Figure 3.6 illustrates the concepts of black, grey, and selective radi-
ators. For any thermal radiator, Planck’s law sets an upper limit to the
radiation emanating from the radiator. Examples of spectral emissivity are
shown in Figure 3.7. The water vapor emissivity values were calculated
for a 100-m path in a Modtran™ 14 Tropical atmosphere at sea level. The
significance of this curve is that even for short paths the water vapor in
the atmosphere radiates at near-blackbody radiance at some wavelengths.
The CO2 emissivity curve was measured by placing a Bunsen burner flame
near the calibration port of a Fourier transform spectrometer.
Spectral radiator emissivity is an aggregate of many narrow ‘lines,’
Figure 3.6 Blackbody, grey body, and selective thermal radiators (emissivity and normalized exitance versus wavelength).

Figure 3.7 Spectral emissivity for H₂O and CO₂ (top: atmospheric H₂O emissivity for a 100-m path length in a tropical atmosphere; bottom: Bunsen-burner flame CO₂ emissivity, peak value scaled by flame optical depth).


Figure 3.8 Dome enclosing a small component.

where each line corresponds to a discrete energy level in the molecule or


atom. All of the molecules are at slightly different temperatures, with the
result that the individual lines are also slightly displaced. Figure 3.7 shows
an example of the line structure in a small spectral range.

3.2.4 Radiation from low-emissivity surfaces

The self-radiation from a single surface of given temperature depends on the emissivity of the surface. If the emissivity is low, the self-radiation will be concomitantly low. If the surface is opaque, a low emissivity implies a high reflectivity, which in turn means high reflection of the ambient light.
Consider the geometry defined in Figure 3.8, with surface 1 having high emissivity ε1 and temperature T1, and surface 2 having a low emissivity ε2 and physical temperature T2. Assuming Lambertian surfaces, the radiance of surface 2 is

$$ L_2 = \varepsilon_2 L_e(T_2) + (1 - \varepsilon_2)\, \varepsilon_1 L_e(T_1). \tag{3.26} $$

Noncontact infrared temperature measurement estimates an object’s temperature, called the ‘apparent’ or ‘radiation’ temperature, by comparing the radiance of the object with the radiance of a reference source (normally a blackbody). The user adjusts the emissivity setting in the instrument to match the object’s emissivity. This setting can be in error (e.g., if the object emissivity is not known), leading to an incorrect temperature estimate. If the object’s emissivity is estimated to be $\varepsilon_{m2}$, the apparent or radiation temperature $T_{m2}$ can be determined by solving

$$ \varepsilon_{m2}\, \sigma_e T_{m2}^4 = \varepsilon_2\, \sigma_e T_2^4 + (1 - \varepsilon_2)\, \varepsilon_1\, \sigma_e T_1^4. \tag{3.27} $$

Even if $\varepsilon_{m2} = \varepsilon_2$, the apparent temperature measurement is not the physical surface temperature. Most noncontact temperature measurement probes are set to assume a high source emissivity. If it is assumed that $\varepsilon_{m2} = 1$
Figure 3.9 Multiple reflections in a cavity: (a) simple air-filled cavity, and (b) reflectance of a light ray entering the graphically unfolded cavity.

and ε1 = 1, the apparent temperature of surface 2 varies between T1 (when ε2 = 0) and T2 (when ε2 = 1), as is evident from

$$ T_{a2}^4 = \varepsilon_2 T_2^4 + (1 - \varepsilon_2) T_1^4. $$

In practical terms, this means that a noncontact radiation temperature measurement of a low-emissivity surface requires an accurate estimate of the surface emissivity.
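Solving Equation (3.27) for the apparent temperature is a one-line calculation; the sketch below uses hypothetical values for illustration:

```python
# Apparent (radiation) temperature from Equation (3.27), solved for Tm2.
# All temperatures and emissivities below are hypothetical examples.
eps1, T1 = 1.0, 300.0     # surround: high emissivity at ambient
eps2, T2 = 0.1, 500.0     # object: low emissivity, physically hot
eps_m2 = 1.0              # instrument emissivity setting (blackbody)

Tm2 = ((eps2 * T2 ** 4 + (1 - eps2) * eps1 * T1 ** 4) / eps_m2) ** 0.25
print(Tm2)   # ~341 K: far below the true 500-K surface temperature
```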

3.2.5 Emissivity of cavities

The definition of an optical cavity is a volume of given refractive index inside a larger volume of different refractive index. Consider a simple air-filled cavity in a solid block as shown in Figure 3.9(a). The reflectance of a light ray entering the cavity is graphically unfolded in Figure 3.9(b). The total reflectance of the cavity is given by ρᴺ, where ρ is the reflection at a single surface, and N is the number of reflectance events. Because the block is opaque (τ = 0), the emissivity of the cavity after N reflections is given by ε = 1 − ρᴺ. If ρ < 1, the emissivity increases with an increase in the number of reflections. In the limit, when N → ∞, ε → 1. Cavities therefore exhibit higher emissivity than that of the surfaces forming the cavity. Cavities can be deliberately constructed, such as using a cavity to increase the apparent emissivity of a thermal source; careful cavity design can result in near-unity emissivity. Cavities also occur commonly in nature in the form of micro-cavities on the surface of rough objects (discussed in Section 3.4).
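The effect is quickly demonstrated numerically; a minimal sketch of the ε = 1 − ρᴺ relation, with a hypothetical per-bounce reflectance:

```python
# Cavity emissivity after N reflection events, eps = 1 - rho**N.
rho = 0.4    # hypothetical per-bounce reflectance (wall emissivity 0.6)
for N in (1, 2, 3, 5, 10):
    print(N, 1.0 - rho ** N)   # 0.6, 0.84, 0.936, 0.98976, 0.99989...
```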
Figure 3.10 Aperture plate beam vignetting (source Ds, aperture Da at range Ra, receiver Dd at range Rd).

3.3 Aperture Plate in front of a Blackbody

Laboratory blackbodies are commonly used with well-defined aperture plate diameters to obtain sources with well-defined areas, such as the con-
figuration shown in Figure 3.10. The requirement is that, through the
aperture plate, every elemental area of the source must irradiate every ele-
mental area of the receiver. Put differently, when the receiver observes the
source through the aperture plate, it must only see some part of the source
and nothing beyond the source. If this condition is met, the area of the
source becomes irrelevant, and the area of the aperture plate applies. It is
as if the source surface is located in the plane of the aperture plate, irre-
spective of where the physical source is located. This condition occurs as
a result of the spatial conservation of radiance in a nonlossy medium: the
(assumed uniform) radiance on the source radiating surface is the same as
the radiance in the plane of the aperture plate.

3.4 Directional Surface Reflectance

Surface micro-roughness, at the scale of the wavelength of light, has a significant effect on reflection and radiant exitance from the surface. The surface can be considered to consist of a great many micro-facets. Figure 3.11 shows four surface-roughness cases 15 in terms of the ratio of root-mean-square (rms) roughness σ to the wavelength λ of the incident light. The roughness is not only a measure of vertical variation: in most surfaces the horizontal scale also varies with the vertical scale; i.e., it is unlikely to have large vertical variations over very small horizontal intervals.
The surface can be smooth [Figure 3.11(a)], or it can be more complex [Figure 3.11(d)]. Such complex surfaces may have semi-transparent multi-
layers, each with its own surface irregularities and volumetric properties
(e.g., oxides, synthetic or natural thin films), whereas other surfaces could
contain stacked particles forming porous volumes (e.g., sandpaper, soot,
or dust). Constructed light traps 16 have surfaces with geometric shapes to
capture light by means of multiple reflections, aligned such that the light
may never escape from the surface.
An optically flat surface, where σ/λ ≈ 0 [shown in Figure 3.11(a)],
reflects light according to geometrical optics because the near-zero surface
roughness has little scattering effect. At the other extreme where σ/λ > 1
[shown in Figure 3.11(d)], the roughness scale exceeds the wavelength of
light and the surface micro-facets act as independent small mirrors, each
reflecting according to geometrical optics. Between these two extremes
[Figures 3.11(b) and (c)], the surface roughness scatters the light in spatial
directions ranging from specular (broadened around the mirror vector) to
perfectly diffuse (Lambertian). In this region the bidirectional reflection
distribution function (BRDF) 17 is used to describe the spatial reflectance.
Various BRDF models are used, some based on empirical approximation,
whereas others attempt to model rigorously from first principles. In prac-
tice the BRDF is most easily obtained by measurement 18 or a combination
of measurement and theoretical modeling. 15,19
A Lambertian reflector has an important surface property that the re-
flected (or emitted) radiance (what humans perceive as visual brightness) is
the same in all directions, irrespective of the direction or source of the inci-
dent light. One example of a Lambertian surface is an opaque object with
a micro-scale rough and porous surface, where scattering from subsurface
roughness dominates.

3.4.1 Roughness and scale

Figure 3.11 assumes an illumination beam size comparable with the mi-
croscopic scale of a homogenous surface. A similar principle also applies
to aggregate properties of composite surfaces. Consider a satellite cam-
era viewing a crop field. In this case the sun is the illuminating ‘beam,’
and a single satellite-sensor pixel FOV observes an area comprising crop
and soil. The camera pixel FOV footprint covers several rows of the crop.
The observed radiance will vary depending on the sun-field-camera view-
ing geometry, i.e., illumination and/or viewing along or across the rows
of plants. Thus, depending on the application, roughness is not only ex-
pressed in terms of wavelength scale but also in FOV footprint scale.
Figure 3.11 Micro-scale surface roughness and reflection: (a) mirror reflection (geometric optics), (b) specular and (c) diffuse (Lambertian) reflection described by the bidirectional reflection function, and (d) geometric reflection from micro-facets, where σ is the root-mean-square surface roughness and λ is the wavelength of the light.

3.4.2 Reflection geometry

The geometry describing reflection is shown in Figure 3.12. Let the incident ray be defined by the unit vector $\hat{I}$, the surface normal vector by the unit vector $\hat{N}$, and the reflected ray vector by the unit vector $\hat{R}$. It is shown in Appendix C that the direction of the reflected ray is given by $\hat{R} = \hat{I} - 2(\hat{I}\cdot\hat{N})\hat{N}$. In this definition $\hat{R}$ represents the mirror reflection vector from the surface. Consider now the direction of the measured flux as reflected along any arbitrary vector $\hat{S}$. The vector $\hat{S}$ has an angle α with respect to the mirror reflection vector $\hat{R}$ (where $\cos\alpha = \hat{S}\cdot\hat{R}$) and an angle θs with respect to the surface normal vector $\hat{N}$ (where $\cos\theta_s = -\hat{S}\cdot\hat{N}$). The incident ray vector $\hat{I}$ has an angle θi with respect to the surface normal vector $\hat{N}$ (where $\cos\theta_i = -\hat{I}\cdot\hat{N}$). The mirror reflected ray vector $\hat{R}$ has an angle θr with respect to the surface normal vector $\hat{N}$ (where $\cos\theta_r = \hat{R}\cdot\hat{N}$).


3.4.3 Reflection from optically smooth surfaces

The reflection from an optically smooth surface (σ/λ ≈ 0) is determined by the material’s index of refraction. 20 The Fresnel equation describes the
Figure 3.12 Reflection geometry.

reflection as a function of incident angle for both conducting metals and nonconducting dielectric materials. Whereas dielectrics (e.g., water, glass) have real and small indices of refraction (low reflectance), metals have complex and large indices of refraction (high reflectance). The direction of the reflected light from the smooth surface will be the mirror reflection $\hat{R}$ in Figure 3.12.
A conducting medium can be modeled as a gas of unbound charges
circulating in the medium (electrons in a metal). 20 These free electrons and
their accompanying positive nuclei can undergo ‘plasma oscillations’ at a
resonant plasma frequency νp . The refractive index for a metal is then
given by n2ν = 1 − (νp /ν)2 (see Section 5.5.8). When ν > νp , n is real,
and the metal becomes transparent. When ν < νp , n is complex, and
the imaginary component leads to absorption into the material. However,
most of the light energy is not dissipated, and the wave is reflected from
the surface. Figure 3.13 shows the spectral reflectance of metal surfaces at
normal incidence angle.

3.4.4 Fresnel reflectance

The Fresnel equation for reflection from a dielectric or metal surface de-
pends on the polarization of the incident light relative to the plane of
the surface. 20,22 For polarized light perpendicular to the surface, the re-
flectance from a single surface is given by
 
ρ⊥ = [(ni cos θi − nt cos θt)/(ni cos θi + nt cos θt)]²,   (3.28)

where θi is the angle of incidence of the light ray (relative to the surface normal vector), ni is the refractive index of the medium hosting the incident ray, θt is the angle between the refracted ray and the surface normal vector, and nt is the refractive index of the dielectric or metal.

Figure 3.13 Metal spectral reflection at normal incidence angle for Al, Ag, Cu, and Au over the 0.1–10-µm wavelength range, calculated from the metals' complex indices of refraction. 21
The reflectance of polarized light parallel to the surface is given by

ρ∥ = [(nt cos θi − ni cos θt)/(nt cos θi + ni cos θt)]².   (3.29)

The reflectance of unpolarized light is given by

ρ = (ρ∥ + ρ⊥)/2.   (3.30)
The relationship between incident angle and refracted angle is given by
Snell’s law (see Section 5.5.8),
ni sin θi = nt sin θt , (3.31)
and the transmittance through the surface is given by τ = 1 − ρ because
there is no absorption in the surface itself. Figure 3.14 shows the angu-
lar variation of Fresnel reflectance for a number of dielectric and metallic
materials.
The Fresnel reflection equations, Equations (3.28) and (3.29), provide
the reflection from a single surface, such as from an opaque surface. Trans-
parent dielectric media, such as a plate of glass with two smooth surfaces,
will reflect on both surfaces (see the figure in Problem 3.9). Assuming the
same medium on both sides of the dielectric plate (i.e., air), the second
surface reflects the same as the first surface. The medium’s reflectance can
be calculated by accounting for the successive reflectance by each surface,
as well as the transmittance through the medium.
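Equations (3.28)–(3.31) are readily evaluated numerically, also for complex nt (metals), by applying Snell's law in complex form. The short Python sketch below is a minimal illustration (not the pyradi implementation); the sample values are those of Figure 3.14:

import numpy as np

def fresnel(theta_i, n_i, n_t):
    """Single-surface Fresnel reflectances at incidence angle theta_i [rad].
    n_t may be complex (metals); then rho = |r|**2. Returns (perp, par, unpol)."""
    cos_i = np.cos(theta_i)
    sin_t = n_i * np.sin(theta_i) / n_t          # Snell's law, Eq. (3.31)
    cos_t = np.sqrt(1.0 - sin_t**2)
    r_perp = (n_i*cos_i - n_t*cos_t) / (n_i*cos_i + n_t*cos_t)   # Eq. (3.28)
    r_par  = (n_t*cos_i - n_i*cos_t) / (n_t*cos_i + n_i*cos_t)   # Eq. (3.29)
    rho_perp, rho_par = np.abs(r_perp)**2, np.abs(r_par)**2
    return rho_perp, rho_par, 0.5*(rho_perp + rho_par)           # Eq. (3.30)

# gold in air at 45-deg incidence, at 0.5 um (values from Figure 3.14)
print(fresnel(np.radians(45.0), 1.0, 0.855 + 1.8955j))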
Figure 3.14 Fresnel reflection as a function of incidence angle for a single surface in air (ni = 1) at λ = 0.5 µm: aluminium (nt = 0.81257 + i6.0481), copper (nt = 1.13251 + i2.5583), gold (nt = 0.855 + i1.8955), diamond (nt = 2.43236), water (nt = 1.335), and BK7 glass (nt = 1.52141). 21

3.4.5 Bidirectional reflection distribution function

The reflection from surfaces with 0 < σ/λ < 1 does not follow the geometric laws of reflection; a more-complex function is required. The BRDF 11,23 defines how light is reflected at an irregular or rough surface, such as shown in Figures 3.11(b) and (c). BRDF varies with wavelength; in the following discussion the monochromatic BRDF at a single wavelength is considered.

BRDF is defined as the ratio of reflected radiance LS(dωs) in a small solid angle dωs along a view vector Ŝ to the incident irradiance EI(dωi) in a small solid angle dωi along the incidence vector Î. Note that an infinitesimally small solid angle dωi is considered, and furthermore that the source surface with radiance LI uniformly fills dωi; hence the source surface orientation is irrelevant, and LI dΩi = LI dωi.
As shown in Figure 3.12, the direction of each of the two small solid angles is defined by the respective azimuth angles ϕi and ϕs and the zenith angles θi and θs. BRDF is therefore a four-dimensional function. Defined as L/E, BRDF has units of [1/sr]:

BRDF = fr(dωi → dωs) = LS(dωs)/EI(dωi) = LS(dωs)/[LI(dωi) cos θi dωi].   (3.32)

Some BRDFs are isotropic when rotated around the normal N̂, yielding a three-dimensional function fr(θi, θs, ϕi − ϕs), whereas others are anisotropic. The theoretical requirements for the BRDF function include positivity:

fr(θi, θs, ϕi, ϕs) ≥ 0;   (3.33)
it must conserve energy:

∫Ω fr(θi, θs, ϕi, ϕs) cos(θs) dωs ≤ 1;   (3.34)

and it must be reciprocal:

fr(dωi → dωs) = fr(dωs → dωi).   (3.35)

A physical BRDF may not be expressible in mathematical format, thus


a number of computationally viable approximations have been proposed.
Often, these approximations do not comply with the theoretical require-
ments for the BRDF statement (e.g., conservation of energy or reciprocal-
ity). The form of such approximations depends on the domain of appli-
cation and the properties of the surface. This surface could be the surface
of a single object, or it could be an aggregation of a number of smaller
objects, e.g., a forest tree canopy comprising many trees. Even for a sin-
gle object, the surface roughness could be an aggregation of smaller peaks
and valleys. The spatial structure in the surface roughness gives rise to the
shape of the BRDF function.
At the one extreme, the mathematically simplest BRDF is diffuse Lambertian reflection, where fr(dωi → dωs) = kd is a constant value. Assuming that the BRDF reflects a fraction ρ of the incident light, then by the conservation of energy property [and following a derivation similar to Equation (2.10)]:

ρ = ∫Ω fr(θi, θs, ϕi, ϕs) cos(θs) dωs
  = kd ∫0^2π dϕs ∫0^π/2 cos(θs) sin θs dθs
  = kd π,   (3.36)

and hence fr,Lambertian(dωi → dωs) = ρ/π.


At the other extreme, the mirror reflection of a light ray Î changes the propagation direction to R̂ (with no light lost in any other direction). The BRDF is zero for all angles, except at θs = θi and ϕs = ϕi ± π. By conservation of energy, it can be shown that the BRDF for a perfect mirror is

fr,Mirror(dωi → dωs) = ρs(θi) δ(θi − θs) δ(ϕi − ϕs ± π)/cos θi,   (3.37)
where δ() is the Dirac delta function, and ρs(θi) is the specular surface reflectance at the angle θi. The presence of the cos θi factor deserves further consideration. From Equations (3.32) and (3.37) the reflected radiance from a mirror is

LS(dωs) = LI(dωi) cos θi dωi ρs(θi) δ(θi − θs) δ(ϕi − ϕs ± π)/cos θi
        = ρs(θi) LI(dωi).   (3.38)

Most real-world surfaces are neither Lambertian nor perfect mirrors.


It is almost impossible to find analytic formulations for the BRDF of sur-
faces with complex micro-scale qualities. Three approaches are used for
modeling BRDF: (1) theoretical models, attempting to model underlying
processes from first principles; (2) descriptive models, attempting to fit
analytical curves to trends in measured data; and (3) data-driven lookup
tables based on ensemble averages of measured data.
BRDF descriptions for the visual and near-infrared spectrum include
the descriptive models by Phong 24 (fit of a cosinen shape), Blinn–Phong, 25
Ward 26 (fit of a Gaussian shape), Lafortune, 27 and Ashikhmin. 28 Theoret-
ical models include the work by Cook–Torrance, 29,30 Torrance–Sparrow, 31
He, 32 and Oren and Nayar. 33
Models developed or used 34 for the infrared spectrum include de-
scriptive models by Conant and LeCompt, 35 Ashikhmin, 28 and Sandford–
Robertson. 36 Theoretical models include models by Priest–Germer, 37 Cail-
lault, 38 Beard–Maxwell, 39 Cook–Torrance, 29,30 and Snyder and Wan. 19
The relatively simple Phong phenomenological BRDF model that includes diffuse and specular components is

fr,Phong = ρd/π + ρs(n + 1) cosⁿα/(2π cos θi),   (3.39)

where the angles are defined in Figure 3.12, ρd is the diffuse reflection constant, n determines the angular divergence of the lobe, and ρs determines the peak value or 'strength' of the lobe. Energy conservation requires that ρs + ρd = ρ, where ρ = Φr/Φi is the total reflected flux divided by the total incident flux. The Phong model is not a mathematically compliant BRDF because for large α the BRDF value could be negative (i.e., the lobe enters below the surface); if the BRDF value is set to zero for such cases, the law of energy conservation is violated. The Phong model therefore fails at large α angles and for small n. The Phong model also does not comply with the requirement for reciprocity. There are several variations on the Phong theme that attempt to achieve increased accuracy. Four typical Phong specular reflection profiles are shown in Figure 3.15.
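A minimal numerical sketch of Equation (3.39) in Python follows (the parameter values are those of one of the Figure 3.15 cases; the function name is illustrative):

import numpy as np

def phong_brdf(alpha, theta_i, rho_d, rho_s, n):
    """Simple Phong BRDF of Eq. (3.39) [1/sr]; alpha is the angle between the
    view vector and the mirror direction, theta_i the incidence angle [rad]."""
    diffuse = rho_d / np.pi
    specular = rho_s * (n + 1.0) * np.cos(alpha)**n / (2.0 * np.pi * np.cos(theta_i))
    return diffuse + specular

# specular lobe profile for the rho_d=0.3, rho_s=0.2, n=50 case of Figure 3.15
alpha = np.radians(np.arange(0.0, 31.0, 5.0))
print(phong_brdf(alpha, np.radians(30.0), 0.3, 0.2, 50))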
Figure 3.15 BRDF calculated by the simple Phong equation for four parameter sets: ρd = 0.6, ρs = 0, n = 0; ρd = 0.4, ρs = 0.1, n = 5; ρd = 0.49, ρs = 0.05, n = 30; and ρd = 0.3, ρs = 0.2, n = 50 (diagrams are not drawn to the same scale).

The Cook–Torrance BRDF model 29 considers the surface to consist of a large number of micro-facets, with mirror reflections off each facet. The theoretically derived model is too involved for detailed coverage here. In summary, the BRDF is given by

fr,Cook–Torrance = ρd/π + ρs Fλ D G/[π(N̂ · Ŝ)(N̂ · Î)],   (3.40)

where ρd + ρs = ρ, D is the distribution function of micro-facet orientations, G is a geometrical attenuation factor to account for masking and shadowing, and Fλ ≈ [1 + (Ŝ · N̂)]^λ models the reflection for each micro-facet. Physics-based models such as the Cook–Torrance are more accurate but also require much more extensive modeling and run-time calculation.
A series of BRDF measurements 18 were made available on the Inter-
net. 40 Figure 3.16 shows a few samples from the database. Comparison
with Figure 3.15 indicates that the Phong model is limited in its ability to
model real materials. In particular, the ‘Red fabric 2’ sample has a signif-
icant amount of back reflection, which the Phong model cannot provide.

3.5 Directional Emissivity

The preceding section clearly demonstrates surface directional reflectance properties. By the same physical mechanisms, emissivity can also have directional properties. For an opaque surface the conservation of flux requires that ε(θS, ϕS) = 1 − ρ(θS, ϕS). Therefore, much that applies to directional reflectance also applies to directional emissivity.
Figure 3.16 BRDF measured in the visual spectrum 18 for four samples: pure rubber, pearl paint, red fabric 2, and fruitwood 241 wood stain (diagrams are not drawn to the same scale).

Ignoring the incidence and reflection vectors Î and R̂ in Figure 3.12, the spectral directional emissivity of a thermal radiator along vector Ŝ is given by

ελ(θS, ϕS) = Lλ(T, θS, ϕS)/Lbbλ(T),   (3.41)

where Lλ(T, θS, ϕS) is the source radiance, and Lbbλ(T) is the blackbody radiation at the same temperature T.
The definition of the spectral hemispherical emissivity is

ελ = Mλ(T)/Mbbλ(T)
   = [∫0^2π ∫0^π/2 LλS(θS, ϕS, T) cos θS sin θS dθS dϕS] / [∫0^2π ∫0^π/2 Lbbλ(θS, ϕS, T) cos θS sin θS dθS dϕS],   (3.42)

and by using Equation (3.41) it follows that the spectral hemispherical emissivity is given by

ελ = [∫0^2π ∫0^π/2 ελ(θS, ϕS) cos θS sin θS dθS dϕS] / [∫0^2π ∫0^π/2 cos θS sin θS dθS dϕS]   (3.43)
   = (1/π) ∫0^2π ∫0^π/2 ελ(θS, ϕS) cos θS sin θS dθS dϕS.   (3.44)
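Equation (3.44) is easily evaluated numerically for a measured or modeled directional emissivity. The Python sketch below integrates an assumed directional emissivity over the hemisphere; the cosine-power form is a made-up example, not measured data:

import numpy as np
from scipy import integrate

def eps_dir(theta_s, phi_s):
    """Hypothetical directional emissivity: nearly constant at small angles,
    rolling off slowly toward grazing incidence (made-up example)."""
    return 0.9 * np.cos(theta_s)**0.1

# Eq. (3.44): hemispherical emissivity by integration over the hemisphere
integrand = lambda theta, phi: eps_dir(theta, phi) * np.cos(theta) * np.sin(theta)
val, _ = integrate.dblquad(integrand, 0.0, 2.0 * np.pi,
                           lambda phi: 0.0, lambda phi: np.pi / 2.0)
print(val / np.pi)   # approximately 0.86 for this example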
π 0 0

The directional emissivity of a smooth, metallic surface can be calculated from the Fresnel reflectance equations, using τ + ρ + ε = 1 and noting that τ = 0 except for very thin layers. Figure 3.17 shows the directional reflectivity and emissivity of a smooth, gold surface, calculated from refractive index data. 21 It is evident that the surface emissivity is approximately constant for incidence angles up to 0.8 rad; the angular variation at larger incidence angles could increase (at longer wavelengths) but eventually decreases to zero at grazing angles. Reviewing the Fresnel reflectance for dielectrics in Figure 3.14, it is evident that the dielectric surface emissivity (ε = 1 − ρ) stays approximately constant up to incidence angles of 1 rad, whereafter it decreases to zero at a π/2 rad incidence angle.

Figure 3.17 Directional reflectance and emissivity for a smooth, gold surface at wavelengths of 0.5, 1, 4, and 10 µm, as a function of incidence angle.
Dielectrics and metals both exhibit directional emissivity, as shown above. It is evident from Figure 3.17 that at longer infrared wavelengths, the hemispherical emissivity will not differ significantly from the directional emissivity at normal incidence angle. 41 The ratio of hemispherical emissivity to directional emissivity r = ε/ε(0, 0) for metals is rarely outside 1 ≤ r ≤ 1.3 except at large incidence angles (see Figure 3.17). For nonconducting dielectrics the ratio is generally 0.95 ≤ r ≤ 1. It is clear from Figure 3.17 that this generalization does not hold in the visual spectral band. Note that for a Lambertian radiator r = 1.

3.6 Directional Reflectance and Emissivity in Nature

In Section 3.4 it is shown how (random) surface roughness affects the surface reflectance. Surfaces with directionally structured roughness may also exhibit directional reflectance and emissivity. An example of such a structure is a field with rows of corn, shown in Figure 3.18. The soil has emissivity ε1, whereas the corn has emissivity ε2. In the along-row direction (A), the projected area of the corn is small, and the observer sees mostly ground. In the cross-row direction (C), at low-elevation angles, only the corn is visible and not the ground. Directional emissivity is observed in environments such as crop lands, snow, ground quartz sand, i.e., any surface with one or more materials in an ordered spatial structure.

Figure 3.18 Structured scene content in a corn field: soil with emissivity ε1 and rows of corn with emissivity ε2.

3.7 The Sun

The sun plays a major role in optical signatures: by surface reflectance at shorter wavelengths, and by increased self-exitance at longer wavelengths because of increased surface temperature. In this section a simple model for reflected sun radiance is derived. The distance between the sun and the earth is approximately Rsun = 149 × 10⁶ km. The sun's diameter is approximately 1.39 × 10⁶ km. Simple calculation shows that the sun's angular size, subtended from the earth, is approximately 0.534 deg (68.3 µsr). The sun's surface radiance can be modeled by a thermal radiator with a temperature Ts of 5800 K to 5900 K, even though the temperature inside the sun is much higher.
The solar irradiance on an object on the earth's surface is given by

Eλsun = εs Lbbλ(Ts) Asun τs cos θi / R²sun,   (3.45)

where εs ≈ 1 is the emissivity of the sun's surface, Ts is the sun's surface temperature, τs is the atmospheric transmittance between the sun and the surface, and θi is the angle between the surface normal and the sun vector. The reflected sun radiance from a perfectly Lambertian surface is then given by Lλ = ρd Eλsun/π, where ρd is the surface diffuse reflectance function. The reflected radiance is then given by

Lλ = εs Lbbλ(Ts) Asun τs ρd cos θi / (π R²sun)   (3.46)
   = ψ εs Lbbλ(Ts) τs ρd cos θi,   (3.47)

where ψ = Asun/(πR²sun) = 2.1757 × 10⁻⁵ [sr/sr] follows from the geometry. The sun geometry factor is an inverse form of the view factor described in Section 2.8.
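A numerical sketch of Equation (3.47) follows, with the Planck radiance written out explicitly; the atmospheric transmittance τs and diffuse reflectance ρd values below are illustrative assumptions only:

import numpy as np

def planck_radiance(wl, T):
    """Blackbody spectral radiance [W/(m2.sr.um)]; wl in [um], T in [K]."""
    c1, c2 = 1.191042e8, 1.4387752e4   # radiation constants for these units
    return c1 / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))

psi = 2.1757e-5              # sun geometry factor Asun/(pi*Rsun**2) [sr/sr]
eps_s = 1.0                  # sun surface emissivity
tau_s = 0.6                  # assumed sun-to-surface atmospheric transmittance
rho_d = 0.2                  # assumed diffuse surface reflectance
theta_i = np.radians(30.0)   # sun incidence angle

wl = 0.55                    # wavelength [um]
L = psi * eps_s * planck_radiance(wl, 5900.0) * tau_s * rho_d * np.cos(theta_i)
print(L)                     # reflected spectral radiance [W/(m2.sr.um)], Eq. (3.47)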

Bibliography
[1] Dereniak, E. L. and Boreman, G. D., Infrared Detectors and Systems,
John Wiley & Sons, New York (1996).

[2] Boyd, R. W., Radiometry and the Detection of Optical Radiation, John Wiley & Sons, New York (1983).

[3] Wikipedia, “Planck's Law,” http://en.wikipedia.org/wiki/Planck's_law.

[4] Toyozawa, Y., Optical Processes in Solids, Cambridge University Press,


Cambridge, UK (2003).

[5] Wolfe, W. L. and Zissis, G., The Infrared Handbook, Office of Naval
Research, US Navy, Infrared Information and Analysis Center, Envi-
ronmental Research Institute of Michigan (1978).

[6] SpectralCalc, GATS Inc., “Radiance: Integrating the Planck Equation,”


http://www.spectralcalc.com/blackbody/integrate_planck.html.

[7] Gradshteyn, I. S. and Ryzhik, I. M., Tables of Integrals, Series and Prod-
ucts, Academic Press, New York (1981).

[8] Mohr, P. J., Taylor, B. N., and Newell, D. B., “CODATA recommended
values of the fundamental physical constants: 2010,” Rev. Mod.
Phys. 84(4), 1527–1605 (2012) [doi: 10.1103/RevModPhys.84.1527].

[9] SciPy, “SciPy Reference Guide: Constants (scipy.constants),” http://docs.scipy.org/doc/scipy/reference/constants.html#codata2010.

[10] Pyradi team, “Pyradi Radiometry Python Toolkit,” http://code.google.com/p/pyradi.

[11] Palmer, J. M. and Grant, B. G., The Art of Radiometry, SPIE Press,
Bellingham, WA (2009) [doi: 10.1117/3.798237].

[12] Wyatt, C. L., Radiometric Calibration: Theory and Methods, Academic


Press, New York (1978).

[13] Lienhard IV, J. H. and Lienhard V, J. H., A Heat Transfer Textbook,


Phlogiston Press, Cambridge, MA (2003).

[14] Spectral Sciences Inc. and U. S. Air Force Research Laboratory,


“MODTRAN,” modtran5.com.

[15] Wen, C. and Mudawar, I., “Modeling the effects of surface roughness
on the emissivity of aluminum alloys,” International Journal of Heat and
Mass Transfer 49, 4279–4289 (2006).

[16] Black, W. Z. and Schoenhals, R. J., “A study of directional radiation


properties of specially prepared V-groove cavities,” Journal of Heat
Transfer 90, 420–428 (1968).

[17] Nicodemus, F. E., “Normalization in Radiometry,” Applied Optics 12,


2960–2973 (1973).

[18] Matusik, W., Pfister, H., Brand, M., and McMillan, L., “A Data-Driven
Reflectance Model,” ACM Transactions on Graphics 22(3), 759–769 (July
2003).

[19] Snyder, W. C. and Wan, Z., “BRDF Models to Predict Spectral Reflectance and Emissivity in the Thermal Infrared,” IEEE Transactions on Geoscience and Remote Sensing 36, 214–225 (1998).

[20] Hecht, E., Optics, 4th Ed., Addison Wesley, Boston, MA (2002).

[21] Polyanskiy, M., “RefractiveIndex Info,” http://refractiveindex.info/.

[22] Born, M. and Wolf, E., Principles of Optics, 7th Ed., Pergamon Press,
Oxford, UK (2000).

[23] Nicodemus, F. E., Richmond, J. C., Hsia, J. J., Ginsburg, I. W., and
Limperis, T., “Geometrical considerations and nomenclature for re-
flectance,” NBS monograph 160, National Bureau of Standards (Octo-
ber 1977).

[24] Phong, B. T., “Illumination for computer generated pictures,” Communications of the ACM 18(6), 311–317 (1975).

[25] Blinn, J. F., “Models of light reflection for computer synthesized pic-
tures,” 4th Annual Conference on Computer Graphics and Interactive Tech-
niques 192 (1977) [doi: 10.1145/563858.563893].

[26] Ward, G. J., “Measuring and modeling anisotropic reflection,” SIGGRAPH 92 (1992) [doi: 10.1145/133994.134078].

[27] Lafortune, E., Foo, S., Torrance, K., and Greenberg, D., “Non-linear
approximation of reflectance functions,” SIGGRAPH 97 (1997).

[28] Ashikhmin, M. and Shirley, P., “An Anisotropic Phong BRDF Model,”
Journal of Graphics Tools 5, 25–32 (2000).

[29] Cook, R. and Torrance, K., “A reflectance model for computer graph-
ics,” SIGGRAPH 15, 301–316 (1981).

[30] Cook, R. L. and Torrance, K. E., “A Reflectance Model for Computer


Graphics,” ACM Transactions on Graphics 1, 1, 7–24 (January 1982).

[31] Torrance, K. and Sparrow, E., “Theory for Off-Specular Reflection from Roughened Surfaces,” J. Optical Soc. America 57, 1105–1114 (1967).

[32] He, X., Torrance, K., Sillon, F., and Greenberg, D., “A comprehensive
physical model for light reflection,” Computer Graphics 25, 175–186
(1991).

[33] Nayar, S. and Oren, M., “Generalization of the Lambertian Model and
Implications for Machine Vision,” International Journal on Computer Vi-
sion 14, 227–251 (1995).

[34] Brady, A. and Kharabash, S., “Further Studies into Synthetic Im-
age Generation using CameoSim,” Tech. Rep. DSTO-TR-2589, Intel-
ligence, Surveillance and Reconnaissance Division Defence Science
and Technology Organisation (2011).

[35] Dudzik, M. C., Ed., The Infrared and Electro-Optical Systems Handbook:
Electro-Optical System Design, Analysis and Testing , Vol. 4, ERIM and
SPIE Press, Bellingham, WA (1993).

[36] Sandford, B. P. and Robertson, D. C., “Infrared Reflectance Properties of Aircraft Paints,” tech. rep., Phillips Laboratory, Geophysics Directorate/G-POA (August 1994).

[37] Priest, R. G. and Germer, T. A., “Polarimetric BRDF in the Microfacet


Model: Theory and Measurements,” 2000 Meeting of the Military Sens-
ing Symposia Specialty Group on Passive Sensors (2002).

[38] Caillault, K., Fauqueux, S., Bourlier, C., and Simoneau, P., “Infrared
multiscale sea surface modeling,” Proc. SPIE 6360, 636006 (2006) [doi:
10.1117/12.689720].
[39] Maxwell, J., Beard, J., Weiner, S., and Ladd, D., “Bidirectional reflectance model validation and utilization,” Tech. Rep. AFAL-TR-73-303, Environmental Research Institute of Michigan (ERIM) (October 1973).
[40] Matusik, W., “MERL BRDF Database,” http://www.merl.com/brdf/.

[41] Incropera, F. P., De Witt, D. P., Bergman, T. L., and Lavine, A. S.,
Fundamentals of Heat and Mass Transfer , 6th Ed., John Wiley & Sons,
New York (2007).

[42] Colors of heated steel, http://www.sizes.com/materls/colors_of_heated_metals.htm.

[43] Wikipedia, “Planckian locus,” http://en.wikipedia.org/wiki/Planckian_locus.

[44] Wikipedia, “CIE 1931 color space,”


http://en.wikipedia.org/wiki/CIE_1931_color_space.

[45] Kirk, R., “Standard Colour Spaces,” Technical Note FL-TL-TN-0139-StdColourSpaces, FilmLight Digital Film Technology (2007).
[46] Wikipedia, “Standard illuminant,”
http://en.wikipedia.org/wiki/Standard_illuminant.

[47] Wikipedia, “White point,” http://en.wikipedia.org/wiki/White_point.

[48] Pyradi team, “Pyradi data,” https://code.google.com/p/pyradi/source/browse.

[49] Her, T., Finlay, R. J., Wu, C., Deliwala, S., and Mazur, E., “Microstruc-
turing of silicon with femtosecond laser pulses,” Applied Physics Let-
ters 73, 1673–1675 (1998).
[50] Brown, R. J. C., Brewer, P. J., and Milton, M. J. T., “The physical
and chemical properties of electroless nickel-phosphorus alloys and
low reflectance nickel-phosphorus black surfaces,” Journal of Materials
Chemistry 12, 2749–2754 (2002).

[51] Xi, J.-Q., Schubert, M. F., Kim, J. K., Schubert, E. F., Chen, M., Lin, S.-Y., Liu, W., and Smart, J. A., “Optical thin-film materials with low refractive index for broadband elimination of Fresnel reflection,” Nature Photonics 1, 176–179 (2007) [doi: 10.1038/nphoton.2007.26].

[52] Tsakalakos, L., Balch, J., Fronheiser, J., Shih, M.-Y., LeBoeuf, S. F., Pietrzykowski, M., Codella, P. J., Korevaar, B. A., Sulima, O., Rand, J., Davuluru, A., and Rapol, U., “Strong broadband optical absorption in silicon nanowire films,” Journal of Nanophotonics 1 (2007) [doi: 10.1117/1.2768999].

[53] University of California, Santa Barbara, “MODIS UCSB Emissivity Library,” http://www.icess.ucsb.edu/modis/EMIS/html/em.html.

Problems

3.1 Calculate the color coordinates and show the approximate colors
for blackbodies at the following temperatures: 3200 K, 5000 K,
6500 K, and 9000 K. [4]
3.2 Calculate and plot graphically (more than 10 data points) the ra-
diance from the surface of a Planck thermal radiator at a temper-
ature of 1000 K, over the spectral range of 3–5 µm, as follows: (1)
in the wavelength domain (in units of [W/(m2 ·sr·µm)]), (2) in the
wavenumber domain (in units of [W/(m2 ·sr·cm−1 )]), and (3) con-
vert between the results obtained in (1) and (2) above, using the
conversion defined in Section 2.3.3. [6]
3.3 Calculate the amount of heat energy in joules flowing into a beef
steak on an outdoor barbecue grid from the moment it is put onto
the grid, until it is ready to eat. You may consult any source
(except fellow students) but provide the reference to the source.
Clearly state and motivate all assumptions. Apply the golden
rules to the problem, i.e., dimensional analysis, developing a good
mathematical definition, and drawing detailed diagrams of the
problem statement. Include all code or numerical files used in
the calculation. [6]
3.4 Ironsmiths use the color of a steel sample to estimate the temper-
ature of the sample. Subjective descriptions, such as ‘dull cherry-
red,’ are used to describe the temperature, as in the following ta-
ble: 42
Temperature judged by color

Color                                Halcomb [°C]   Howe [°C]    White [°C]
Red heat, visible in the dark             400          470            .
Red heat, visible in the twilight         474            .            .
Red heat, visible in the daylight         525          475          532
Red heat, visible in the sunlight         581            .          556
Dark red                                  700      550–625          635
Dull cherry-red                           800            .          677
Cherry-red                                900          700          746
Bright cherry-red                        1000          850          843
Orange-red                               1100            .          899
Orange-yellow                            1200     950–1000          941
Yellow-white                             1300         1050          996
White welding heat                       1400         1150         1079
Brilliant white                          1500            .         1205
Dazzling white (bluish-white)            1600            .            .
The objective with this investigation is to confirm the information
in this table and to seek a more ‘scientific’ manner to link color to
temperature.

3.4.1 Use the given color-matching function data to calculate the xy


chromaticity color coordinates for a thermal radiator (the Planck-
ian locus), with the following temperatures: T ∈ {500, 1000, 1500,
2000, 2500, 3000, 4000, 6000, 10000, 1×1010 } K. [4]
3.4.2 Confirm your calculated color coordinates against the values shown
in Figure A.1. Consult a properly colored chromaticity diagram 43,44
and comment on the ironsmith color table given above. [4]
3.4.3 Elaborate on the color referred to as the ‘white point.’ 45–47 How
stable is the color white? Give examples of acceptable white colors.
On the grounds of your research, comment on the eye’s value as a
scientific color instrument. [2]

3.5 Refer to Section 2.10.5 and repeat the calculations there with your
own code. The data is available on the pyradi website. 48

3.5.1 Calculate the color coordinates of the six samples when illumi-
nated by the four source spectral radiances.
Calculate and plot your own versions of the graphs in Figures 2.17
and 2.18. Enter the color coordinate values into the table below
and comment on your observations. Apply the Golden Rules
(Chapter 10) in the derivation of the solution. [9]
                 Tomato   Lettuce   Prune   Leaf   Glove   Paper
Fluorescent
Sunlight
Incandescent
Sodium lamp
3.5.2 Plot the color coordinate locus for each sample when illuminated
by the different sources onto the color CIE diagram (Figure 2.19).
[4]
3.5.3 Explain why the three samples’ color coordinates cluster so closely
together when observed under the sodium lamp. Near which
monochromatic wavelength does this cluster occur? Why? When
buying paint or clothes, under which source illumination can you
best compare colors? [2]

3.6 Calculate the total flux flowing between two circular disks with
diameter 1 m, separated by 10 m. The first object has a tempera-
ture of 450 K and an emissivity of 0.5, and the second object has
a temperature of 450 K and an emissivity ranging from 0 to 1.0 in
steps of 0.1. Derive a mathematical equation for the net flux flow
and then plot the values. [6]
3.7 Calculate the total flux flowing between two circular disks with di-
ameter 1 m, separated by 10 m. The first object has a temperature
of 450 K and an emissivity of 0.5, and the second object has an
emissivity of 0.5 and a temperature ranging from 300 K to 600 K
in steps of 50 K. Derive a mathematical equation for the net flux
flow and then plot the values. [6]
3.8 An opaque object has a diffuse reflectance of 1.0 for wavelengths
from 0 to 0.5 µm; and 0 for wavelengths from 0.5 µm to infinity.
The object is illuminated by a source with temperatures ranging
from 1000 to 4000 K in steps of 1000 K. The object’s own inter-
nal temperature ranges from 1000 to 4000 K in steps of 1000 K.
Assume Planck law radiation in both cases. Calculate and plot
the object’s color coordinates for all of the temperature combina-
tions. Validate your results against Figure A.1. Comment on your
observations. [10]
3.9 In a thin, transparent dielectric medium, the incident flux is re-
flected between the medium’s surfaces, as shown below. The
medium itself has an internal transmittance τi .
Diagram: a thin, transparent dielectric plate with internal transmittance τi; the incident flux undergoes successive surface transmissions τ and reflections ρ at the top and bottom surfaces.

3.9.1 Show that the reflection and transmittance magnitudes of the top
and bottom surfaces are the same. [4]
3.9.2 Derive an equation for the reflectance (sum of all components) and
transmittance of the medium. [6]

3.10 Calculate the solar irradiance at the top of the atmosphere. [3]
Calculate the total solar flux absorbed by the earth. Assume the
earth’s diameter to be 12,756.8 km. The earth has an average
albedo (reflectance) of 0.3. [5]
3.11 In Section 3.7 a model is developed for the reflected sunlight from
an object on the earth. Confirm Equation (3.47) by applying the
Golden Rule of dimensional analysis. [4]
3.12 Use Figure 3.4 to estimate the percentage of radiance in (1) the 3–
5-µm spectral band for a jet tailpipe, and (2) the 0.4–0.7-µm visual
spectral band for a tungsten lamp. [2]
Repeat, but calculate more accurately using a spectral integral of
the Planck law between the two wavelengths. [4]
3.13 Derive an equation for reflected sun radiance, similar to Equa-
tion (3.47), but for a diffuse surface modeled with the Phong BRDF
function. [2]
3.14 A hemispherical dome of material with emissivity 1 and temper-
ature T1 encloses a small component with area A2 , emissivity 2 ,
and temperature T2 (Figure 3.8). The objective with this investiga-
tion is to consider the net radiance emanating from area A2 . Both
materials are opaque with Lambertian surface properties.

3.14.1 Consider the enclosing area A1 . What effect do the geometrical


shape of A1 and the radius R have on the net radiance L2 from
A2 ? [2]
3.14.2 For emissivity 1 = 0.9, temperature T1 = 300 K and temperature
T2 = 350 K, calculate the radiance L2 over all wavelengths, for
emissivity 2 from 0 to 1 in steps of 0.1. [4]

3.15 A hole with diameter D and depth d is drilled into a metal block,
as shown in Figure 3.9. Assume the drill bit point angle to be
exactly 90 deg, forming a hole as shown in the figure. The surface
emissivity of the block and inside the hole is 0.2.

3.15.1 Calculate the emissivity of a hole of d/D = 5, for the ray shown, at
an angle θ ranging from 0 deg to 90 deg at 5-deg intervals. Ignore
the Fresnel reflection effect. [4]
3.15.2 Calculate the emissivity of a hole of d/D = 5, for the ray shown, at
an angle θ ranging from 0 deg to 90 deg at 5-deg intervals. Include
the Fresnel reflection effect. [4]
3.15.3 Assuming the block’s temperature to be 300 K, calculate the ra-
diance in the hole opening for an angle θ ranging from 0 deg to
90 deg at 5-deg intervals. [2]

3.16 Low-reflectance (i.e., high emissivity) surfaces can be constructed


by spatial structures that ‘trap’ the light by multiple reflections.
Such light traps typically have conical shapes (figure below 49 ),
wedge structures, porous surfaces, 50 or fiber structures. 51,52

3.16.1 Derive a mathematical description of the light path into a one-


dimensional wedge light trap. Elaborate on the number of bounces
and the mathematical requirements for the light wave never to
escape from the trap. Consider a geometrical structure in your
solution rather than a purely mathematical analysis. [4]
3.16.2 Design a one-dimensional wedge-shaped light trap to achieve an
emissivity of 0.98 at an incidence angle of 5 deg. The available
surface finish has a Lambertian emissivity of 0.5. [4]
3.16.3 For the above light trap, calculate the emissivity for angles from
normal incidence up to 90 deg at 5-deg intervals. Angles are de-
fined in the plane of the paper. [4]
3.16.4 Comment on the use of a two-dimensional wedge structure as a
surface for a laboratory blackbody radiator. What are the advan-
tages and disadvantages of such a surface in this application? [2]

3.17 Implement Equation (3.23) in a computer program and determine


how many summation terms are required to achieve an accuracy
of better than 1% from the ideal solution for the following three
spectral bands {1–2 µm, 3–5 µm, 10–12 µm} and a temperature
of 700 K. Comment on the observed differences in the required
number of terms. [6]
3.18 Select and plot spectral emissivity data from the MODIS UCSB
Emissivity Library 53 for at least two materials in each of the fol-
lowing four classes: (1) water, ice, and snow; (2) soils and minerals;
(3) vegetation; and (4) synthetic materials. Comment on the sim-
ilarities and differences. Is there a significant difference between
emissivity values for synthetic and natural materials? [6]
3.19 Define Snell’s law and show mathematically why the light ray
‘bends’ on an interface between different refractive indices. Ex-
plain what will happen in a medium with a gradual change in
refractive index. See Section 5.5.8 for a hint. [4]
Chapter 4
Optical Media

The farther reason looks, the greater is the haze in which it loses itself.
Johann Georg Hamann

4.1 Overview

In the context of the source–medium–sensor system model, the medium is everything between the source and the sensor. The optical medium affects the radiance field by flux attenuation, flux amplification (in the case of lasers), flux increase (path radiance), and refractive wave distortions (e.g., turbulence). The medium effects can be either static and temporally constant, or temporally and spatially dynamic, such as in turbulent flow. In keeping with the theme of the book, this chapter investigates the effects of the atmosphere as a component in the system rather than the physical processes in the atmosphere; these are covered elsewhere. 1–6
The atmospheric index of refraction varies with pressure and tem-
perature. The atmospheric air movements result in eddy currents with
varying temperature and pressure, resulting in cells of varying indices of
refraction. These variations cause a number of different effects, depending
on the magnitude of the variation, the physical area of the variation, the
nature of the optical flux, and so forth. Some of the effects that can occur
are beam steering, arrival-angle variations, scintillation (variation in sig-
nal strength), and visual mirage effects. This chapter considers only static
media; turbulence is well documented elsewhere. 7–11
Flux transfer through a medium is modeled with the radiative trans-
fer equation (RTE). 8,12,13 The full RTE is complex, and considerable ‘engi-
neering’ simplification is made here to convey concepts and principles. A
comprehensive discussion is beyond the scope of this book. This chapter
initially considers a trivially simple RTE for homogeneous media and then
adds more complexity to account for path radiance and inhomogeneous
media. The atmosphere 2,8 is an important component in most optical and


infrared systems and is briefly reviewed.

4.2 Optical Media

4.2.1 Lossy media

In its simplest form, a lossy medium can be modeled by a spectrally varying magnitude τ, called the 'transmittance.' Transmittance is the ratio
of the source energy reaching the receiver with the medium present to the
energy reaching the receiver with no medium present. In general, medium
transmittance is spectrally selective and can only be meaningfully defined
and determined at a specific wavelength. It is implied throughout this
section that all variables are spectrally varying.
The losses in the medium are mainly attributable to scattering and
absorption processes. The degree of attenuation by a medium is described
by the attenuation coefficient γ with units [m−1 ]. In the derivation that fol-
lows it is assumed that the absorbed or scattered power is not re-radiated
in the optical path. Also, in most cases the fractional change in radiance is
linearly proportional to distance in the medium. 14
Consider an arbitrary medium contained between x = 0 and x = R. The fractional change in radiance L along a thin layer of the medium dx is given by

dL/L = −γ dx.   (4.1)

Note that γ generally varies along the path and can be described as a function of the path variable x:

dL/L = −γ(x) dx.   (4.2)

Integrating this equation along the path between 0 and R finds

∫_L(0)^L(R) dL/L = −∫_0^R γ(x) dx
L(R)/L(0) = exp[−∫_0^R γ(x) dx].   (4.3)

If the path through the medium is homogeneous, γ(x) is constant, leading to Bouguer's law:

L(R)/L(0) = τ(R) = e^(−γR),   (4.4)
where γ is the medium's attenuation coefficient or extinction coefficient with units [m⁻¹], and R is the distance through the medium in [m]. The attenuation coefficient is related to the imaginary component of the complex index of refraction (see Sections 3.4.3 and 5.5.8).

If the medium along the path is not homogeneous, and the variation of γ along the path is known, the attenuation coefficient can be written as a normalizing constant γo multiplied by a range-dependent function f(x), γ(x) = γo f(x). The function f(x) represents the profile of γ along the optical path. The transmittance of the path in an inhomogeneous medium is then

τ(R) = exp[−γo ∫_0^R f(x) dx]
     = e^(−γo R̄).   (4.5)

The integral

R̄ = ∫_0^R f(x) dx   (4.6)

represents the horizontal distance in a homogeneous medium (with γ = γo) for which the attenuation is the same as the actual path of length R through the inhomogeneous path [with γ = γ(x)]. R̄ is also known as the equivalent path length, given γo.
The attenuation usually comprises two independent processes: scat-
tering and absorption. Particles and molecules in the medium may scatter
the photons, changing the photon’s direction, and thereby ‘removing’ the
photon from the flux. Photons may also be absorbed by particles and
molecules, raising the particles’ energy level but also removing the photon
from the flux. The two processes are modeled by two attenuation coeffi-
cients,
γ = σ + α, (4.7)
where σ denotes the scattering attenuation coefficient in [m−1 ], and α de-
notes the absorption attenuation coefficient in [m−1 ].
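The equivalent path length concept of Equations (4.5) and (4.6) can be illustrated with a short Python sketch; the exponential profile f(x) below is an assumed example, loosely resembling a density decrease along a slant path:

import numpy as np
from scipy import integrate

gamma0 = 0.2                          # attenuation coefficient at the reference [1/km]
f = lambda x: np.exp(-x / 8.0)        # assumed profile of gamma along the path, x in [km]

R = 10.0                              # actual path length [km]
Rbar, _ = integrate.quad(f, 0.0, R)   # equivalent path length, Eq. (4.6)
tau = np.exp(-gamma0 * Rbar)          # transmittance, Eq. (4.5)
print(Rbar, tau)                      # Rbar < R because gamma decreases along the path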

4.2.2 Path radiance

An optical medium can add radiance to the optical path in addition to the source radiance already present along the path. 1 Path radiance is the phenomenon whereby the path injects optical flux along the line of sight. Two common examples of path radiance are the bright appearance of fog when irradiated by the sun from above and the blue appearance of the sky on a cloudless day. In both cases sunlight is scattered into the sensor's field of view, adding flux along the line of sight. The following derivation is based on the two-flux Kubelka–Munk theory 15,16 developed for the optical properties of paint, but the principles apply to any medium.

Figure 4.1 Definition of path radiance geometry variables: radiance propagates from the background (Lb0) and object (L0, Lt0) at x = 0 to the observer (LR, LtR, LbR) at x = R; over each thin layer dx the radiance Lx is attenuated by γLx dx and augmented by scattered flux σx Lσ and thermal emission εx Lth.
Path radiance occurs from the medium’s thermal self-exitance or from
flux from another source that the medium scatters into the radiance field.
The total flux is the sum of the source flux and all path radiance contribu-
tions.
The approach set out by Duntley 17 and others 18–20 is simplified by
omitting some scattering sources, but it is extended here to include thermal
self-exitance along the path. All of the variables defined here are strongly
dependent upon wavelength even though it is not indicated as such. Only
the conceptual model development and the results are shown here; for a
detailed mathematical analysis see the sources.
Consider the path geometry shown in Figure 4.1. The 'line of sight' is defined as the direction from LR toward the source L0. The optical field has a radiance L0 at the source, and after propagating over a distance R through the medium, it has a radiance LR. At distance x along the path the field has a radiance Lx, and after propagating a further distance dx, the radiance is Lx+dx. Source radiance passing through dx will diminish because of attenuation in the medium, as discussed in Section 4.2.1.
The radiance Lx will increase due to the flux scattered into the line of sight by an amount σx Lσ, where σx is the scattering coefficient in [m⁻¹], and Lσ is the external source radiance. The radiance Lx will increase furthermore by the thermal exitance of the medium by an amount εx Lth, where εx is the medium emission coefficient with units [m⁻¹], and Lth is the thermal radiance of a blackbody at the same temperature as the medium. Note that this emission coefficient εx gives rise to path radiance over the distance dx in the medium, but it is not the same as emissivity as discussed in Section 3.2.
The medium attenuation and path radiance terms can be combined as a system of two interdependent differential equations for the forward and backward directions (a simplified version of the Duntley equations 17):

dLx+dx/dx = −γx Lx+dx + σx Lσ + εx Lth   (4.8)

and

−dL(x+dx)−dx/dx = −γx Lx + σx Lσ + εx Lth.   (4.9)

Assume that each variable γx, σx, and εx can be written as a normalizing constant multiplied by a common range-dependent function f(x), in the form γx = γ0 f(x), as per Equation (4.6). Now solve the simultaneous differential equations comprising Equations (4.8) and (4.9) to obtain the RTE for flux transfer together with path radiance:

LR = L0 e^(−γ0 R) + (1 − e^(−γ0 R)) (σ0 Lσ + ε0 Lth)/γ0.   (4.10)

The solution in Equation (4.10) comprises two parts: the radiance at the beginning of the path multiplied by the atmospheric transmittance along the path (as found in Section 4.2.1), plus the path radiance term. The path radiance contribution comprises the flux scattered into the line of sight and the self-emission along the line of sight. Recall from Section 2.3.4 that 1 = α + τ + ρ; therefore, for any medium with no reflection (a reasonable assumption for a mainly transmissive medium), α = 1 − τ. The term 1 − exp(−γo R) now appears as an 'emissivity' factor multiplying the medium radiance.
By applying Kirchhoff's law α = ε, and on condition that the scattering coefficient for attenuation and path radiance share the same value, Equation (4.7) can be written γ0 = σ0 + ε0. Equation (4.10) then becomes

LR = L0 e^(−(σ0+α0)R) + (1 − e^(−(σ0+α0)R)) (σ0 Lσ + ε0 Lth)/(σ0 + ε0).   (4.11)

If now σ0 = 0, i.e., for a clear atmosphere in the far-infrared spectral region,

LR = L0 e^(−α0 R) + Lth (1 − e^(−α0 R)),   (4.12)



and it is evident that path radiance can be approximated by the medium


radiance (as for any opaque surface radiator) multiplied by an atmospheric
‘emissivity’ (equal to unity minus the medium transmittance). This inter-
pretation of ‘emissivity’ and path radiance is appropriate for the thermally
radiated flux from the medium: the concepts of emissivity and radiance
can be understood in physical terms. Path radiance caused by scattering is
not comprehensible in terms of true emissivity and (thermal) radiance; in
this case it is merely a model, not a physical reality. The conditions under
which this simplification is valid are investigated in more detail later in
this chapter.
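A numerical sketch of Equation (4.12) follows (Python; the absorption coefficient, temperatures, and wavelength are assumed values). It shows the apparent radiance tending toward the medium radiance Lth as the path becomes optically thick:

import numpy as np

def planck_radiance(wl, T):
    """Blackbody spectral radiance [W/(m2.sr.um)]; wl in [um], T in [K]."""
    c1, c2 = 1.191042e8, 1.4387752e4
    return c1 / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))

wl = 10.0                            # wavelength [um]
alpha0 = 0.1                         # assumed absorption coefficient [1/km]
L0  = planck_radiance(wl, 320.0)     # source radiance (320-K surface)
Lth = planck_radiance(wl, 290.0)     # medium radiance (290-K air)

for R in [0.1, 1.0, 10.0, 100.0]:    # path length [km]
    tau = np.exp(-alpha0 * R)
    LR = L0 * tau + Lth * (1.0 - tau)    # Eq. (4.12)
    print(R, LR)                     # LR tends to Lth for an optically thick path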

4.2.3 General law of contrast reduction

Medium attenuation and path radiance reduce the apparent contrast of an


object against a background radiance. When observing mountains on the
horizon, one observes different mountains to have varying contrast. On a
clear day, the mountain stands out clearly, and on a hazy day, the moun-
tain disappears in the haze. This section investigates general equations
describing the effects of a medium on the apparent contrast of an object in
an image.
There are several different definitions of contrast. For the purpose of this section, we follow the definition 17 used for visual observations. The contrast at the object (x = 0) and the apparent contrast at the observer (x = R) are given by (from the definition of contrast)

C0 = (Lt0 − Lb0)/Lb0   (4.13)

and

CR = (LtR − LbR)/LbR,   (4.14)

where the radiance terms are defined in Figure 4.1; the subscripts denote: t the object of interest (target), b the background behind the object, 0 the range at the object, and R the range at the observer. Furthermore, from the definition of transmittance,

LtR − LbR = (Lt0 − Lb0)e^(−γR).   (4.15)
Combining the last three equations, the general law of contrast reduction is obtained as

τc = CR/C0 = (Lb0/LbR)e^(−γR),   (4.16)

where Lb0 is the background radiance at the object, and LbR is the apparent background radiance at the observer, both along the line of sight from the observer to the object. The contrast transmittance is the fraction with which the atmosphere reduces the inherent (close-range) contrast in the scene. Equation (4.16) has a form similar to Bouguer's law but with a background radiance scaling modifier. Contrast transmittance in the atmosphere is considered in more detail in Section 4.6.9.
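The following Python sketch evaluates Equation (4.16) for assumed radiance values, with the apparent background radiance modeled using the path radiance result of Equation (4.10):

import numpy as np

gamma = 0.3            # assumed attenuation coefficient [1/km]
Lt0, Lb0 = 5.0, 4.0    # assumed target and background radiance at the object
Lpath = 4.5            # assumed medium radiance term (sigma0*Lsigma + eps0*Lth)/gamma0

C0 = (Lt0 - Lb0) / Lb0                       # inherent contrast, Eq. (4.13)
for R in [1.0, 5.0, 10.0]:                   # range [km]
    tau = np.exp(-gamma * R)
    LbR = Lb0 * tau + Lpath * (1.0 - tau)    # apparent background radiance, Eq. (4.10)
    tau_c = (Lb0 / LbR) * tau                # contrast transmittance, Eq. (4.16)
    print(R, C0 * tau_c)                     # apparent contrast CR, Eq. (4.14)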

4.2.4 Optical thickness

The optical thickness β = γR of the medium is defined as the exponent in


Bouguer’s law. If the optical thickness is small, the transmittance is high
and the medium is clear. On the other hand, if β is high, transmittance
is low and the medium is opaque. It follows that optical thickness is a
measure of the opacity of the medium. Optical thickness is therefore an
indicator of the ‘emissivity’ term (1 − e−γR ), as derived in the previous
section.

4.2.5 Gas radiator sources

Optical thickness provides another important insight. Section 3.2.1 states that for gases ρ = 0, and Section 4.2.2 shows that ε = 1 − τ. Hence, an optically thick gas medium with low transmittance has a high emissivity.

Consider the (path) radiance from a cylindrical, hot gas plume, as shown in Figure 4.2. The cylinder has unity radius and is filled with a gaseous medium with attenuation coefficient γ of 0.1, 0.5, 1.2, and 2. Horizontal sections through the plume have length R, which depends on the height of the section.

Figure 4.2 Radiance from a cylindrical hot gas plume for different values of attenuation coefficient γ (0.1, 0.5, 1.2, and 2): transmittance e^(−γR), optical depth γR, and 'emissivity' (1 − e^(−γR)) for horizontal sections through the plume.

For γ = 0.1, the plume has a transmittance of 0.82 through the center,
low optical depth, and hence a low emissivity (0.18). The total volume of
this plume contributes to the radiance field from the plume. The plume
acts as a volume radiator. This also means that the entire plume volume
loses heat by thermal radiation.
At the other extreme, for γ = 2, the plume has a transmittance of
0.018 through the center, high optical depth, and hence a high emissivity
(0.982). It follows that the exitance from this medium emanates from the
outer layers of the medium. A volume of highly attenuating gas therefore
acts as a surface radiator. The plume loses heat only from the surface; the
inner volume retains its heat better than for a plume with low γ.
Conversely, an optically thin medium has high transmittance and low
emissivity. Because of the low emissivity (and good transmittance), the
radiation emanates from the whole volume of the medium — a volume
radiator.
The emissivity of a radiating gas is a spectral variable, as shown in Figure 3.7. A CO2 gas plume has high emissivity (large γ) at 4.35 µm, but almost zero emissivity at 3.5 µm. The plume can therefore be a surface radiator (as opaque as a brick wall) at 4.35 µm while fully transparent at 3.5 µm.

4.3 Inhomogeneous Media and Discrete Ordinates

Most media are not homogeneous, f(x) ≠ c, and Equations (4.8) and (4.9)
must be solved by integration along the optical path. Because the profile
f ( x) is almost never an analytical function, the integral cannot be solved
analytically. Practical solutions to the RTE are all based on discretiza-
tion 4,21 of the continuous function f ( x) into discrete parts f d [ Xdi ]. Each Xd
is a discrete section along the path x, such that the error f ( x) − f d [ Xdi ] is
sufficiently small for all values of x and corresponding values of Xdi . By
decreasing the discrete interval Xd toward zero, the difference between the
continuous function and the discrete approximation approaches zero (at
least sufficiently so for engineering purposes!).
Practical computational considerations limit the number of discrete
intervals and hence determine the coarseness of the approximation. In
some cases the discrete interval is not necessarily a constant value. One
such example is the modeling of atmospheric vertical profiles as 36 discrete
layers (see Section 4.7). A reasonable modeling approach would be to
adapt the sampling interval Xd to the relative magnitude of the profile f .
For path regions with low values of f , the interval Xd could be modified
such that the product Xdi fd[Xdi] is approximately constant for all values of i. In this strategy, each discrete path section contributes equally to the integral along the path.

The discrete model intervals Xdi may be larger than the path length itself, and the path's two end points may end up in between Xdi boundaries (Figure 4.3). The endpoint section contributions are added pro rata, according to the length in each endpoint.

Figure 4.3 Discrete vertical strata: layers of thickness Xd1 through Xd5, each with absorption and scattering coefficients αi and σi, and radiances L1 through L5.
The geometric shape of the discrete intervals depends on the prob-
lem at hand. The earth’s atmosphere on a global scale is modeled as a
set of concentric shells, centered on the earth’s center. On a smaller, local
scale, the atmosphere can be modeled as a set of vertical strata. An aircraft
plume can be modeled as a stacked set of short concentric cylinders. An
arbitrary volumetric radiator can be modeled as a set of voxels, stacked
like a three-dimensional chessboard. The important consideration is that
within each of these individual volumes, the medium is considered uni-
form and homogeneous. The solution of the RTE within these discretized
models requires numerical computation.

4.4 Effective Transmittance

From Equations (2.26) and (4.4) (and looking ahead to Section 7.2.2) the irradiance at the sensor from a distant source is given by an equation of the form shown in Equation (6.19),

E = (A0/R²) ∫0^∞ ελ Lλ(Ts) τaλ(R) Sλ dλ,   (4.17)

where A0 is the area of the receiver, R is the range from the source to the receiver, ελ is the source spectral emissivity, Lλ(Ts) is the Planck law radiance for an object at temperature Ts, τaλ(R) is the medium (atmosphere) spectral transmittance at the range R, and Sλ is the sensor spectral response. The integral must be recalculated for each source-to-object distance because the atmospheric transmittance varies with distance. With a few simplifications, it is possible to re-write the irradiance equation in terms of the 'effective transmittance' as follows:

E = (A0 τa(R)/R²) ∫0^∞ ελ Lλ(Ts) Sλ dλ,   (4.18)

where the integral is independent of range and is only calculated once. The effective transmittance τa(R) is then given by

τa(R) = [∫0^∞ ελ Lλ(Ts) τaλ(R) Sλ dλ] / [∫0^∞ ελ Lλ(Ts) Sλ dλ],   (4.19)

a scalar number instead of a spectral quantity. The effective transmittance


is spectrally weighted by the source and sensor spectral responses, and
therefore accounts for the effect of the system’s spectral quantities. The
designer may be tempted to calculate the effective transmittance only once
and thereafter use only the effective value, but caveat emptor!
This definition of effective transmittance is the only accurate way to
describe average or effective transmittance. The Modtran™ tape6 ‘aver-
age’ transmittance should not be used because it does not include the sensor
or the source spectral properties. Modtran™ provides separate facilities to
calculate a weighted transmittance.
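A numerical sketch of Equation (4.19) follows (Python/numpy). The spectral sensor response, emissivity, and transmittance below are crude analytic placeholders; in practice the spectral transmittance would come from a code such as Modtran™:

import numpy as np

def planck_radiance(wl, T):
    """Blackbody spectral radiance [W/(m2.sr.um)]; wl in [um], T in [K]."""
    c1, c2 = 1.191042e8, 1.4387752e4
    return c1 / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))

wl = np.linspace(3.0, 5.0, 500)                    # wavelength grid [um]
S = np.where((wl > 3.2) & (wl < 4.8), 1.0, 0.0)    # assumed sensor response
eps = np.ones_like(wl)                             # assumed source emissivity
# toy spectral transmittance: a single absorption band near 4.3 um, 5-km path
tau_spec = np.exp(-5.0 * 0.4 * np.exp(-((wl - 4.3) / 0.15)**2))

def eff_tau(T):
    """Effective transmittance, Eq. (4.19), for a thermal source at T [K]."""
    num = np.trapz(eps * planck_radiance(wl, T) * tau_spec * S, wl)
    den = np.trapz(eps * planck_radiance(wl, T) * S, wl)
    return num / den

print(eff_tau(2300.0), eff_tau(330.0))   # effective tau differs with source temperature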
Figure 4.4 shows the atmospheric spectral transmittance for several
path lengths, a sensor response, and the normalized spectral radiance of
several sources. The sources include a CO2 gas radiator (aircraft plume)
and thermal radiators at a range of temperatures. Equation (4.19) was
used to determine the effective transmittance for the different sources. The
sensor response shown in Figure 4.4 was used. The effective transmittance
values so determined are shown in the bottom graph of Figure 4.4.
Note the severe attenuation for the CO2 gas plume, even over short
ranges. This stems from the fact that the CO2 spectral exitance is severely
attenuated by the cold CO2 in the atmosphere. This effect is clearly visible
in the spectral plot in Figure 4.4.
Less dramatic, but still relevant, is the variation of effective transmittance with source temperature. If the effective transmittance for a 2300-K source is used for an object at 330 K, a large range error arises for a given transmittance, especially at longer ranges. Consider the line for 0.18 effective transmittance in Figure 4.4, which gives 10 km for a 2300-K source, but only 6.5 km for a 330-K source. Hence, effective transmittance must be calculated for the correct source temperature.

Figure 4.4 Effective atmospheric transmittance for various sources: (a) atmospheric transmittance for 1-, 3-, 5-, 7-, and 9-km paths, with the plume emissivity spectrum (3–5 µm); (b) atmospheric transmittance, sensor response, and normalized source radiance for blackbodies at 2300 K, 1000 K, and 330 K, and for the plume; (c) 3–5-µm effective transmittance versus range for an MTV flare (2300 K), tailpipe (1000 K), fuselage (330 K), and aircraft plume, compared with exp(−0.3R) and a unity-weighted case; note that the same effective transmittance can correspond to widely different path lengths.
It is often proposed that the effective transmittance curves versus dis-
tance can be approximated by an exponential curve τa = e−αR . This formu-
lation is Bouguer’s law, as derived in Equation (4.4), at a single wavelength.
When applied to broadband sensors, this approximation yields a poor fit
for grey body radiation observed through a spectrally selective medium.
Figure 4.4 also shows that Bouguer’s law is totally inadequate in approxi-
mating the effective transmittance of a spectrally selective source through
a spectrally selective medium.

4.5 Transmittance as Function of Range

It is sometimes necessary to scale transmittance from one path length to another transmittance at another path length. Equation (4.4) shows that for a homogeneous medium the spectral transmittance is given by

τ(R) = e^(−γR),   (4.20)

where R is the path length in [m], and γ is the attenuation coefficient with units of [m⁻¹]. If the transmittance at range R1 is known, the transmittance at range R2 can be determined by realizing that the transmittance is characterized only by the attenuation coefficient,

γ = −loge[τ(R1)]/R1,   (4.21)

and hence

τ(R2) = e^(−γR2) = e^(loge[τ(R1)] R2/R1).   (4.22)
The analysis in Section 4.4 indicates that scaling of effective transmittance
with range strongly depends on the spectral radiance of the source. Hence,
when scaling transmittance versus range, spectral transmittance should be
used rather than wideband effective transmittance.
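A minimal Python sketch of Equations (4.21) and (4.22):

import numpy as np

def scale_tau(tau_r1, r1, r2):
    """Scale spectral transmittance from path length r1 to r2, Eqs. (4.21)-(4.22)."""
    gamma = -np.log(tau_r1) / r1     # recover the attenuation coefficient, Eq. (4.21)
    return np.exp(-gamma * r2)       # Eq. (4.22)

print(scale_tau(0.8, 1.0, 5.0))      # 0.8 over 1 km becomes 0.8**5 = 0.328 over 5 km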

4.6 The Atmosphere as Medium

4.6.1 Atmospheric composition and attenuation

The atmosphere is a highly variable, complex, and dynamic medium. 1–3


The atmospheric density, pressure, and temperature vary with altitude,
shown in Figure 4.5. The attenuation of electromagnetic radiation through
the atmosphere is affected by absorption and scattering from molecules
Optical Media 109

100 99.99997% mass is below 100 km Total mass 5.14×1018 kg, with variation
Aurora Borealis
of 1.2 to 1.5×1015 kg due to H2O
85
77 Mesopause Ionosphere D layer
71 Mesosphere

60 Meteors

51 Stratospause
47
40 Very little aerosol Highest balloon
above 30 40 km
Altitude [km]

& human free fall X 43A


Stratosphere
32 Slowly setting
cosmic dust Temperature
25 Ozone layer
Volcanic and photo-
20 chemical aerosol
90% mass is Stratospheric SR 71
15 below 16 km Commercial flight
Cumulonimbus
12 Cirrus Cirrostratus
10 Tropospause
Pressure
7 Troposphere Cirrocumulus Altocumulus Altostratus
50% mass is Nimbostratus
5 below 5.6 km Density
4 Aerosol decreases Stratocumulus
3 exponentially with height
2 Cumulus
1 Stratus
Boundary layer: variable aerosol
and water vapor determined by
local meteorological conditions Fog
0
0 0.15 0.3 0.45 0.6 0.75 0.9 1.05 1.2 Density [kg/m3]
0 150 300 450 600 750 900 1050 Pressure [mB] International & US Standard
Atmosphere profiles
90 75 60 45 30 15 0 15 Temperature [°C] C J Willers

Figure 4.5 Atmospheric definitions and the standard atmospheric profiles.


110 Chapter 4

Figure 4.6 Atmospheric medium effects.

The attenuation of electromagnetic radiation through the atmosphere is affected by absorption and scattering from molecules and particles (such as haze, dust, fog, or cloud droplets) suspended in the air. Figure 4.6 summarizes the medium effects of the atmosphere.
Scattering and absorption by aerosol particles is a prominent factor in
the lower few kilometers in altitude, near the earth’s surface. The molec-
ular constituents in the atmosphere strongly affect attenuation and scat-
tering in the visual spectral range. Aerosol particles in the atmosphere vary greatly in their concentration, size, and composition, and consequently in their effects on visual and infrared radiation. The aerosols
and molecules are not uniformly distributed along the optical path. Air
density, and hence particle and aerosol density, decreases with altitude and
even varies along paths at constant altitude. Some species of molecules
may vary in concentration, depending on local conditions and air–mass
history.
In terms of the model developed in Section 4.2.1, the atmospheric
attenuation coefficient γ comprises two components γ = α + σ, where α
is the absorption coefficient, and σ is the scattering coefficient, both with units of [m^{−1}].

Figure 4.7 H2O as a molecular absorber: (a) molecular energy levels and (b) the associated atmospheric spectral transmittance over a 1-km path, near 3750 cm^{−1}.

α is a function of the molecular composition and density of


the atmosphere, and varies strongly with wavelength. σ is a function of the
aerosol and particulate contents of the atmosphere, and varies less strongly
with wavelength.

4.6.2 Atmospheric molecular absorption

The atmosphere’s main molecular constituents include molecular nitrogen


(N2 , 78%), molecular oxygen (O2 , 21%), argon (Ar, 0.93%), molecular car-
bon dioxide (CO2 , 0.04%), and water vapor (H2 O). Water vapor constitutes
approximately 0.4% of the ‘total’ atmosphere but can range from 1% to
4% near the earth’s surface. Molecular absorption in the infrared spectral
band is however dominated by species with very low concentrations but
very active vibration-rotation bands, such as water vapor, carbon dioxide,
ozone, and nitrous oxide. 22
When a molecule absorbs a photon, an electron is excited from a lower
energy level to a higher energy level. The energy levels are discrete quan-
tum levels, but each quantum level has small variations due to molecule
vibration, as shown in Figure 4.7(a). The diatomic molecules (N2, O2, CO) have one vibrational mode, known as vibrational stretch. These molecules lack a permanent dipole moment and are unable to sustain an oscillating dipole moment, and hence have less impact on transmittance and radiation. The tri-
atomic molecules (CO2 , N2 O, H2 O, O3 ) have multiple vibrational modes,
corresponding to low-energy level variations, within each quantum en-
ergy level. When a photon interacts with a molecule, the photon will
be absorbed if the photon energy (hc/λ) matches one of the molecule’s
combined quantum and vibrational energy levels. Different molecules are
in different vibrational energy states and hence absorb slightly different

wavelength photons, as shown in Figure 4.7(b).


Figure 4.8 shows the spectral transmittance of the atmosphere, but in-
dividually for different molecular species. Note how the different groups
of lines correspond to different molecular quantum energy gaps. Within
each group there are a multitude of minutely narrow lines due to vibra-
tional differences between molecules.

4.6.3 Atmospheric aerosols and scattering

Aerosols in the boundary layer, up to 1–2 km in altitude, have greater vari-


ability compared to higher altitudes. These aerosols consist of a variety
of natural and synthetic chemical compounds. Aerosols can be dry matter, liquid droplets, or a mixture of the two. Particle size can range from a few hundred nanometers to tens of micrometers. Aerosols are created by natural causes such as wind, volcanoes, fog formation, fires, and even human and animal movement. Land aerosols comprise dust and organic particles from vegetation. Aerosols can also form by photochemical processes under solar irradiation. Heavier dust aerosols, created mechanically by explosions, wind storms, or vehicle movement, typically contain a wide
range of particle sizes. The composition of the dust aerosols depends on
the means of activation, the properties of the soil, and the water content
of the soil. The maritime aerosols are primarily salt particles and water
spray. Anthropogenic causes include smoke and pollution aerosols from
urban and industrial areas. Manmade aerosols have a different chemical
composition compared to natural aerosols. Aerosols are mostly created
near the earth’s surface and are carried into the air by thermal and wind
movement. Heavy aerosol particles (larger than a few µm) tend to drift
to earth over time, whereas the lighter aerosols stay suspended in the air
indefinitely or until settled down by rain. A heavy particle with diameter
of 8 µm settles by gravitation 23 at an average velocity of 2 mm/s, but a
0.8-µm particle settles at a velocity of 0.02 mm/s.
Above the boundary layer, in the troposphere, the aerosols are less
dependent on the local conditions near the earth’s surface. At these alti-
tudes, the different kinds of aerosol include smaller particles formed from
gaseous components, undergoing processes of coagulation and agglom-
eration. There are also larger particles originating in the boundary layer,
swept up by air currents. In the stratosphere (10–30 km), the aerosol dis-
tribution is globally more or less uniform. At these altitudes the aerosols
are mostly sulfate particles formed by photochemical reactions. Volcanic
eruptions can inject large volumes of dust and SO2 aerosol, which are
transported globally over very wide areas by stratospheric circulation.

Figure 4.8 Transmittance of various aerosols and molecules in the atmosphere: MODTRAN Tropical transmittance over a 5-km path length at sea level, shown separately for molecular (Rayleigh) scattering, aerosol (Mie) scattering (rural aerosol, 23-km visibility), H2O molecular and continuum (27 °C, 75% RH), CO2 (394 ppmv), CH4, O3 and O2, N2O, and all effects combined.




Figure 4.9 Aerosol scattering modes versus wavelength and particle size (used with permission 1 ).

The heavier dust particles may drift down and settle out of the atmosphere in
a few months, but photochemically formed sulphuric acid may remain for
up to two years. Only a very small portion of the total aerosol content of
the atmosphere exists above 30 km. 22
Aerosol composition is characterized by a statistical particle density following a lognormal size distribution. The statistical modal value of the radius is an indication of the aerosol ‘size’; note, however, that the size has a statistical distribution with the modal size occurring most often, but there is a wide variation in particle sizes present. Indeed, most aerosols are characterized by several modal distributions, not just a single distribution. Hence, the concept of aerosol particle size should be understood to mean the most commonly occurring size in the distribution.
Scattering is a wideband phenomenon, compared to the narrow ab-
sorption lines of molecular absorption. The energy is momentarily ab-
sorbed and immediately re-radiated as if from a point source. The at-
mospheric effects on optical and infrared flux depend on the size of the
particle compared to the wavelength of the light. Simplifying approxima-
tions are made to model the scattering effect. If 2πr/λ ≪ 1, where r is the particle radius, the Rayleigh approximation applies, and if 2πr/λ ≥ 1, the Mie (sometimes called the Lorentz–Mie) approximation applies. Fig-
ure 4.9 shows the aerosol scattering modes at various wavelengths and
particle sizes. Mie scattering is an approximation of scattering theory, and
Rayleigh is a further special case of Mie scattering.

Figure 4.10 Scattering intensity profiles for various values of 2πr/λ in unpolarized sunlight.

Figure 4.10 shows scattering intensity profiles for various values of 2πr/λ for a water droplet,
starting with Rayleigh scattering and progressing to Mie scattering. Scat-
tering also depends on the incident light polarization and on the geomet-
rical shape of the particle. The discussion given here pertains only to un-
polarized light and near-spherical and isotropic particles. Comprehensive
coverage is available in Liou’s book. 4
Rayleigh scattering occurs when electromagnetic energy interacts with
aerosol (molecules) of physical size significantly smaller than the wave-
length of the energy field. This process is not a molecular absorption pro-
cess but rather an interaction at energy levels other than those involved in molecular absorption. The resultant scattering attenuation coefficient γ is approximately proportional to λ^{−4}, affecting mainly propagation at shorter wavelengths
‘blue sky’ observed visually. In the visual and ultraviolet bands, the blue-
sky spectral radiance very roughly mimics a low-emissivity 10,000-K ther-
mal radiator. Rayleigh scattering by atmospheric molecules has little effect
at wavelengths longer than 3 µm. The phase function 4,22 gives the proba-
bility distribution for the scattered light, so that P(θ )dΩ is the fraction of
the scattered radiation that enters a solid angle dΩ about the scattering an-
gle θ. The phase function for Rayleigh scattering (for unpolarized light at
visual wavelengths) can be approximated by P(θ) = k(1 + cos^2 θ), which
has a component of omnidirectional scatter, with peaks in the forward and
backward directions. Figure 4.8 shows Rayleigh molecular scattering over
a 5-km path length.

Mie scattering occurs when electromagnetic energy interacts with par-


ticles of the same size or larger than the wavelength of the energy field,
2πr/λ ≥ 1, where r is the radius of the particle. Mie scatter has a large
forward scatter along the direction of the incident flux. The shape of the
phase function depends significantly on the physical and chemical char-
acteristics of the aerosol. The resultant attenuation coefficient γ has some
spectral variation but not as strong as Rayleigh scattering. Depending on
the size of the particles and the flux wavelength, Mie scattering may have
significant effects at wavelengths up to or even beyond 10 µm. For most
naturally occurring low-density aerosols and artificial aerosols, the size
distribution is such (average diameter less than 1 µm) that there is signifi-
cant scattering in the visual region with minimal scattering in the infrared.
In less-dense media, a photon may experience only a single scat-
tering event (single scatter) along the optical path. At higher aerosol
densities, a single photon may experience many scattering events (mul-
tiple scatter). Clouds illuminated by sunlight have an intense white color: white because all colors are scattered equally (unlike Rayleigh’s λ^{−4} spectral scattering), and opaque because of multiple scattering of the sunlight.
Figure 4.8 shows the transmittance along a 5-km path, with a 23-km visi-
bility in a Modtran™ Urban aerosol (see Section 4.7.2 for a description of
Modtran™). The aerosol has an attenuating effect deep into the infrared
spectral range.

4.6.4 Atmospheric transmittance windows

The atmosphere is transparent in some spectral bands; these bands are


known as atmospheric windows. Even in the atmospheric windows the transmittance can be low, whereas in the spectral regions between the windows the transmittance approaches zero; there the atmosphere is truly opaque. The atmospheric windows are fre-
quently named with the spectral band acronyms defined in Section 3.1.6.
Toward the shorter wavelengths, atmospheric absorption bands re-
duce the target flux by attenuation but do not contribute path thermal
radiance flux. In the longer-wavelength bands, the atmosphere’s effect in
the absorption bands is twofold: a reduction in target flux as well as a
strong thermal-path radiance contribution. Figure 4.11 shows how the ra-
diance of a 300-K source contributes to the path radiance in the absorption
bands beyond 5 µm. Keep in mind that the emissivity is (1 − τ ) in the
absorption bands! The designers of MWIR and LWIR systems therefore
endeavor to limit sensor sensitivity in the absorption bands.
The visual spectral band, defined by human vision, is largely unaf-
fected by molecular absorption.


Figure 4.11 Atmospheric transmittance and atmospheric windows.

Atmospheric aerosol content is a limiting factor in this spectral band, ranging from Rayleigh scattering by molecules


under clear-sky conditions to severe attenuation by heavy aerosol (cloud,
dust) under poor visibility conditions.
The NIR spectral band has several narrow absorption spectral bands
in the window. These absorption bands have no effect other than attenuat-
ing the target flux. Sensors operating in this band could use one or more
of the narrow atmospheric windows.
The MWIR spectral band was traditionally used for the observation
of hot targets (e.g., aircraft signatures). Recent detector developments pro-
vided staring array sensors with good performance against cooler ground
targets as well. Sensor performance in this band is less sensitive to humid-
ity, with the result that these sensors are used in humid/tropical areas. The
sensors are not very sensitive in a low-ambient-temperature environment,
with the result that these sensors are not very effective at far northern and southern latitudes. The CO2 absorption band at 4.3 µm severely
attenuates flux but can at the same time be used to detect hot gas CO2
emissions. Sensor design sometimes uses a narrower portion of the band
and not the full width of the atmospheric window.
The LWIR spectral band is commonly used for the observation of
ground-based targets. Ground targets have temperatures around 300 K,
which result in peak infrared exitance and strong signals in this atmo-
spheric window. Sensor performance in this band is sensitive to humidity,
with the result that these sensors are mainly used under cooler and drier
climatic conditions and less often in high humidity climates. Diffusely
reflected sunlight from high-emissivity (low reflectivity) surfaces has no
appreciable effect in the 8–12-µm spectral band; the observable signature

stems mainly from the object’s thermal exitance. Sunlight reflection from
specular surfaces (known as glint) may produce a significant signal in the
LWIR band. 24 See also Sections 8.1 and 8.11 and Figure 8.3 for a discussion
of the effect of sunlight and sunglint on optical signatures.

4.6.5 Atmospheric path radiance

Atmospheric path radiance has a major influence on sensor performance


and should be considered alongside atmospheric transmittance in system
design. Equation (4.10) provides a valuable insight into the concept of a
medium’s path radiance, but it remains a simplified model. The scope of
its validity will now be reviewed. For a viewer on the ground, the zenith
angle is defined as the angle between the direction of view and the ver-
tical. The atmospheric transmittance [Figure 4.12(a)] and path radiance
were calculated for a slant path to space for zenith angles ranging from
the vertical to the horizontal. The Modtran™ computer code was used to
calculate the transmittance and path radiance for the Tropical atmosphere.
This atmospheric model has a temperature of 300 K at ground level, de-
creasing with altitude. Figure 4.12(b) shows the Modtran™ path radiance
as well as the path radiance predicted by Equation (4.10). Four spectral re-
gions were used: the 1.5–2.5 µm, 3–5 µm, and 8–12-µm spectral windows,
as well as a 6–7-µm band in an absorption spectral region. At near zero
zenith angles, the simple model overpredicts path radiance for the 3–5-µm
and 8–12-µm spectral windows. For the 6–7-µm spectral band the fit is
perfect, and for the 1.5–2.5-µm spectral band the prediction is not even
on the graph. At near-horizontal views, the simple model predicts more
accurately for the three thermal bands.
The simple model in Equation (4.10) requires two conditions for accu-
racy: a dense medium and a homogeneous medium. Section 4.2.5 shows
that an optically thick gas medium has a high emissivity, and acts like a
surface radiator. A horizontal atmospheric slant path to space is reason-
ably optically thick, and the temperature is constant for a considerable dis-
tance, hence the approximation is good. For zero zenith angles the path is
initially optically thick, but with increasing altitude the atmospheric den-
sity decreases, as does the temperature, resulting in a somewhat poorer
fit. At small zenith angles the observer is looking through a warmer at-
mosphere at low altitudes and at a colder atmosphere at higher altitudes,
hence the lower path radiance. The simple model, as used here, only in-
cludes thermal radiation, and hence it is not suitable for modeling path
radiance in the 1.5–2.5-µm spectral band.

Figure 4.12 Atmospheric transmittance and path radiance to space, actual and simplified
model: (a) transmittance vs. zenith angle and (b) path radiance vs. zenith angle.

For systems operating in the visual or NIR part of the spectrum, the self-emission term Lth will be zero, so that

LR = L0 e^{−γR} + (σLσ/γ)(1 − e^{−γR}).    (4.23)
Duntley 17 reports that experiments confirmed that Equation (4.23) is valid
for visual observation provided that the parameters are all weighted with
the visual response, and that the radiance terms are replaced with lumi-
nance terms.
For systems operating in the MWIR spectral range all of the terms
in Equation (4.10) must be kept because both particulate scattering and
path emission are present, especially under low-visibility (less than 5 km)
conditions:
LR = L0 e^{−γR} + [(σLσ + Lth)/γ](1 − e^{−γR}).    (4.24)
Due to all of the approximations made, Equation (4.24) is probably least
accurate in this spectral region, and if accurate calculations must be per-
formed, the differential equations should be solved numerically.

Figure 4.13 Atmospheric transmittance and emissivity: (a) MODTRAN™-calculated transmittance and (b) emissivity derived from MODTRAN™-calculated path radiance.

For systems operating in clear air in the LWIR spectral range the scat-
tering coefficient σ and the term Lσ will be zero:
LR = L0 e^{−γR} + (Lth/α)(1 − e^{−γR})    (4.25)
   = L0 e^{−γR} + Lth(1 − e^{−γR}).    (4.26)

For systems operating in severe aerosol both terms must be retained, and
the same reservation applies as for MWIR systems.
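For quick hand calculations, the single-path model of Equation (4.24) transcribes directly to code; the sketch below uses illustrative numbers only, with Lth taken as the thermal source term in the sense of Equation (4.24):

import numpy as np

def apparent_radiance(L0, R, alpha, sigma, L_sigma, L_th):
    # LR = L0 exp(-gamma R) + [(sigma L_sigma + L_th)/gamma](1 - exp(-gamma R)), Eq. (4.24)
    gamma = alpha + sigma              # attenuation = absorption + scattering [1/km]
    tau = np.exp(-gamma * R)           # path transmittance over range R [km]
    return L0 * tau + (sigma * L_sigma + L_th) * (1.0 - tau) / gamma

# For a long path the source term vanishes and the result saturates at the
# 'veiling' path radiance (sigma L_sigma + L_th)/gamma:
print(apparent_radiance(L0=10.0, R=50.0, alpha=0.2, sigma=0.1, L_sigma=3.0, L_th=0.5))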

4.6.6 Practical consequences of path radiance

Figure 4.13(a) shows the spectral transmittance for a Modtran™ Tropical


atmosphere for a 5-km path length. The effect of aerosol scattering in the
visual spectral band is clearly visible. The transmittance graph applies to
a 5-km path at 23-km visibility — for poorer visibility, the transmission
is even less than shown here. In the infrared spectrum, carbon dioxide
Optical Media 121

and water vapor are particularly relevant. Because these molecules absorb
infrared energy, they also emit infrared energy.
The emissivity [Figure 4.13(b)] was determined as the ratio of the
Tropical Modtran™ predicted path radiance (thermal component only),
and a thermal radiator at 300 K (27 °C):

ε = Lpath thermal/Le(300 K),    (4.27)
where the path radiance Lpath thermal was calculated by Modtran™, and Le
is Planck’s law for a 300-K source (the temperature of the Tropical atmo-
sphere). In the thermal spectral bands, this is a perfectly valid operation,
as defined in Equation (3.24).
Figure 4.13 shows that, even for the Tropical atmosphere at 27 °C and
75% relative humidity, a 5-km path length results in an atmospheric emis-
sivity of 0.75 in the 8–12-µm band. An 8–12-µm thermal imager is therefore
trying to observe the target, looking through a veiling hot blackbody with
an emissivity of 0.75. The water vapor has much less effect on transmit-
tance in the 3–5-µm spectral band. The emissivity in the 3–5-µm band is
affected by CO2 at 4.3 µm.
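A minimal sketch of the Equation (4.27) calculation follows; a real analysis would use the MODTRAN™-predicted thermal path radiance, which is replaced here by a placeholder spectrum so that the example is self-contained:

import numpy as np

def planck_radiance(wl_um, T):
    # Spectral radiance [W/(m^2.sr.um)] of a blackbody at temperature T [K]
    c1, c2 = 3.7418e8, 1.4388e4
    return (c1 / np.pi) / (wl_um**5 * (np.exp(c2 / (wl_um * T)) - 1.0))

wl = np.linspace(8.0, 12.0, 100)                  # LWIR band [um]
L_path = 0.75 * planck_radiance(wl, 300.0)        # placeholder for MODTRAN output

emissivity = L_path / planck_radiance(wl, 300.0)  # Eq. (4.27), spectral emissivity
print(emissivity.mean())                          # -> 0.75 for this placeholder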

4.6.7 Looking up at and looking down on the earth

Figure 4.14 shows the downward and upward radiance along a slant path.
The Modtran™ Tropical model and Rural aerosol, with 23-km visibility,
was used in this calculation. The path radiance term includes thermal path
radiance and single-scattered sunlight path radiance. The total path length
is 11.3 km, with its two endpoints at sea level and 8 km. Note the atmo-
spheric absorption around 4.3 µm. The Modtran™ Tropical model has
a sea-level temperature of 300 K. When looking down, the warm terrain
is observed in regions with good transmittance, and the cold atmosphere
is observed in spectral regions with poor transmittance. When looking
up, the warm atmosphere is observed in spectral regions of poor trans-
mittance (i.e., high emissivity), and the cold space is observed in spectral
regions with good transmittance. The positive–negative relationship be-
tween the looking-up and looking-down curves illustrate the principle of
atmospheric exitance in the absorption bands. These Modtran™ predic-
tions agree well with published measured data. 1

4.6.8 Atmospheric water-vapor content

Water vapor in the atmosphere is a very important consideration during


infrared system design.

Figure 4.14 Radiance up and down along a slant path: (a) MODTRAN™ calculation, (b) measured (looking down), and (c) measured (looking up). Measured graphs (b) and (c) used with permission. 1


Figure 4.15 MODTRAN™ atmospheric water-vapor content profiles.

Figure 4.15 shows the vertical water-vapor content for the standard Modtran™ atmospheric models. These profiles represent


‘typical conditions’ rather than the extremes that may occur. Also shown in
the figure are the temperatures and relative humidity values for the models
at sea level. Note the wide variability in water vapor in the troposphere
but the relatively homogeneous content in the stratosphere.
Atmospheric humidity is commonly expressed in relative humidity
(RH). Relative humidity is the quantity of water vapor in the atmosphere,
expressed as a percentage of the maximum absolute humidity. The maxi-
mum absolute humidity is determined by the water-vapor partial pressure
at the atmospheric temperature and is given by
q = (1325.252/T) × 10^{7.5892(T−273.15)/(T−32.44)},    (4.28)
in units of [g/m^3] and temperature in [K]. This equation has an error of less than 1% over the range −20 °C to 0 °C, and less than 0.1% for the temperature range 0 °C to 50 °C.
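Equation (4.28) transcribes directly to code; the quick check below reproduces the sharp increase of absolute humidity with temperature discussed next:

import numpy as np

def absolute_humidity(T):
    # Saturated atmospheric water content [g/m^3] at temperature T [K], Eq. (4.28)
    return (1325.252 / T) * 10.0**(7.5892 * (T - 273.15) / (T - 32.44))

for t_c in (0.0, 20.0, 30.0):
    print(f"{t_c:4.0f} degC: {absolute_humidity(t_c + 273.15):5.1f} g/m^3")
# prints approximately 4.9, 17.3, and 30.3 g/m^3; at 60% relative humidity
# the actual water content is 0.6 times these saturated values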
Figure 4.16 shows the atmospheric absolute humidity versus temper-
ature. Note how sharply the absolute humidity increases with increasing
temperature. At 30 °C the atmospheric water content is almost double the atmospheric water content at 20 °C. Figure 4.16 also shows the effective transmittance for a 5-km atmospheric path with relative humidities of 60% and 95%, at various temperatures. For example, an atmosphere with 60% relative humidity at a temperature of 30 °C has an effective transmit-
tance of 0.21. The highest recorded 25 absolute humidity was 37.5 g/m3 in
Sharjah, United Arab Emirates.


Figure 4.16 Effective atmospheric transmittance for a humid atmosphere: transmittance on left, and absolute humidity on right.

4.6.9 Contrast transmittance in the atmosphere

Section 4.2.3 derives a general law for contrast reduction. This section in-
vestigates contrast transmittance for paths in the atmosphere. Three path
types are considered here: observations along an upward path, a down-
ward path, and a horizontal path.

4.6.9.1 Upward observations

In Equation (4.16) the Eb0 term is the sky radiance along the line of sight as
measured from the observer (to space), and EbR is the sky radiance along
the line of sight as measured from the object (to space). Combining Equa-
tions (4.10) and (4.16), the contrast transmittance for upward observations
is obtained as
 
τc = CR/C0 = e^{−γR} [(1 − e^{−γRR∞})/(1 − e^{−γR0∞})],    (4.29)

where RR∞ denotes the integral ∫ f(r)dr from R to ∞ along the line of sight, and R0∞ is the integral ∫ f(r)dr from 0 to ∞ [see Equation (4.6)]. For an upward path
the term in the square brackets is almost always less than unity because
there is a shorter path in the numerator and a longer path in the denomi-
nator. If the atmosphere along the path is uniform (which the atmosphere
is not), these integrals degenerate to the path lengths because f (r) is the
same constant value in both cases. The integration limit ∞ is not really
infinity because the atmosphere has finite extent. The symbol ∞ in this
context only indicates that the path runs into space.
Two important observations can be made from Equation (4.29): (1)
τc depends only on γ and R. In an atmosphere isotropic with respect

to azimuth angle, R only depends on the elevation angle, i.e., the angle
rising above the horizon. Consequently, for an object moving at a con-
stant elevation angle, the atmosphere will always have a constant contrast
transmittance. (2) The contrast transmittance is independent of the path
radiance phenomenon, scattering, or thermal emission.

4.6.9.2 Downward observations

Equations (4.10) and (4.16) can be combined 17 to yield


τc = 1/{1 − [(σLσ + Lthermal)/(γL0)](1 − e^{γR})},    (4.30)
where the variables are defined in Section 4.2.2. Note that all of the vari-
ables are properties of the medium except L0 , which is a property of the
source. This equation applies for both scattered and thermally radiated
path radiance components.
In the visual spectral band the thermal radiance of the atmosphere is
insignificant, and Equation (4.30) can be simplified to
τc = 1/{1 − [σLσ/(γL0)](1 − e^{Rγ})}    (4.31)
   = 1/[1 − Kν(1 − e^{Rγ})],    (4.32)
where Kν is called the sky–ground ratio and to some degree resembles the
ratio of radiance values of the sky (as seen from the ground) and the
ground radiance (as seen from the sky). 26 Typical values of Kν are given
in Table 4.1. Note that these values are only applicable to observations in
the visual spectral band. Duntley 17 and Gordon 18 describe how to deter-
mine Kν . The sky–ground ratio has a very strong influence on the contrast
transmittance τc.
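A minimal sketch of Equation (4.32), using Kν values from Table 4.1 and an assumed attenuation coefficient, illustrates this influence:

import numpy as np

def contrast_transmittance_down(K, gamma, R):
    # tau_c = 1/[1 - K(1 - exp(R gamma))], Eq. (4.32)
    return 1.0 / (1.0 - K * (1.0 - np.exp(R * gamma)))

# Same 2-km downward path (gamma = 0.2 1/km assumed), different backgrounds:
for K, condition in ((0.2, "clear sky, fresh snow"), (5.0, "clear sky, forest")):
    tau_c = contrast_transmittance_down(K, gamma=0.2, R=2.0)
    print(f"{condition:22s} Kv={K:3.1f}  tau_c={tau_c:.2f}")
# -> about 0.91 against snow, but only about 0.29 against forest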
In the MWIR spectral band, Equation (4.30) cannot be simplified be-
cause the scattered and thermal path radiance components are similar in
magnitude.
In the LWIR spectral band, the scattering of the clear sky atmosphere is
insignificant, and Equation (4.30) can be likewise simplified. It is conve-
nient to define a sky–ground ratio for the infrared domain, 26
Kμ = Lth/(γL0).    (4.33)
Typical values of the infrared sky–ground ratio are shown in Table 4.2.
If the source object is near the ground, L0 is the earth radiance, and the downwards atmospheric path is short.

Table 4.1 Sky–ground ratios in the visual spectral band.

Sky condition Ground condition Kν


overcast fresh snow 1
overcast desert 7
overcast forest 25
clear fresh snow 0.2
clear desert 1.4
clear forest 5

Table 4.2 Sky–ground ratios in the infrared spectral bands.

Spectral band    Kμ (−10 K)    Kμ (+10 K)
3–5 µm           0.70          1.42
8–14 µm          0.85          1.17
10–12 µm         0.86          1.17

If the atmospheric temperature is


similar to the earth’s temperature, the path radiance in opaque spectral
bands is similar to the terrain radiance. Hence, the observer sees the same
radiance, be it terrain or atmosphere.
The upward path to space through the atmosphere has relatively low
transmittance, with most of the lower atmosphere contributing to the path
radiance. 27 Kμ can therefore be approximated by the ratio of the atmo-
spheric boundary layer path radiance to the earth radiance. For look-down
observations, at targets near the earth’s surface, the atmosphere’s contrast
transmittance is then
τc = 1/{1 − [L(Tair)/L0(Tearth)](1 − e^{Rγ})}.    (4.34)

The background radiance relative to the atmospheric radiance de-


pends on the earth and atmospheric temperatures. In the equilibrium sit-
uation, when the earth temperature equals the air temperature, the ratio
of exitance values is unity, and thus Kμ is unity. This is however a much-
simplified case because there is always a difference between the temper-
atures of objects and the air. The differences depend on the time of day,
wind velocity, material properties of the object, etc. Suits 28 estimates that
typical maximum differentials between the air and earthbound objects are
in the region of 10 K. Little Kμ data is published in the literature. For hy-

pothetical systems with square spectral passbands, the approximate values


for Kμ are listed in Table 4.2.

4.6.9.3 Horizontal observations

For horizontal paths the contrast transmittance is the limiting case of both
Equations (4.29) and (4.30), with the same result. For shorter distances
with small R, e^{−γRR∞} ≈ e^{−γR0∞} in Equation (4.29). For longer distances e^{−γR} → 0. In Equation (4.30) the background exitance is replaced by the sky exitance, and Kν and Kμ become unity. Therefore, on the horizon, the contrast transmittance is equal to the radiance transmittance:

τc = e^{−Rγ}.    (4.35)

4.6.10 Meteorological range and aerosol scattering

Aerosol types are often characterized by a range parameter called ‘meteo-


rological range’ or ‘visibility’. This parameter is defined as the range where
the atmosphere reduces the apparent contrast of a unity contrast target to
the foveal contrast threshold (Cv ) of the human eye. In other words, how
far can you see a large black object against a white background? A refer-
ence wavelength of 550 nm is used. In 1924, Koschmieder determined Cv
to be 0.02. From Equation (4.35) it follows that the meteorological range
RV is given by
RV = −ln(0.02)/σ550 nm = 3.91/σ550 nm,    (4.36)
where γ = σ550 nm is the aerosol scattering coefficient. Note that RV and
σ550 nm must be in the same units.
The World Meteorological Organization determined a different contrast threshold, 29 Cv = 0.05, with the new value being empirically determined for human observations in the real world. The value Cv = 0.05 leads to the form RV = 3/σ. Note that both forms are referred to by the
same name but yield different meteorological ranges. The Koschmieder
range is 30% further! It is therefore important to state which convention
is used. The Modtran™ aerosol-model meteorological range is defined
in the original Koschmieder Cv = 0.02 convention. 30 Hence, the value to
be used in Modtran™ must be 1.3±0.3 times the real-world human-eye
observation distance. See Section 9.2 for a numerical calculation of meteo-
rological range using a simple radiometric model.
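A small sketch of the two conventions follows; it converts a reported real-world (WMO) visibility to the Koschmieder meteorological range expected by Modtran™:

import numpy as np

def sigma_from_wmo_visibility(vis_km):
    # WMO convention, Cv = 0.05: sigma = -ln(0.05)/RV, approximately 3.0/RV [1/km]
    return -np.log(0.05) / vis_km

def koschmieder_range(sigma):
    # Koschmieder convention, Cv = 0.02: RV = -ln(0.02)/sigma = 3.91/sigma [km]
    return -np.log(0.02) / sigma

# A human-observed visibility of 10 km corresponds to a MODTRAN (Koschmieder)
# meteorological range of about 13 km for the same aerosol:
print(koschmieder_range(sigma_from_wmo_visibility(10.0)))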
Figure 4.17 shows the spectral scattering coefficient for various Mod-
tran™ aerosols.

[Figure 4.17 curves, from large to small particles: 0.2-km met range convective fog (clouds, 16–20-µm particle diameter); 1-km met range radiative fog (mist, 4–8-µm diameter); 5-km met range rural aerosol (0.054–0.1- and 0.86–2.3-µm diameters); 23-km met range maritime aerosol; 50-km met range tropospheric aerosol (0.05–0.1-µm diameter); 23-km met range rural aerosol; and molecular Rayleigh scattering (0.0002-µm diameter). ‘Met range’ is the Koschmieder meteorological range.]
Figure 4.17 Spectral scattering attenuation coefficient for MODTRAN™ aerosols and indicative transmittance for various path lengths.

Also shown are the approximate aerosol modal diameters (most-common particle size for the specific aerosol). Shown on the right side of the graph is the transmittance over a 1-km path, corresponding to the scattering coefficient on the left side.
In Figure 4.17, from the bottom to the top, the aerosol size varies
from small to large. Clear-sky conditions (blue-sky Rayleigh scattering
and the 50-km meteorological range tropospheric atmosphere) affect the ultraviolet, visual, and near-infrared spectral bands. Light haze conditions
(23-km meteorological range) affect the MWIR and LWIR bands to some
extent. Finally, the large particle fog aerosol affects all of the visual and
infrared bands equally.
Meteorological range is defined in terms of human vision, not infrared
terms. Two aerosols may have the same meteorological range, but have
very different infrared scattering properties. Compare the 23-km Rural and
Maritime MWIR and LWIR aerosol scattering coefficients in Figure 4.17.
Even though the scattering coefficient is the same at 550 nm, it differs by a
factor of five in the MWIR band.

4.7 Atmospheric Radiative Transfer Codes

4.7.1 Overview

Atmospheric radiative transfer codes 31 are computer programs that cal-


culate properties of the atmosphere, pertaining to radiative flux transfer.
The codes are typically designed and focused toward a specific applica-
tion, but some codes have more general applicability. Some codes provide
specialist additional functionality such as solar or lunar irradiance or back-
ground scene images. The codes are mostly discrete ordinate models (see
Section 4.3). In this section the Modtran™ code is briefly summarized.

4.7.2 M ODTRAN™

Modtran™ (MODerate resolution TRANsmission) is an atmospheric code


to calculate the direct and diffuse transmission, the path radiance, trans-
mitted and top-of-atmosphere solar/lunar irradiances, and more, for a
specified path through the atmosphere. Modtran™ can be used as a
stand-alone program, but it can also be interfaced as a subroutine to larger
software systems. Modtran™ is available as source code or a compiled
binary file. 32 From the Modtran™ website: 21

Modtran™ is a ‘narrow band model’ atmospheric radiative trans-


fer code. The atmosphere is modeled as stratified (horizontally ho-
mogeneous), and its constituent profiles, both molecular and par-
ticulate, may be defined either using built-in models or by user-
specified vertical profiles. The spectral range extends from the UV
into the far-infrared (0–50,000 cm−1 ), providing resolution as fine
as 0.2 cm−1 . Modtran™ solves the radiative transfer equation in-
cluding the effects of molecular and particulate absorption/emis-
sion and scattering, surface reflections and emission, solar/lunar
illumination, and spherical refraction.

The atmosphere is modeled as 36 discrete layers from sea level up to


100-km altitude. Layer thickness is not constant — it is smaller in the lower
atmosphere and increases in the upper atmosphere. Each layer is mod-
eled as a homogenous medium with an appropriate temperature, pressure,
molecular composition, and aerosol distribution. Six standard atmospheric
models are provided, with the vertical profiles shown in Figure 4.18. The
figure also shows the altitudes of the Modtran™ layers. The user can
add detailed new atmospheric models. These models need not comply
with the Modtran™ discrete layer definitions.


Figure 4.18 Standard MODTRAN™ atmospheric models’ discrete layer altitudes and vertical profiles.

The user can also specify custom aerosol distributions.


Transmittance is calculated using molecular absorption coefficients,
line density parameters, and average absorption line widths, all of which
are temperature dependent over atmospheric temperature ranges. Aerosol
scattering is calculated using aerosol size distribution and humidity. Stan-
dard Modtran™ aerosol models include Urban, Rural, Maritime, Fog,
and Rain, which also provide for seasonal variation in the aerosol dis-
tribution at higher altitudes. Molecular continuum absorption, molecu-
lar scattering, and aerosol absorption and scattering are also calculated.
Modtran™ calculates the path radiance component from thermal self-
emission, solar and lunar scatter into the path, direct solar irradiance, and
multiple-scattered solar or self-emitted radiance.
Path geometry calculation includes earth curvature and spherical refraction, accurately calculated as a function of zenith angle. The atmo-
spheric amounts (molecular and aerosol) are calculated along the slanted
path lengths in each layer of the model. The code makes provision for
horizontal paths, slanted paths, paths to space, and paths from space. The
user has several alternative options to define the paths.

Bibliography
[1] Petty, G. W., A First Course in Atmospheric Radiation , Sundog, Madison,
WI (2006).

[2] Farmer, W. M., The Atmospheric Filter: Volume I Sources, JCD Publish-
ing, Winter Park, FL (2001).

[3] Farmer, W. M., The Atmospheric Filter: Volume II Effects , JCD Publish-
ing, Winter Park, FL (2001).

[4] Liou, K. N., An Introduction to Atmospheric Radiation, Academic Press,


San Diego, CA (2002).

[5] Kondratyev, K. Y., Ivlev, L. S., Krapivin, V. F., and Varotsos, C. A.,
Atmospheric Aerosol Properties, Springer Praxis, Berlin (2006).

[6] Bohren, C. F. and Clothiaux, E. E., Fundamentals of Atmospheric Radia-


tion: An Introduction with 400 Problems , Wiley-VCH, New York (2006).

[7] Wyngaard, J. C., Turbulence in the Atmosphere, Cambridge University


Press, Cambridge, UK (2010).

[8] Smith, F. G., Ed., The Infrared and Electro-Optical Systems Handbook:
Atmospheric Propagation of Radiation , Vol. 2, ERIM and SPIE Press,
Bellingham, WA (1993).

[9] Andrews, L. C., Field Guide to Atmospheric Optics, SPIE Press, Belling-
ham, WA (2004) [doi: 10.1117/3.549260].

[10] Andrews, L. C. and Phillips, R. L., Laser Beam Propagation


through Random Media, SPIE Press, Bellingham, WA (2005) [doi:
10.1117/3.626196].

[11] Lukin, V. P. and Fortes, B. V., Adaptive Beaming and Imaging in


the Turbulent Atmosphere, SPIE Press, Bellingham, WA (2002) [doi:
10.1117/3.452443].

[12] Mayer, B., Emde, C., Buras, R., and Kylling, A., “libRadTran — library
for radiative transfer,” http://www.libradtran.org.

[13] Nikolaeva, O. V., Bass, L. P., Germogenova, T. A., Kuznetsov, V. S.,


and Kokhanovsky, A. A., “Radiative transfer in horizontally and vertically
inhomogeneous turbid media,” Light Scattering Reviews 2: Remote Sens-
ing and Inverse Problems , 295–341, Praxis Publishing, Chichester, UK
(2007).

[14] Boyd, R. W., Radiometry and the Detection of Optical Radiation , John
Wiley & Sons, New York (1983).

[15] Niemz, M. H., Laser–Tissue Interactions: Fundamentals and Applications,


Springer Verlag, Berlin (2007) [doi: 10.1007/978-3-540-72192-5].

[16] Džimbeg-Malčić, V., Barbarić-Mikočević, Ž., and Itrić, K., “Kubelka-


Munk theory in describing optical properties of paper (1),” Technical
Gazette (Tehnički vjesnik) 18(1), 117–124 (2011).

[17] Duntley, S. Q., “The Reduction of Apparent Contrast by the Atmo-


sphere,” Journal of the Optical Society of America 38(2), 179–191 (1948).

[18] Gordon, J. I. and Duntley, S. Q., “Measuring Earth-to-Space Contrast


Transmittance from Ground Stations,” Applied Optics 12(6), 1317–1324
(1973).

[19] Turner, R. E., “Contrast Transmittance in Cloudy Atmospheres,” Proc.


SPIE 305, 133–142 (1981) [doi: 10.1117/12.932706].

[20] Justus, C. G. and Paris, M. V., “Modelling Solar Spectral Irradiance


and Radiance at the Bottom and Top of a Cloudless Atmosphere,”
tech. rep., School of Geophysical Sciences, Georgia Institute of Tech-
nology (1987).

[21] Spectral Sciences Inc. and U. S. Air Force Research Laboratory,


“MODTRAN,” modtran5.com.

[22] Jursa, A. S., Handbook of Geophysics and the Space Environment ,


Vol. NTIS Document number ADA-167000, USAF Geophysics Lab-
oratory (1985).

[23] U.S. Environmental Protection Agency, “Characteristics of Particles,”


http://www.epa.gov/apti/bces/module3/collect/collect.htm.

[24] Palmer, J. M. and Grant, B. G., The Art of Radiometry, SPIE Press,
Bellingham, WA (2009) [doi: 10.1117/3.798237].

[25] MIL-HDBK-310, “Global Climatic Data for Developing Military Prod-


ucts,” Tech. Rep. MIL-HDBK-310, Department of Defense (1997).

[26] O’Brien, S. G. and Shirkey, R. C., “Determination of Atmospheric Path


Radiance: Sky-to-Ground Ratio for Wargamers,” Tech. Rep. ARL-TR-
3285, Army Research Laboratory (2004).

[27] Baker, D. J. and Pendleton Jr., W. R., “Optical Radiation from the
Atmosphere,” Proc. SPIE 91, 50–62 (1976) [doi: 10.1117/12.955071].

[28] Suits, G., “Radiative Transfer I, IR Technology Course,” tech. rep.,


University of Michigan/ERIM (date unknown).

[29] Prokes, A., “Atmospheric effects on availability of free space optics


systems,” Optical Engineering 48(6) (2009) [doi: 10.1117/1.3155431].

[30] Kneizys, F. X., “Users Guide to LOWTRAN 7,” Tech. Rep. AFGL-TR-
88-0177, Air Force Systems Command, USAF (1988).

[31] Wikipedia, “Atmospheric radiative transfer codes,”


http://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_
codes.

[32] Ontar, “PcModWin Manual Version,” tech. rep., Ontar Corporation


(2001).

[33] Kneizys, F. X., Shettle, E. P., Gallery, W. O., Chetwynd, J. H., Abreu,
L. W., Selby, J. E. A., Clough, S. A., and Fenn, R. W., “Atmospheric
Transmittance/Radiance: Computer Code LOWTRAN 6,” Tech. Rep.
AFGL-TR-83-0187, Air Force Systems Command, USAF (1983).

[34] Roebeling, R. A., Jolivet, D., Macke, A., Berk, L., and Feijt, A., “Inter-
comparison of models for radiative transfer in clouds,” 11th Conference
on Atmospheric Radiation and the 11th Conference on Cloud Physics (2002).

Problems

4.1 Explain what thermal radiation is, why it occurs, how it can be
calculated, and how it is affected by the atmosphere. [5]
4.2 Describe what atmospheric aerosols are and what effect they have
on optical and infrared systems operating in the spectral range 1–
12 µm. Describe the effects of clean air and fog over the full extent
of this spectral range. [5]
4.3 Provide a description of each of the following terms; explain what
they are and how they work: (a) atmospheric transmittance, (b) at-
mospheric path radiance, (c) Rayleigh and Mie scattering, (d) dis-
crete ordinate models, (e) molecular absorption, and (f) aerosol
scattering. [12]
4.4 Start from first principles and derive the transmittance for a path
with length R in a homogeneous medium with attenuation coeffi-
cient γ. [4]

If the spectral transmittance for a homogenous medium is known


for one path length R1 , show how the transmittance for another
path length R2 can be determined. [2]
4.5 Calculate and plot the blackbody exitance for temperatures of 200 K
to 6000 K in the following spectral bands: (a) 0 to infinity, (b) 0.4–
0.75 µm, (c) 1.5–2.5 µm, (d) 3–5 µm, (e) 3.8–4.5 µm, (f) 4.8–5.2 µm,
(g) 8–12 µm, and (h) 8–14 µm. Comment on the results. [5]
Repeat the calculation but now evaluate the effect of spectral at-
mospheric transmittance. Use the data in the DP03.zip data file.
[8]
4.6 The aerosol light-scatter phase function P is the probability that light is scattered in a given direction (θ, ϕ). As a probability function, the integral over the sphere is ∫ P(θ, ϕ)dΩ = 1. Modtran™ provides the user an option to use the Henyey–Greenstein scattering phase function as an approximation of Mie scattering. 33 The function is given by

PHG(θ) = (1 − g^2)/[4π(1 − 2g cos θ + g^2)^{3/2}],    (4.37)
where θ is the scattering angle, and g is the asymmetry parameter;
where g = +1 provides complete forward scattering, g = 0 rep-
resents isotropic scattering, and g = −1 provides complete back
scattering. Plot this function for various values of g and compare
the results with Figure 4.10. Also plot the simplified Rayleigh scat-
tering function, 33

PR(θ) = 3(1 + cos^2 θ)/(16π).    (4.38)
Comment on your observations. [3]
The Henyey–Greenstein function is considered to be an inaccurate
model of real aerosol and was replaced with a Legendre polyno-
mial function. 34 Explain why. [2]
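As a starting point for Problem 4.6, the following minimal sketch implements the two phase functions; the numerical check confirms that both integrate to unity over the sphere:

import numpy as np

def phase_hg(theta, g):
    # Henyey-Greenstein phase function, Eq. (4.37)
    return (1.0 - g**2) / (4.0 * np.pi *
                           (1.0 - 2.0 * g * np.cos(theta) + g**2)**1.5)

def phase_rayleigh(theta):
    # Simplified Rayleigh phase function, Eq. (4.38)
    return 3.0 * (1.0 + np.cos(theta)**2) / (16.0 * np.pi)

# Normalization check: integral of P(theta) 2 pi sin(theta) dtheta over [0, pi]
theta = np.linspace(0.0, np.pi, 10001)
dtheta = theta[1] - theta[0]
for P in (lambda t: phase_hg(t, 0.7), phase_rayleigh):
    w = P(theta) * 2.0 * np.pi * np.sin(theta)
    print(((w[:-1] + w[1:]) / 2.0).sum() * dtheta)   # -> 1.0 for both functions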
Chapter 5
Optical Detectors

The source of all light is in the eye.


Alan Wilson Watts

Cornelius J. Willers
Ricardo Augusto Tavares Santos, D.Sc.
Instituto Tecnológico de Aeronáutica, S. José dos Campos - Brazil.
Fábio Durante Pereira Alves, D.Sc.
Naval Postgraduate School, Monterey - USA.

5.1 Historical Overview

In 1800 William Herschel discovered infrared flux using a thermometer


as the first infrared (IR) detector. In his experiments, a prism was used
to refract sunlight. A thermometer placed just outside the red edge of
the spectrum indicated a higher temperature than in the rest of the room.
Early IR detectors exploited the Seebeck thermoelectric effect used in the
first thermocouple devices.
The origins of modern IR detector technology can be traced to the 20th
century, during World War II, when photon detectors were developed. 1
Since World War II, IR detector technology development was and contin-
ues to be primarily driven by military applications, although in the last few
decades its application in civilian fields such as medicine, quality control,
anti-threat systems, and industrial processes, among others, has grown
substantially. This diversity of applications, as well as the advances in the
semiconductor sciences and fabrication processes, leads to cost-effective
devices and systems, placing IR technology in current daily life. When a
new system is brought to the market today, the design specifications often
consider ‘dual deployment,’ targeting civil and military applications.


Optical detectors are used as components in electro-optical sensor sys-


tems (see Chapter 1). The broader IR technology field concerns itself with
the study of how a heated source radiates energy, how this radiation prop-
agates through a medium, how it interacts with matter, and finally, how it
is detected.
This chapter provides an introduction to IR detectors. The focus is on
concepts and principles and not on specific detector materials or technolo-
gies. The classical, first-order theory presented here is suitable for basic
understanding but does not cover advanced concepts or secondary effects.
Starting with the physics of light absorption, the focus shifts to detector
types, noise, thermal detectors, and photon detectors.

5.2 Overview of the Detection Process

The optical detection process occurs by one of two mechanisms: photon


absorption and thermal energy absorption. These mechanisms lead to a
number of different kinds of detectors. 2–7 Table 5.1 gives an overview of IR detectors.

5.2.1 Thermal detectors

Thermal detectors respond to the heating effects of absorbed optical radia-


tion by changing the temperature of the sensor, which causes (or induces)
changes in a measurable parameter, e.g., resistance, polarization, or volt-
age. Thermal detectors are generally slower (the thermal processes tend
to be slower) and have lower sensitivity than photon detectors. Thermal
detectors’ spectral response can be wide if a wideband detector coating is
used. Thermal detectors were not traditionally used in high-performance
electro-optical systems. Uncooled thermal detectors are becoming less ex-
pensive yet more sensitive, and find increasing use in applications previ-
ously reserved for photon detectors, such as thermal imaging cameras.
Thermal detector responsivity is defined in terms of the detector sig-
nal id divided by optical incident radiant flux Φe :
Reλ = id/Φe = αλ kΦe/Φe = αλ k,    (5.1)
where Reλ has units of [A/W] or [V/W], depending on the device, αλ is
the spectral absorption (emissivity), and k is a conversion constant. Equa-
tion (5.1) indicates that a certain amount of optical flux, expressed in [W],
causes a certain output signal id in [A]. The device’s spectral response is
only a function of the spectral absorptance of the flux and not of the in-
ternal physics of the detector.
Table 5.1 Comparison of detector types. 8

Thermal (bolometer, pyroelectric, thermopile)
  Advantages: light, rugged, reliable, and low cost; room-temperature operation.
  Disadvantages: low detectivity at high frequency; slow response (milliseconds).

Photon, intrinsic, IV–VI (PbS, PbSe, PbSnTe)
  Advantages: well-understood, more-stable materials; easier to manufacture.
  Disadvantages: very high thermal expansion coefficient; large permittivity.

Photon, intrinsic, II–VI (HgCdTe)
  Advantages: easy bandgap tailoring; well-developed theory and technology; multicolor detectors.
  Disadvantages: nonuniformity over large area; high cost in growth and processing; surface material composition instability.

Photon, intrinsic, III–V (InGaAs, InAs, InSb, InAsSb)
  Advantages: good material and dopants; advanced technology; potential monolithic integration.
  Disadvantages: heteroepitaxy with large lattice mismatch; long-wavelength cutoff limited to 7 µm (at 77 K).

Photon, extrinsic (Si:In, Si:Ga, Si:As, Ge:Cu, Ge:Hg)
  Advantages: operation at very long wavelengths; relatively simple technology.
  Disadvantages: high thermal carrier excitation; extremely low operating temperature.

Photon, free carriers (PtSi, Pt2Si, IrSi)
  Advantages: low-cost, high-yield technology; large and close-packed arrays.
  Disadvantages: low quantum efficiency; low-temperature operation.

Quantum wells, Type I (GaAs/AlGaAs, InGaAs/AlGaAs)
  Advantages: matured material growth; good uniformity over large area; multicolor detectors.
  Disadvantages: high thermal carrier excitation; complicated design and growth.

Quantum wells, Type II (InAs/InGaSb, InAs/InAsSb)
  Advantages: low Auger recombination rate; easy wavelength control.
  Disadvantages: complicated design and growth; sensitive to the interfaces.

Quantum dots (InAs/GaAs, InGaAs/InGaP, Ge/Si)
  Advantages: normal incidence of light; low thermal carrier excitation.
  Disadvantages: complicated design and growth.

Figure 5.1 Spectral response comparison between photon and thermal detectors.

Because the spectral absorptance is approximately flat in the operational band, the responsivity curve is therefore approximately spectrally flat, as shown in Figure 5.1.

5.2.2 Photon detectors

Photon detectors rely on the absorption of a photon in a semiconductor or


on an emissive surface, resulting in the release of electrons. Photon detec-
tors have high sensitivity (good SNR) and fast response times — ideal for
high-performance sensors. There is a very wide variety of photon detector
technologies, the more important of which include intrinsic and extrinsic
detectors, photoemissive detectors, and quantum well detectors (QWIPs). 4
Photon detectors operating in the IR region must be cooled down to cryo-
genic temperatures for low-noise operation. High-performance photon de-
tectors can be expensive.
Photon detector devices detect light by a direct interaction of the ra-
diation with the atomic lattice of the material. This interaction produces
voltage or current changes that are detected by associated circuitry or in-
terfaces. The photon detector is a semiconductor device with a bandgap
between the conduction band and the valence band. Photons with an en-
ergy exceeding the bandgap of the detector material are absorbed in the
material. The energy released during absorption elevates an electron to the
conduction band. Free electrons in the conduction band result in current
flow, which is sensed on the detector terminals. Because photon energy
increases toward shorter wavelengths, there exists a (low-energy) maxi-
mum wavelength λc beyond which the detector is not sensitive. The cutoff
wavelength is given by
λc = 1.24/Eg,    (5.2)
where λc is in [µm], and the bandgap Eg is in electron-volts.
Optical Detectors 139

The bandgap in a semiconductor depends on complex physical pa-


rameters. One of the commonly used empirical models to calculate the
semiconductor bandgap is Varshni’s approximation:

Eg = Eg(0) − AT^2/(T + B),    (5.3)
where Eg (0) is the semiconductor bandgap at 0 K and A and B are Varshni
material parameters. 9–12 There are also other empirical models for this
purpose. 10
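A minimal sketch of Equations (5.2) and (5.3) follows; the Varshni parameters used for InSb, Eg(0) = 0.235 eV, A = 3.2×10^{−4} eV/K, and B = 170 K, are indicative values from the literature, quoted here only for illustration:

def varshni_bandgap(T, Eg0, A, B):
    # Semiconductor bandgap [eV] at temperature T [K], Eq. (5.3)
    return Eg0 - A * T**2 / (T + B)

def cutoff_wavelength(Eg):
    # Detector cutoff wavelength [um] from the bandgap [eV], Eq. (5.2)
    return 1.24 / Eg

# InSb-like parameters (indicative literature values)
for T in (77.0, 300.0):
    Eg = varshni_bandgap(T, Eg0=0.235, A=3.2e-4, B=170.0)
    print(f"T={T:5.1f} K  Eg={Eg:.3f} eV  cutoff={cutoff_wavelength(Eg):.2f} um")
# -> about 5.5 um at 77 K, lengthening to about 7.1 um at 300 K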
Because the detector converts photons into electronic charges, the pho-
ton detector output signal is proportional to the number of incident pho-
tons. The responsivity can therefore be described in terms of the incoming
photon flux (photons per second) as
R_{qλ} = \frac{i_d}{Φ_q} = ηqG for λ ≤ λ_c, and 0 for λ > λ_c, (5.4)
where Φq is the incoming flux in quanta per second [q/s], id is the detector
current [C/s], η is the spectral quantum efficiency of the detector (unitless),
G is the gain of the detector material (unitless) [see Equation (5.103)], q is
the electronic charge in [C], and the responsivity Rqλ has units of [(C/s)
/(1/s)]=[C], in effect [C/quanta]. Quantum efficiency is the fraction of in-
cident photons converted to electrons contributing to the measured signal.
The gain G for photovoltaic detectors is unity because one photon creates
one free electron in the junction. The gain for photoconductive detectors
depends on material properties and detector design, and can be smaller or
larger than unity.
From Equation (5.4) the radiant spectral responsivity of the photon
detector, defined in terms of watts, can be found by multiplying the photon
flux Φq with the energy per photon Q = hν = hc/λ, then Φe = Φq hc/λ
and
R_{eλ} = \frac{i_d}{Φ_e} = \frac{i_d}{(hc/λ)Φ_q} = \frac{ηqλG}{hc} for λ ≤ λ_c, and 0 for λ > λ_c, (5.5)
where Φe is the incoming flux [W], id is the detector current [C/s] or [A],
q is the electron charge [C], η is the spectral quantum efficiency of the
detector (unitless), λ is wavelength in [m], G is the gain of the detector
material (unitless), h is Planck’s constant [J·s], c is the speed of light in a
vacuum [m/s], and Reλ has units of [C/J], which is the same as [A/W].
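The idealized responsivity of Equations (5.4) and (5.5) is easily evaluated numerically. The Python sketch below assumes illustrative values for η, G, and λ_c; it is not a model of any specific device.

import numpy as np

# Idealized photon-detector radiant responsivity, Equation (5.5):
# R_e(lambda) = eta*q*lambda*G/(h*c) below cutoff, zero above it.
q = 1.602e-19   # electron charge [C]
h = 6.626e-34   # Planck's constant [J.s]
c = 2.998e8     # speed of light [m/s]

def resp_radiant(lam, eta=0.8, G=1.0, lam_c=5.0e-6):
    """Radiant responsivity [A/W]; eta, G, lam_c are assumed values."""
    lam = np.asarray(lam, dtype=float)
    R = eta * q * lam * G / (h * c)
    return np.where(lam <= lam_c, R, 0.0)

lam = np.linspace(1e-6, 6e-6, 6)
print(resp_radiant(lam))  # rises linearly with wavelength, then cuts off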
Short-wavelength photon detectors (e.g., silicon) are sensitive in the
visual and NIR spectral bands. Silicon detectors are inexpensive and man-
ufactured in large volumes for consumer electronics. Longer-wavelength
photon detectors (e.g., InSb and HgCdTe) must be cryogenically cooled to
prevent thermal carrier excitation across the small semiconductor energy
bandgap.

5.2.3 Normalizing responsivity

For convenience, the spectral responsivity can be written as a spectral
quantity multiplied by a scaling factor, R_λ = R r_λ, where R is the scal-
ing factor with units [A/W] or [V/W], and r_λ is the spectral function with
maximum value equal to unity.
available, an approximation such as Equation (D.5) can be used to calcu-
late spectral responsivity values. The spectral response of a detector can
also be calculated using the theory provided in this chapter, as shown in
Figure 5.30 in Section 5.9.6.

5.2.4 Detector configurations

Detectors are found in a range of configurations, from single-element de-
vices, to linear vectors (N×1) or (N×few), to large two-dimensional arrays
(N×M). Detector arrays are commonly found in digital cameras, with sizes
up to several tens of megapixels.
Detectors can cover a single spectral band or multiple spectral bands.
Multi-color detectors can have detector elements with different colors next
to each other (as in a digital camera) or behind each other (the front short-
wave detector element is transparent for longer-wavelength IR radiation).

5.3 Noise

This section provides a very brief coverage of noise, as pertaining to ther-
mal and photon detectors. 2,7,12–16 Sensor and electronics noise can be
grouped into several categories: stochastic noise processes of physical (atomic)
origin, phenomenological noise (1/ f noise & electrical contact noise), and
‘system’ noise originating as a result of imperfections in an electronic sys-
tem. There is also another noise category: originating outside the sen-
sor but inherent in the signal itself. 2,13,17–19 The different noise sources are
normally uncorrelated with each other, being caused by independent pro-
cesses.
Physical noise processes are inherent in any natural process or electronic
component such as the detectors or transistors. Like optical aberrations
and optical defects, these noise processes are mathematically and physi-
cally part of nature itself. Similarly, some noise processes can be derived
and expressed in rigorous mathematical terminology, whereas other pro-
cesses are not well understood. Physical noise sources include Johnson
(thermal or Nyquist) noise, shot noise, temperature-fluctuation noise, and
generation–recombination (g-r) noise. Johnson noise results from ran-
dom charge movement due to thermal agitation in a conductor. Shot
noise arises from the statistical occurrence of discrete events, such as when
charges cross a bandgap potential barrier in a semiconductor. Generation–
recombination noise occurs when electron-hole pairs (with finite carrier
lifetime), are generated or recombined in a semiconductor.
Defying physical explanation, phenomenological noise with 1/ f fractal
power spectrum is ever present in observations of physical and natural
events. The noise spectrum has the form a/f^β, where a is a constant, f
is electrical frequency, and β is a constant usually equal to one or two.
1/ f noise is present in detectors, electronic circuits, flow processes such as
natural rivers and traffic, biological processes, and music. 20
System noise sources include interference noise, fixed pattern noise in
detector arrays, and microphonic noise. Interference is caused by external
events injecting spurious electrical signals by capacitive, inductive, or earth
loop coupling into an electronics circuit. Interference noise cannot readily
be calculated from first principles, but it can be modeled as an additional
noise source with magnitude ke NEP, with ke based on measurement or es-
timates, and NEP is defined in Section 5.3.11. Fixed pattern noise occurs in
multiple element detector arrays, emanating from statistical variations be-
tween individual detector elements. Assuming a uniform illumination on
all pixels in the focal plane, some pixels will provide a stronger signal than
others. This difference is attributable to the nonuniformity in the detector
responsivity amongst the various detector elements. Individual detector
elements can have different absolute responsivity values, different spectral
responsivity values, and/or nonlinear responses. Fixed pattern noise can
be measured and modeled using the principles of three-dimensional noise
analysis. 21 Microphonic and triboelectric noise results from minute me-
chanical deformation of an electronic device or conductors, causing signal
generation in piezoelectric or triboelectric materials, or variation in device
capacitance.

5.3.1 Noise power spectral density

Noise in a detector or electronic circuit can be expressed as a voltage v or
a current i, the choice of which depends on the impedance levels in the
design or the nature of the signal. Both forms are used in this book. In
its more-fundamental form, noise is expressed as a power spectral density
(PSD), with units of [W/Hz], [A2 /Hz], or [V2 /Hz]. It quantifies the con-
tent of the noise signal over a small bandwidth (say, 1 Hz) at a particular
frequency. In this case the spectral domain is not optical wavelength but
the temporal variation of a signal expressed in units of [Hz] (cycles per
second). The noise density can also be expressed in volts or current with
units of [A/√Hz] or [V/√Hz].
The PSD of a time signal f (t) is given by
S_f(ω) = \lim_{T→∞} \frac{⟨|F_T(ω; β)|^2⟩}{2T}, (5.6)
where the expected value ⟨·⟩ is taken over the ensemble of signals β (not
time t), and FT (ω; β) is the Fourier transform of the time signal f (t):
F_T(ω; β) = \int_{−T}^{T} f(t; β) e^{−iωt} dt. (5.7)

In effect this means that several (as in a very large number) infinitely
long sequences of the time signal f (t) are transformed by the Fourier trans-
formation and multiplied with their complex conjugates. The ensemble
PSD is then obtained by averaging the individual spectra.
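The limiting process of Equation (5.6) can be approximated numerically by averaging periodograms over an ensemble of finite records. The Python sketch below does this for unit-variance white noise; the record length, sample rate, and scaling convention are assumptions made for the example.

import numpy as np

# Ensemble-averaged periodogram estimate of the PSD, per Equations
# (5.6) and (5.7). Sketch only; fs, N, and n_runs are assumed values.
rng = np.random.default_rng(0)
fs = 1000.0     # sample rate [Hz]
N = 1024        # samples per realization
n_runs = 200    # ensemble size

psd_accum = np.zeros(N // 2 + 1)
for _ in range(n_runs):
    x = rng.normal(0.0, 1.0, N)             # one white-noise realization
    X = np.fft.rfft(x)
    psd_accum += np.abs(X) ** 2 / (fs * N)  # periodogram, two-sided scaling
psd = psd_accum / n_runs

# The two-sided PSD of unit-variance white noise sampled at fs is 1/fs.
print(psd[1:-1].mean(), "~", 1.0 / fs)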
White noise has a constant PSD at all frequencies (spectrally flat). In
contrast, band-limited noise may have an arbitrary PSD, depending pri-
marily on the frequency response of the electronic filter used to define
the bandwidth. 1/ f noise has constant noise power per frequency octave
(frequency ratio of 2).
The integral of the PSD over all temporal frequencies yields the total
power in the signal:
P = \int_{−∞}^{∞} S_f(ω) dω. (5.8)
Noise can therefore also be expressed in an integrated form with units of
[W], [A2 ], or [V2 ]. Depending on the context and the units used, it should
be clear whether the spectral or integral values are used.
Note that the power discussed here is electrical power in an elec-
tronic circuit, which is not the same as optical flux power. They share the
same unit and fundamental concept of power but have different contextual
meanings.

5.3.2 Johnson noise

Johnson noise is also known as Nyquist or thermal noise. 2,22 It is caused by
the random Brownian motion of carriers in a conductor with nonzero resis-
tance at nonzero temperatures. Johnson noise is only generated in the dis-
sipative real component of complex impedances, present in almost all elec-
tronic components. Johnson noise can be considered as a one-dimensional
form of blackbody radiation because its derivation is based on the density
of states (see also Sections 3.1 and 5.5.3). 23 Johnson noise power spectral
density (for positive frequencies) is given by
v_n^2 = 4kTR \left( \frac{x}{e^x − 1} \right)  and (5.9)

i_n^2 = \frac{4kT}{R} \left( \frac{x}{e^x − 1} \right), (5.10)

where v_n^2 and i_n^2 are the noise voltage and current power spectral den-
sities in [V^2/Hz] and [A^2/Hz], respectively. k is the Boltzmann constant, T
is the temperature in [K] of the resistive element with value R in [Ω], and
x = h f /(kT ), where f is the electrical frequency. The term
\frac{x}{e^x − 1} (5.11)

describes the frequency spectrum of the noise — note its similarity with
the Planck-law formulation. h f is the energy of a particle at frequency f ,
whereas kT is the kinetic energy of a particle at temperature T. Because
e^x = 1 + x + x^2/2! + x^3/3! + . . ., it follows that for small x the term has
unity value. Investigation shows that the Johnson noise spectrum is flat up
to frequencies of the order of 1012 Hz.
The probability density function of unfiltered Johnson noise is Gaus-
sian with variance i_n^2 Δf or v_n^2 Δf, where Δf is the electronic noise band-
width.
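In the flat (low-frequency) region, the rms Johnson noise over a bandwidth Δf reduces to v_n = √(4kTRΔf). A minimal Python sketch, with illustrative resistor values:

import math

# Johnson noise rms voltage in a bandwidth df, from the flat
# low-frequency limit of Equation (5.9). R, T, df are example values.
k = 1.380649e-23  # Boltzmann constant [J/K]

def johnson_vrms(R, T=300.0, df=1.0):
    """rms Johnson noise voltage [V] of resistance R [ohm] in df [Hz]."""
    return math.sqrt(4.0 * k * T * R * df)

print(johnson_vrms(1e3))          # ~4.1e-9 V for 1 kohm, 300 K, 1 Hz
print(johnson_vrms(1e6, df=1e4))  # 1 Mohm over a 10-kHz bandwidth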

5.3.3 Shot noise

Shot noise is the disturbance caused by a constant presence of random
discrete events. Examples of such events are the flow of carriers across a
potential barrier, the detection of single photons in very low light level ob-
servations, or hail stones on a sheet metal roof. Shot noise is generated in
all photovoltaic and photoemissive detectors. It is not found in photocon-
ductive detectors. Being the accumulation of discrete events, the PDF of
shot noise is Poisson. The shot noise equation is equally valid for photons,
electrons, or holes. Note that shot noise is independent of temperature.
At low frequencies (and for all practical purposes) shot noise PSD is
given by
i_n^2 = 2qI, (5.12)
where i_n^2 has units of [A^2/Hz]. I is the average current in [A] or [q/s],
and q is the charge on an electron [C]. This equation is easily derived from
the Poisson statistics, which states that the variance in events is equal to
the mean of the number of events, S_q(0) = σ_q^2 = a_q, where the subscript q
denotes quanta (electrons or photons). If q is the charge of one electron, it
follows that
S_i(0) = q^2 S_q(0) = 2q^2 σ_q^2 = 2q^2 a_q = 2qI, (5.13)
where Si (0) and Sq (0) are the low-frequency PSDs of the current and quan-
tum rates, respectively. The factor 2 is introduced to allowfor positive
ω ω
bandwidths only, i.e., integrating Si as 0 b Si (ω )dω instead of −ωb Si (ω )dω.
b

For large numbers of events, the Poisson distribution approximates a


Gaussian distribution. This means that shot noise has a Gaussian distri-
bution for large photon-flux rates. The distribution is not Gaussian when
small event numbers are considered, such as low light level at visual or
ultraviolet wavelengths.
The autocorrelation between consecutive events is the Dirac delta func-
tion (i.e., no correlation), hence the PSD of shot noise is spectrally flat.
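A minimal Python sketch of Equation (5.12), using an arbitrary 1-mA example current:

import math

# Shot noise rms current in a bandwidth df, Equation (5.12):
# in^2 = 2*q*I. The photocurrent value is an arbitrary example.
q = 1.602e-19  # electron charge [C]

def shot_irms(I, df=1.0):
    """rms shot noise current [A] for average current I [A] in df [Hz]."""
    return math.sqrt(2.0 * q * I * df)

print(shot_irms(1e-3))  # ~1.8e-11 A in a 1-Hz bandwidth for 1 mA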

5.3.4 Generation–recombination noise


nation of electron-hole pairs in a photoconductive detector. The incom-
ing photon flux creates free carriers that increase the conductance of the
detector. However, the finite lifetime of these carriers means that they
recombine, resulting in a decrease of the conductance. The net effect is
that the continuous generation and recombination of free carriers result
in noise in the signal. In photoconductive detectors, the g-r noise is the
primary noise contributor at medium-to-high frequencies. Generation–
recombination noise is not found in most photovoltaic detectors because
the carriers are quickly swept out of the junction before recombination.
Some materials do have g-r noise in photovoltaic devices.
The generation of free carriers can be caused by two processes: ther-
mally excited carriers or optically excited carriers. Thermally excited car-
riers are created by virtue of the energy levels in the material due to its
operating temperature. At higher temperatures, more thermally excited
carriers are generated (kT). The rms g-r noise current in a photon detector
is given by 2

i_{gr} = 2qG \sqrt{ηE_q A_d Δf + g_{th} A_d Δf l_x}, (5.14)

where q is the charge on an electron in [C], G is the photoconductive gain
(unitless), η is the detector quantum efficiency (unitless), Eq is the photon


flux on the detector in [q/(s·m2 )], Ad is the area of the detector in [m2 ], Δ f
is the noise equivalent bandwidth in [Hz], gth is the rate of thermal carrier
generation, and l x is the thickness in the optical propagation direction in
[m].
The g-r noise power spectral density has the general form
S ∝ \frac{t^2}{1 + f^2 t^2}, (5.15)
where t is the carrier lifetime in [s], and f is the electrical frequency. Note
that the carrier lifetime introduces a pole in the transfer function, resulting
in a roll-off at higher frequencies. The high-frequency roll-off is not as high
as for shot or thermal noise because the carrier lifetimes are much longer
than the brief events associated with shot noise and thermal noise.

5.3.5 1/ f noise


most of which are not well understood. 1/ f noise is found in virtually all
detectors, music, stock prices, rainfall records, literature, and other fractal
and natural signals. The very deep origins of 1/ f noises are not well
understood in most cases. Electronic manifestations of 1/ f noise include
contact noise, flicker noise, and excess noise. 1/ f noise causes detector
noise at frequencies up to several kHz. The noise PSD is given by
i_n^2 = \frac{k_1 I^α}{f^β}, (5.16)

where i_n^2 has units [A^2/Hz], k_1 is a constant, I is the (mostly DC) current
through the device, 1.25 < α < 4 (usually 2), and 0.8 < β < 3 (usually 2).

5.3.6 Temperature-fluctuation noise

Even in thermal equilibrium, the particle movement in an object causes
temperature fluctuations around the mean temperature. The temperature
distribution is white noise Gaussian with a mean value of T and variance 24
(ΔT)^2 = \frac{kT^2}{C}, (5.17)
where k is Boltzmann’s constant, T is the mean temperature in [K], and
C is the heat capacity of the object in [J/K]. It can be shown that the
temperature-fluctuation noise PSD is 12,24–26
(ΔT)_n^2 = \frac{4kT^2 G}{G^2 + (ωC)^2} (5.18)
in [K^2/Hz], from which the corresponding noise flux Φ can be determined
as 24,25
Φ = \sqrt{4kGT^2 Δf} (5.19)
in [W] for low frequencies, where k is the Boltzmann constant, T is
the object’s temperature, and Δf is the noise equivalent bandwidth. The
meaning of G and C is discussed in Section 5.4.2.

5.3.7 Interface electronics noise

The detector electronics interface also contributes noise. 14,15,27 This noise
is inherent in the electronic components (shot noise or Johnson noise) or
can be a result of processing (amplitude digitization/sampling noise).

5.3.8 Noise considerations in imaging systems

A subtle cause of noise in an imaging system results from the spatial sam-
pling (aliasing) of small objects in an image — this effect is not a noise
source, but it manifests itself in the form of semi-random variations in sig-
nal strength. Section B.4.1 describes this effect in a case-study context. The
performance of imaging systems, including noise effects, is well docu-
mented. 28–30 Noise in imaging systems is best analyzed in the context of
the three-dimensional noise model. 21,28,31

5.3.9 Signal flux fluctuation noise

Optical flux generation is a Poisson process, which carries, along with the
average value, an inherent noise variance equal to the mean flux level. The shot
noise fluctuation is smaller at IR wavelengths but becomes significant at
very low flux levels for visual or UV light. Fewer photons are required in
the visual or UV spectral bands than in the IR spectral bands because the
photons have higher energy per photon at shorter wavelengths. The noise
inherent in the signal sets a minimum detectable signal level. The follow-
ing derivation determines this minimum signal level under the assumption
that there are no other signal or noise sources.
An optical flux of Φ p [q/s] results in a current of I = ηqΦ p in a detec-
tor. This current causes shot noise (see Section 5.3.3) with a magnitude
i_n^2 = 2qIΔf = 2q^2 ηΦ_p Δf. (5.20)
The SNR is the ratio of the signal current to the rms noise current,
SNR = ηqΦ_p / \sqrt{i_n^2} = \sqrt{ηΦ_p/(2Δf)}; with Φ_e = hcΦ_p/λ, the SNR becomes

SNR = \sqrt{\frac{ηλΦ_e}{2hcΔf}}. (5.21)
The noise equivalent signal power (signal where SNR = 1) then follows as
NEP = \frac{2hcΔf (1)^2}{ηλ}. (5.22)
Equation (5.22) states that for a given wavelength λ and noise bandwidth
Δ f , the noise equivalent power due to the signal fluctuation is determined by
the detector quantum efficiency η.
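A one-line evaluation of Equation (5.22) in Python, for assumed values of wavelength, quantum efficiency, and bandwidth:

import math

# Signal-fluctuation-limited NEP, Equation (5.22):
# NEP = 2*h*c*df/(eta*lambda). Inputs are illustrative assumptions.
h = 6.626e-34  # Planck's constant [J.s]
c = 2.998e8    # speed of light [m/s]

def nep_signal_limited(lam, eta, df):
    """NEP [W] set by signal photon noise alone."""
    return 2.0 * h * c * df / (eta * lam)

print(nep_signal_limited(lam=4e-6, eta=0.7, df=100.0))  # ~1.4e-17 W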
It can likewise be shown that, for a wideband photon detector, the
noise equivalent power due to signal fluctuation is
Φ = \frac{2Δf \int_{ν_0}^{∞} M_{eν}(T_s) dν}{η \int_{ν_0}^{∞} \frac{M_{eν}(T_s)}{hν} dν}, (5.23)
where Φ has units of [W], Δ f is the noise bandwidth, η is the detector
quantum efficiency (assumed spectrally constant), ν is frequency in [Hz],
Me is the thermal exitance in [W/m2 ] from a blackbody at a temperature
of Ts , k is Boltzmann’s constant, h is Planck’s constant, and c is the speed
of light. The integration starts at ν0 = c/λ0 , which is the lowest frequency
that the sensor can detect.

5.3.10 Background flux fluctuation noise

The background flux also causes noise in the detector signal. If the sensor
is limited by the noise caused by the background, the sensor is said to be
operating at ‘BLIP’ (background-limited performance). The principle used
to determine the BLIP limit is the same as was used for signal-limited
performance, except that in this case the limit is set by the noise caused by
the background.
For a monochromatic source at frequency ν, the minimum detectable
power (SNR = 1) of an open detector against a thermal radiation back-
ground is 19
NEP = \frac{hν}{η(ν)} \sqrt{2AΔf \int_{ν_0}^{∞} \frac{η(ν) 2πν^2 \exp(hν/kT_b)}{c^2 [\exp(hν/kT_b) − 1]^2} dν}, (5.24)
where Δ f is the bandwidth, A is detector area, η (ν) is the detection quan-
tum efficiency, ν is the frequency, Tb is the background temperature, k is
Boltzmann’s constant, h is Planck’s constant, and c is the speed of light.

5.3.11 Detector noise equivalent power and detectivity

The electronic noise in a detector, normally expressed in terms of voltage
or current at the output of the detector or electronics, can be recalculated as
an equivalent optical flux power in the detector. This optical noise power
does not have the same units as the electronic noise power (discussed in
Section 5.3) and is called the noise equivalent power (NEP). The NEP is the
optical signal, in [W] or [q/s], required to give the same electrical signal
as the noise signal. From its definition, spectral NEP is given by
NEP_λ = \frac{i_d \sqrt{Δf}}{R_λ}, (5.25)
where i_d is the detector noise current density in [A/√Hz], Δf is the noise
equivalent bandwidth in [Hz], and R_λ is the spectral detector responsivity
in [A/W]. NEP is expressed in units of [W].
NEP is the noise for a particular detector device. Most noise source
contributions scale with detector area, and it is convenient to derive a
more-universal measure of detector performance, the specific detectivity (D*),
which is normalized with respect to area and frequency. D* is normally
given in units of [cm·√Hz/W], not [m·√Hz/W]. The detector’s D* and
NEP are related by
D*_λ = \frac{\sqrt{Δf A_d}}{NEP_λ} = \frac{R_λ \sqrt{Δf A_d}}{i_d \sqrt{Δf}}, (5.26)
where Ad is the detector area in [m2 ]. The D ∗ is a more-fundamental
property of the noise processes in detector materials and is independent of
the detector geometry. IR detectors are normally specified in terms of D ∗
at some background and source temperature. The noise of visual and NIR
detectors (e.g., silicon) is generally specified in terms of NEP.
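The conversion between NEP and D* in Equation (5.26) is often needed in practice. A minimal Python sketch, with an assumed pixel size and NEP, and with the customary centimetre-based units:

import math

# D* from NEP, Equation (5.26): D* = sqrt(Ad*df)/NEP.
# Pixel size, bandwidth, and NEP below are illustrative assumptions.
def dstar_from_nep(nep, Ad_cm2, df=1.0):
    """Specific detectivity [cm.sqrt(Hz)/W] from NEP [W]."""
    return math.sqrt(Ad_cm2 * df) / nep

Ad_cm2 = (30e-4) ** 2  # 30-um square pixel, area in [cm^2]
print(dstar_from_nep(nep=1e-12, Ad_cm2=Ad_cm2))  # ~3e9 cm.sqrt(Hz)/W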
NEP and detectivity can be expressed in spectral terms, NEPλ , or in
wideband terms, averaged over a spectral range. Similar to the spectral
values described above, the wideband values can be defined. Wideband
NEP is defined as
NEP_{eff} = \frac{i_d \sqrt{Δf}}{R_{eff}}, (5.27)

where i_d is the detector noise current density in [A/√Hz], Δf is the noise
equivalent bandwidth of the system, and Reff is the effective detector re-
sponsivity in [A/W], defined by (see Section 7.2.2)
R_{eff} = \frac{\int_0^{∞} R_λ τ_a M_λ dλ}{\int_0^{∞} τ_a M_λ dλ}, (5.28)
where Rλ is the detector spectral responsivity, τa is the spectral transmit-
tance of the atmosphere or filters, and Mλ is the reference or calibration
source spectral exitance.
Wideband D ∗ and NEP are related by
D*_{eff} = \frac{\sqrt{Δf A_d}}{NEP_{eff}} = \frac{R_{eff} \sqrt{Δf A_d}}{i_d \sqrt{Δf}}. (5.29)

5.3.12 Combining power spectral densities

One or more different noise sources can be present in a sensor system. The
different noise sources are mostly uncorrelated, originating from different
components each with individual, statistically independent processes. It
can be shown that for uncorrelated sources, noise power adds linearly. The
total noise is then given by

i_{eff} = \sqrt{\sum_{0}^{N} i_n^2}, (5.30)

where there are N noise sources i_n. The individual noise sources in this
equation can be either spectral noise [A/√Hz] or integrated wideband noise
[A].
A little care must be taken with noise expressed in terms of NEP. In
this case optical noise (NEP) corresponds to the signal itself, not the noise
it ‘mimics.’ Hence, when NEPs from different noise sources are combined,
the NEP components must be added in quadrature:
NEP_{eff} = \sqrt{\sum_{0}^{N} NEP_n^2}. (5.31)

The reciprocal of D* likewise adds in quadrature,

\frac{1}{D*_{eff}} = \sqrt{\sum_{0}^{N} \frac{1}{(D*_n)^2}}. (5.32)
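A small Python sketch of Equations (5.30) and (5.31), combining arbitrary example values in quadrature:

import math

# Uncorrelated noise terms combine as the root of the sum of squares,
# Equation (5.30); NEPs likewise, Equation (5.31). Values are examples.
def rss(values):
    """Root-sum-square of uncorrelated noise terms."""
    return math.sqrt(sum(v * v for v in values))

i_sources = [4.0e-12, 3.0e-12, 1.0e-12]  # [A/sqrt(Hz)] per source
print(rss(i_sources))    # effective noise current, ~5.1e-12 A/sqrt(Hz)

nep_sources = [2.0e-12, 1.5e-12]         # [W] per mechanism
print(rss(nep_sources))  # effective NEP, ~2.5e-12 W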

5.3.13 Noise equivalent bandwidth

Given a noise PSD S( f ) as input to a filter with voltage gain Av , the noise
equivalent bandwidth of the filter is defined as
Δf = \frac{1}{\max(A_v^2 S)} \int_0^{∞} A_v^2(f) S(f) df, (5.33)
where f is electrical frequency [Hz], A_v(f) is the voltage frequency re-
sponse of the filter or electronics circuit (unitless), S( f ) is the noise PSD in
[V2 /Hz], and max( A2v S) is the maximum value of A2v ( f )S( f ). It is conve-
nient to define a constant kn as the ratio of noise equivalent bandwidth to
−3 dB bandwidth of the filter.
In the general case where the noise PSD is not flat, it follows that
the noise equivalent bandwidth of a filter is a function of not only the
filter but also the noise spectrum. If the noise PSD has a significant 1/ f
noise component, it means that the filter must suppress at least the lower
frequencies; otherwise the maximum is not defined. See also Section 7.2.3.
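For a single-pole low-pass filter with white input noise (S(f) constant in Equation (5.33)), the noise equivalent bandwidth evaluates analytically to (π/2) times the −3-dB frequency. The Python sketch below verifies this numerically; the corner frequency is an assumed example value.

import numpy as np

# Noise equivalent bandwidth, Equation (5.33), for a single-pole RC
# low-pass filter and white input noise (S cancels out).
f3dB = 1.0e3                          # -3 dB frequency [Hz] (assumed)
f = np.linspace(0.0, 1.0e6, 2_000_001)
Av2 = 1.0 / (1.0 + (f / f3dB) ** 2)   # |Av(f)|^2 of a single-pole filter

delta_f = np.trapz(Av2, f) / Av2.max()
print(delta_f, "~", np.pi / 2 * f3dB)  # both ~1570.8 Hz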

5.3.14 Time-bandwidth product

The time-bandwidth product relates the temporal width of a pulse to the
required electrical bandwidth to achieve a specific design objective. Typ-
ical objectives include (1) the need to maintain the signal shape or (2) to
achieve the best SNR, irrespective of the resulting pulse shape. The nu-
merical value for the time-bandwidth product depends on the nature of
the signal, the nature of the noise, and the required objective. The value is
best determined uniquely for every different application. There are several
different definitions, often leading to confusion. The exact definition of the
pulse width and electronic bandwidth must therefore accompany the nu-
merical value. A commonly used definition for temporal pulse width is
the width of the pulse at half of its peak value. Filter bandwidths are
likewise often defined in terms of the full-width-half-maximum (FWHM)
bandwidth (also known as the −3 dB frequency bandwidth). There is no
general symbol defined for the time-bandwidth product; in this book the
symbol k f is used for this purpose.
One design approach matches the filter to the shape of the signal,
i.e., the filter frequency response is the Fourier transform of the signal
temporal shape. King 32 showed that for matched filters typical values for
the time-bandwidth product constant are approximately 1 for a sin( x)/x
pulse, 1.21 for a rectangular pulse, 1.64 for a half-sine pulse, and 1.77 for a
trapezium-shaped pulse. The time-bandwidth factor is an approximation,
depending on a number of assumptions and conventions. Its use here is
to obtain order-of-magnitude values. Typical values range from 0.5 to 2. It
may be convenient to consider the ‘generic’ value of this product to be π/2
because it sometimes simplifies calculations.
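A small Python sketch applying the time-bandwidth product, using King's matched-filter values quoted above and an arbitrary 1-µs pulse:

# Required -3 dB bandwidth for a given pulse width: B = kf / t_pulse.
# kf values are the matched-filter figures cited in the text; the pulse
# width is an arbitrary example.
kf_values = {"sin(x)/x": 1.0, "rectangular": 1.21,
             "half-sine": 1.64, "trapezium": 1.77}
t_pulse = 1.0e-6  # FWHM pulse width [s]

for shape, kf in kf_values.items():
    print(f"{shape:12s} kf = {kf:4.2f} -> B = {kf / t_pulse:9.3g} Hz")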

5.4 Thermal Detectors

5.4.1 Principle of operation

The scope of thermal detector technology includes a large number of dif-
ferent physical mechanisms, resulting in a wide variety of measurable
characteristics. Common to all these mechanisms is the underlying prin-
ciple that the absorbed heat increases the temperature of the device (hence
the name thermal), which is observed in the change of some observable
property of the device. The two major groupings include thermoelectric
transducer effects (Seebeck effect and pyroelectric effect) and parametric
transducers where the device temperature modulates an electric signal (re-
sistive bolometers, Golay cell, and p-n diodes). 25,33
The Peltier–Seebeck effect (discovered independently by Peltier, See-
beck and Thomson) is the bidirectional conversion between temperature
and voltage. This effect is exploited in thermocouple devices and Peltier
coolers used to cool detectors and mini-fridges. In a pyroelectric device,
temperature variations result in dielectric polarization changes in the ma-
terial. Pyroelectric detectors are commonly used in IR movement detectors
in security applications. In bolometer detectors the temperature change
results in a change in the device’s resistance. Nanotechnology bolometers
are used in low-cost thermal imaging applications.
In addition to the types mentioned earlier, there are several other ef-
fects also exploited in thermal detectors. 25 None of these effects require
the use of small-bandgap semiconductor materials, so they do not require
material cool-down for long-wavelength operation.
detectors remain sensitive to temperature effects, such as pyroelectric de-
tectors that lose polarization above the Curie temperature.
A common requirement for all thermal detectors is that the sensing el-
ement must be thermally isolated from ambient temperature structures in
order to allow minute temperature changes in the sensing element. A con-
ceptual model of a thermal detector is shown in Figure 5.2. The detector-
element thermal balance is affected by three heat-flow paths: (a) the in-
cident flux from the object (target), (b) thermally radiated flux from the
detector element, and (c) heat conducted from the detector element to the
device’s substrate. Thermal detector performance optimization entails the
careful optimization of the heat balance equation.
Modern thermal detectors employ elements with very small thermal
mass (heat capacity) compared to the surface area of the device. For a given
amount of absorbed energy, the temperature rise is maximized. Likewise,
Figure 5.2 Conceptual model for thermal detector: (a) physical layout and (b) flux flow
model.

because the radiating area is large compared to the thermal mass of the de-
tecting element, the detector quickly cools down once the incident energy
source is removed.

5.4.2 Thermal detector responsivity

In Figure 5.2 the detector element with heat capacity Cs in [J/K] is at a
temperature Ts and radiates with radiance Ls into a full spherical environ-
ment. The environment is at a temperature Te , radiates with radiance Le ,
and has an infinite heat capacity (it can source or sink an infinite amount
of energy without changing temperature). The target object is at a tem-
perature To , radiates with radiance Lo , and has an infinite heat capacity.
The detector element is fixed by mounting posts to the environment (the
readout electronics interface chip). The mounting posts conduct heat Pes
with conductance Gesc in [W/K] from the detector element to the envi-
ronment. The radiative flux exchange between the object and the detector
element is indicated by Φos . The radiative flux exchange between the detec-
tor element and the environment is indicated by Φes . The mounting posts
between the detector element and the environment have a collective heat
conductance Gesc . In this analysis the detector element is considered a thin
disk with area Ad . Under thermal equilibrium the net inflow of power on
the detector element is zero, Φos + Φes + Pes = 0, where inflowing power
is positive:
0 = A_d Ω_o \int_0^{∞} (α_{sλ} L_{oλ} − α_{oλ} L_{sλ}) dλ +
A_d (2π − Ω_o) \int_0^{∞} (α_{sλ} L_{eλ} − α_{eλ} L_{sλ}) dλ + G_{esc}(T_e − T_s), (5.34)
where Ω_o is the optics FOV, α_e is the environment absorptance (emissiv-
ity), αs is the detector-element absorptance (emissivity), and αo is the ob-
ject’s absorptance (emissivity). Equation (5.34) applies to the total flux over
all wavelengths. Each of the three radiative sources has its own spectral
emissivity, requiring a detailed spectral radiometry analysis to find the so-
lution. The following analysis assumes constant spectral absorption and
emissivity by applying Kirchhoff’s law and setting all values equal to a
scalar value α_o = α_e = α_s = ε_o = ε_e = ε_s = ε. Next, perform the
spectral integrals, resulting in:
0 = εA_d [Ω_o (L_o − L_s) + (2π − Ω_o)(L_e − L_s)] + G_{esc}(T_e − T_s). (5.35)
The two detector heat-loss mechanisms are radiation heat loss and heat
conduction to the environment. Consider these two cases separately. Case
1: Le = Ls , no heat loss via radiation to the environment. Then Ad Ωo Lo =
Gesc ( Ts − Te ), where the object radiance Lo causes a detector-element tem-
perature Ts . Case 2: Gesc = 0, isolated detector element, with no physical
contact with the environment. Then Ωo Lo = 2πLs − (2π − Ωo ) Le , where
object radiance Lo raises the detector-element temperature such that it ra-
diates at Ls . In general, both radiation and conduction to the environment
take place, the relative ratio of which depends on the heat capacity and
spectral emissivity values of the three components in this system.
By the Stefan–Boltzmann law, Equation (3.19), the flux radiated (or
lost) by a Lambertian object over all wavelengths is
Φ_{SB} = \frac{A σ_e ε T^4}{π}, (5.36)
where A is the radiating surface area, ε in this case is the effective hemi-
spherical emissivity, and T is the temperature. The temperature derivative
of the wideband flux is
\frac{dΦ_{SB}}{dT} = \frac{4A σ_e ε T^3}{π} = G_r(T) (5.37)
with units [W/K], which, by definition, is thermal conductance. G_r(T) can
be interpreted to mean that a thermal radiator loses flux by a ‘conductance’ given
by 4Aσ_e εT^3/π — this is not a physical conductance, but it has the equivalent
effect. This ‘conductance’ varies with temperature.
The derivative of Equation (5.35) with respect to temperature is (keep-
ing in mind that dLe /dT = 0 because Te is constant)
εA_d Ω_o \frac{dL_o}{dT} = εA_d 2π \frac{dL_s}{dT} + G_{esc}, hence

ε \frac{dΦ_o}{dT} = ε \frac{dΦ_s}{dT} + G_{esc} = G_r(T_s) + G_{esc}, and finally

ε \frac{ΔΦ}{ΔT} = G = G_r(T_s) + G_{esc}, (5.38)
which defines a detector thermal conductance for small changes in Φ_o
incident on the detector.
In the closed system in Figure 5.2(a), the absorbed flux has two effects:
a change in the detector-element temperature, C d(ΔT)/dt, as well as a
heat loss through conduction to the substrate P = GΔT. Thus, the tem-
perature of the detector element is given by the solution of the differential
equation 25
C \frac{d(ΔT)}{dt} + GΔT = ΔΦ(t), (5.39)
dt
where G is given by Equation (5.38). Assuming a sinusoidal input signal
ΔΦ(t) = ΔΦ e^{iωt}, the responsivity of the detector can be derived as being
of the general form 12
ΔT = ΔT_0 \exp(−t/τ_θ) + \frac{ΔΦ e^{iωt}}{G + iωC}, (5.40)

where τ_θ = C/G is the thermal time constant of the detector, and ΔT_0 is the
initial state of the detector. The transient exponential term becomes zero
for large t. The magnitude of ΔT then becomes
ΔT = \frac{ΔΦ}{\sqrt{G^2 + (ωC)^2}}
   = \frac{ΔΦ}{G \sqrt{1 + (ωτ_θ)^2}}. (5.41)
The responsivity is then
R = \left(\frac{ΔT}{ΔΦ}\right) \left(\frac{i_d}{ΔT}\right) = \frac{g}{G \sqrt{1 + (ωτ_θ)^2}}, (5.42)
where R is the responsivity in [A/W], and g depends on the conversion
mechanism for the type of thermal detector. Responsivity can be similarly
defined in terms of voltage output.
One technique to improve the frequency response of the detector is to
increase the detector conductance G. This would, however, result in a re-
duced responsivity, as shown in Equation (5.42). The better way to improve
the frequency response is to reduce the detector-element heat capacity C.
The heat capacity is C = cρV, where c = dC/dm is the detector mate-
rial specific heat in [J/(g·K)], ρ is the material density in [g/m3 ], and V is
the detector-element volume in [m3 ]. The material properties c and ρ are
fixed; the only design freedom is the volume. The detector area must be
maximized; therefore the detector-element thickness must be minimized
to reduce the element’s heat capacity — this can be done with no other
detrimental effect on performance. The only requirement on the detector-
element thickness is to achieve mechanical stability and rigidity.

Figure 5.3 Resistive bolometer construction (adapted 34): an absorbing plate with a thin-film
resistor is suspended above the silicon substrate; row and column conductors on the
substrate carry the bias current to the element.

5.4.3 Resistive bolometer

The resistive bolometer senses a change in electrical resistance of the de-
vice when the device temperature changes as a result of changing absorbed
radiant energy. The bolometer consists of a thin, absorbent, metallic or
semiconductor layer on a structure that is thermally isolated from the
substrate material (low conductance). Different materials, ranging from
metals to semiconductors, can be used for the resistive element in these
detectors. The structure is designed to maximize absorption and the asso-
ciated temperature increase but minimize heat loss to the substrate. Mi-
crobolometer detector elements are constructed using nanotechnology pro-
cesses, achieving elements with very good thermal performance. 25,33,35–37
Figure 5.3 shows the construction of one type of microbolometer detector,
used in modern staring array detectors. The readout electronics is located
underneath each detector element.
The metallic element bolometer has a positive temperature coefficient
of resistance (resistance increases when temperature increases). The tem-
perature coefficient of resistance α B in [K−1 ] is defined as
α_B = \frac{1}{R_B} \frac{dR_B}{dT_s}, (5.43)
where R B is the bolometer resistance. The detector-element resistance is
given by the parametric equation
R B ( T ) = R B0 [1 + α B ( T − T0 )] , (5.44)
where R B0 is the resistance at temperature T0 .

If the thermal energy kT of an electron exceeds the bandgap in a semi-
conductor material it could be excited into the conduction band (see Sec-
tion 5.5). The resulting holes in the valence band and electrons in the
conduction band then contribute to electrical conduction. Semiconductor
materials have negative temperature coefficients of resistance:

R_B(T) = R_0 T^n \exp(b/T), (5.45)
where R0 is a reference resistance, T is the temperature, and n and b are
constants. Rogalski 12 states that n = −3/2, and b is a material constant.
Budzier 24 and others state that n = 0 and b = Eg /(2k), where Eg is the
material bandgap and k is Boltzmann’s constant (see Section 5.5). Note
that Eg itself is also a function of temperature. The temperature coefficient
of resistance is then
α_B = −\frac{b}{T^2}. (5.46)
The output from the bolometer is proportional to the input flux, and
the device is therefore able to respond to static radiant flux input condi-
tions, i.e., when viewing a scene with no moving objects. This means that
thermal cameras with bolometer devices do not need optical choppers to
operate. Nonuniformity correction may require that the detector observes
one or more uniform temperature reference sources during calibration.
The responsivity of the bolometer detector element is given by 12,24,25
R = \frac{α_B ε I_b R_b}{G \sqrt{1 + (ωτ_θ)^2}}, (5.47)
where α_B is the temperature coefficient of resistance, ε is the effective
wideband surface absorption (emissivity), Ib is the constant bias current
through the device, Rb is the device electrical resistance, and G is the heat
conduction coefficient, given by Equation (5.38).
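The Python sketch below evaluates Equation (5.47) versus frequency; every element value is an invented, order-of-magnitude number, chosen only to show the low-frequency plateau and the thermal roll-off.

import math

# Bolometer responsivity magnitude vs frequency, Equation (5.47).
# All element values below are assumed, order-of-magnitude examples.
alpha_B = -0.02   # temperature coefficient of resistance [1/K]
eps = 0.8         # absorptance (emissivity)
Ib = 20e-6        # bias current [A]
Rb = 50e3         # element resistance [ohm]
G = 1e-7          # thermal conductance [W/K]
C = 1e-9          # heat capacity [J/K]
tau = C / G       # thermal time constant [s], here 10 ms

for f in (1.0, 10.0, 100.0, 1000.0):
    w = 2.0 * math.pi * f
    R = abs(alpha_B) * eps * Ib * Rb / (G * math.sqrt(1.0 + (w * tau) ** 2))
    print(f"f = {f:7.1f} Hz  |R| = {R:10.3e} V/W")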
Modern microbolometer structures are based on the ‘monolithic’ de-
sign concept by Honeywell, making use of silicon micro-machining. 25,35
The sensing element is a 0.5–3-µm thin membrane of silicon nitride (Si3 N4 ),
supported by two pillar structures or posts, covered with a thin film of the
resistive material, usually a material such as vanadium oxide (VOx ). The
posts perform a structural function to support the membrane in free space
but also act as the substrate for a conducting metal layer that connects the
resistive element on the membrane to the underlying read-out electronics.
The silicon-nitride membrane has gold on the back (bottom) side to reflect
energy from the substrate back into the device.
The whole structure is formed by first depositing a layer of silicon
dioxide on the silicon chip. The membrane and posts are then grown on
top of the SiO2 , after which the SiO2 is etched away, leaving the membrane
only, supported by the posts. The vanadium oxide film is formed by sput-
tering a layer of mixed oxides on the membrane. The mixed-oxide layer
provides a more-stable detector than would pure oxides. The mixed-oxide
temperature coefficient of resistance α is of the order of −2 to −4 %/◦ C.
Other materials can also be used, and in one case, a forward-biased p-n
diode is used as the temperature sensor.
Microbolometers are available in two-dimensional arrays. The indi-
vidual element sizes can be as small as 17 µm, but common sizes range
from 25–50 µm. Some devices employ a two-story construction to increase
the fill factor. The array structure is built on top of a silicon read-out elec-
tronics interface chip (ROIC).
Microbolometers have achieved relatively high performance with ther-
mal imaging cameras demonstrating noise equivalent temperature differ-
ence of 0.1 K using f /1 optics. 35 Best performance is achieved if the de-
tector is encapsulated in a vacuum (pressure less than 1 mbar) to reduce
thermal conductance through the air, in the small gap between the bottom
of the detector element and the substrate underneath.
The bolometer device has the following noise sources: 35 (1) Johnson
noise (see Section 5.3.2), (2) 1/ f noise (see Section 5.3.5), (3) temperature-
fluctuation noise (see Section 5.3.6), and (4) read-out electronics interface
circuit noise (see Section 5.3.7). These noise sources are uncorrelated and
add in quadrature (see Section 5.3.12).

5.4.4 Pyroelectric detector

The pyroelectric effect 25,33,38 is found in many different materials, but fer-
roelectric materials are more commonly used. The ferroelectric detector
senses the changes in the electrical polarization of the material resulting
from temperature changes. The output from the ferroelectric detector is
proportional to the rate of change in input flux — if the scene flux does
not change, the signal disappears. Sensors with pyroelectric detectors rely
on movement in the scene or require a device to ‘chop’ the incident signal,
alternating the scene flux with a reference flux.
The ferroelectric effect is found in materials such as barium stron-
tium titanate, strontium barium niobate, lithium tantalate, and lead titan-
ite. These materials have spontaneous internal polarization, measured as a
voltage on electrodes placed on opposite sides of the bulk of the material;
this forms a capacitor with the sensor material as the dielectric. At a con-
stant temperature, the polarization is equalized by mobile charges on the
Figure 5.4 Pyroelectric detector structure. 25 A pyroelectric element between top and
bottom electrodes is supported on posts above the silicon substrate; row and column
conductors on the substrate carry the bias current.

surface of the material. As the bulk temperature changes, the polarization
of the material changes, resulting in a temporary change in charges at the
surface and leads to a minute current flowing to restore the charges. The
current flow taking place across the detector’s capacitive structure results
in a voltage change. The voltage changes are only observed when the tem-
perature changes, hence the element temperature must be time-modulated
to produce an output from a static scene. This can be done by chopping
the signal mechanically or by moving the image across the detector.
The ferroelectric effect is maximum just below the Curie temperature,
and the detector is often operated at this point. Care must be taken not to
exceed the Curie temperature, as the device will lose its polarization above
this temperature. Losing polarization is a temporary effect, but the device
must be re-poled (application of a high field strength at high temperature).
One approach to pyroelectric detector layout is shown in Figure 5.4.
The structure employs micro-machined silicon technology to construct an
isolated element, supported by two posts. The dielectric material spans the
full detector area, with one electrode on the top side, and the other elec-
trode on the bottom side. This forms the device’s capacitor. This layout is
similar to the microbolometer design, and the thermal design considera-
tions are the same.
The pyroelectric detector can be operated in short-circuit (current)
mode or open-circuit (voltage) mode. The short-circuit current from the
detector is 24
i = pA_s \frac{d(ΔT)}{dt}, (5.48)
where the current is proportional to the time derivative of the tempera-
ture change, p = dP/dT is the pyroelectric coefficient in [C/(m^2·K)], P is
the polarization, As is the area of the detector in [m2 ], and (ΔT ) is the
temperature change in the material [Equation (5.39)]. The current-mode
responsivity of the pyroelectric detector is then given by
R = \frac{ε p A_d ω R_p}{G \sqrt{2(1 + ω^2 τ_{RC}^2)(1 + ω^2 τ_θ^2)}}, (5.49)

where ε is the surface absorption (emissivity), p is the pyroelectric coeffi-


cient, Ad is the area of the detector, R p is the parallel resistance value of
the loss (element) resistance and the pre-amplifier input resistance, G is
the heat conduction coefficient, τRC is the electrical time constant, and τθ
is the thermal response time.
Note that the device has a bandpass response, with a zero at the origin
and two poles (electrical and thermal time constants). The term ‘zero’ is
a differentiator (an ‘s’ operator in Laplace terminology), stemming from
the differentiator in Equation (5.48) — this means that the detector has
zero output at zero frequency. The first ‘pole’ at 1/τθ is the thermal time
constant [Equation (5.39)], and the second pole at 1/τRC results from the
RC electrical time constant of the detector capacitance and the load resistor.
The pyroelectric detector has three noise sources: (1) Johnson noise,
that is spectrally altered by the electronics RC time constant of the device
(see Section 5.3.2), (2) temperature-fluctuation noise (see Section 5.3.6), and
(3) read-out electronics interface circuit noise (see Section 5.3.7). These
noise sources are uncorrelated and add in quadrature (see Section 5.3.12).

5.4.5 Thermoelectric detector

The thermoelectric detector senses the temperature change as a voltage
change. The thermocouple is an example of the thermoelectric detector. 39
Two dissimilar metals or semiconductors, or a metal and a semiconduc-
tor, must be electrically and mechanically joined, such as by soldering. If
the two junctions so formed are at different temperatures, a voltage will
be generated according to the Seebeck effect. For small temperature dif-
ferences the generated voltage is linearly proportional to the temperature
difference. The Seebeck effect in semiconductor materials is more com-
plex than in metals because the semiconductor behavior also influences
the effect. The conceptual design of a thermoelectric detector is shown in
Figure 5.5.
The thermoelectric detector has lower responsivity than the pyroelec-
tric and bolometer detectors, but it does not require a bias current or op-
tical signal chopping to form an image. Furthermore, the thermoelectric

Figure 5.5 Thermoelectrical device layout (adapted 25): hot (detecting) and cold (reference)
thermoelectric junctions between metals A and B, formed on a silicon nitride bridge over
an etched well in the silicon substrate, with electrical contacts at either end.

detector has wider dynamic range than other detector techniques. In order
to increase the output signal, more than one junction pair can be combined
on one optical pixel, which is then called a thermopile.
When a metal rod is heated at the one end and cooled at the other end,
the electrons at the hot end have more energy kT and thus greater velocity
than at the cold end. There is a net diffusion of electrons from the hot end
to the cold end, which leaves the positive ions in the hot region. This diffu-
sion continues until the potential field so generated prevents the diffusion
of more electrons. 40 A voltage is generated between the hot and cold ends
of the rod. The ratio of the potential generated across the metal to a unit
temperature difference is called the Seebeck coefficient S = ΔV/ΔT. By
convention the sign of S represents the potential of the cold end relative
to the hot end, but the cold end has a negative potential relative to the hot
end. The Seebeck coefficient in a metal can be negative (Na, K, Al, Mg, Pb,
Pd, Pt) or positive (Mo, Li, Cu, Ag, Au), depending on the diffusion of the
electrons. The Seebeck coefficient in a p-type semiconductor is positive be-
cause in a p-type material the holes are the majority carriers. The Seebeck
coefficient depends on the material temperature S( T ). It can be shown that
the Seebeck coefficient for metals is given by (after some approximation) 40

S ≈ −\frac{π^2 k^2 T}{3qE_F(0)}, (5.50)
where k is Boltzmann’s constant, T is the temperature in [K], q is the charge
on an electron, and EF (0) is the Fermi energy at 0 K. The voltage across a
conductor with its end temperatures at T_0 and T_1 is determined by

ΔV = \int_{T_0}^{T_1} S(T) dT. (5.51)

Measuring the potential across a copper bar with electrical fly leads
made of copper — and connected to the hot and cold end of the bar —
will yield a zero voltage. The copper fly leads form the same potential
difference, opposing the potential in the bar under test resulting in a zero
voltmeter reading. If, however, the fly leads are made of another metal, the
difference in two Seebeck potentials will be measured on the voltmeter:
ΔV_{AB} = \int_{T_0}^{T_1} (S_A − S_B) dT. (5.52)

Combining Equations (5.50) and (5.52) yields the metal thermocouple equa-
tion
V_{AB} = aΔT + b(ΔT)^2, (5.53)
where a and b are the thermocouple coefficients, and ΔT is the temperature
difference with respect to the reference temperature of 273.16 K.
The responsivity of the thermoelectric detector is given by
R = \frac{N ε (S_1 − S_2)}{G \sqrt{1 + (ωτ_θ)^2}}, (5.54)
where ε is the surface absorption (emissivity), N is the number of junction
pairs per pixel, S1 and S2 are the Seebeck coefficients of the two dissim-
ilar materials, and G is the effective heat conduction coefficient [Equa-
tion (5.38)].
The thermoelectric detector has three noise sources: (1) Johnson noise
(see Section 5.3.2), (2) temperature-fluctuation noise (see Section 5.3.6), and
(3) read-out electronics interface circuit noise (see Section 5.3.7). However,
the responsivity is relatively low, so the Johnson noise dominates.

5.4.6 Photon-noise-limited operation

Photon-noise-limited operation in thermal detectors occurs when the noise
in the background and target photon flux is the limiting noise. This noise
source is not in the detector or its electronics but is inherent in the signal.
For a detector with unlimited spectral response (i.e., over all wavelengths)

D* = \frac{ε}{\sqrt{8σ_e ε k (T_d^5 + T_e^5)}}, (5.55)
8σe k( Td5 + Te5 )

Figure 5.6 Photon-noise-limited D* for thermal detectors with ε = 1 and 2π sr field of view
(adapted 25): D* [cm·√Hz/W] versus environmental temperature [K], plotted for detector
temperatures of 0 K, 77 K, 195 K, and 290 K.

where ε is the surface absorption/emissivity, σ_e is the Stefan–Boltzmann


constant, k is Boltzmann’s constant, Td is the detector-element temperature,
and Te is the environmental temperature. The photon-noise-limited D ∗ is
shown graphically in Figure 5.6.
Figure 5.6 indicates that a detector at room temperature, viewing a
280-K scene, can, at best, have a D* value of 2 × 10^10 cm·√Hz/W. If either
the detector or the background is reduced to absolute zero, the D* value
is still no better than 2.8 × 10^10 cm·√Hz/W. This derivation applies to a

detector with a wide spectral band — if a narrow spectral band is used,
the D ∗ will improve (see Section 9.7). These D ∗ values apply to a detector
FOV of 2π sr. If a cold shield is used to limit the detector’s view of the
ambient background, the D ∗ will improve by 1/ sin θ, where θ is the half-
apex angle of the cold shield. In order for the cold shield to be effective,
the shield must be considerably below room temperature.
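The Python sketch below evaluates Equation (5.55) and reproduces the values quoted above; the temperatures are those of the two scenarios in the text.

import math

# Photon-noise-limited D* of a wide-band thermal detector,
# Equation (5.55), converted to the customary [cm.sqrt(Hz)/W].
sigma = 5.670e-8   # Stefan-Boltzmann constant [W/(m^2.K^4)]
k = 1.380649e-23   # Boltzmann constant [J/K]

def dstar_blip(Td, Te, eps=1.0):
    """Photon-noise-limited D* in [cm.sqrt(Hz)/W] (2*pi sr FOV)."""
    d_m = eps / math.sqrt(8.0 * sigma * eps * k * (Td**5 + Te**5))
    return d_m * 100.0  # [m.sqrt(Hz)/W] -> [cm.sqrt(Hz)/W]

print(dstar_blip(Td=300.0, Te=280.0))  # ~2e10, as quoted in the text
print(dstar_blip(Td=290.0, Te=0.0))    # ~2.8e10 with a 0-K background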
Practical detectors may limit the spectral bandwidth with cold filters.
The effect of the cold filter (T_{filter} ≪ T_e) on D* can be modeled by 12,25

D* = \frac{ε}{\sqrt{8σ_e ε k (T_d^5 + F_{λ_1,λ_2})}}, (5.56)

where
F_{λ_1,λ_2} = \int_{λ_1}^{λ_2} \frac{h^2 c^3 e^x}{2 σ_e k λ^6 (e^x − 1)^2} dλ, (5.57)
where x = hc/(λkTe ), Te is the environment temperature, λ1 is the filter
short-cutoff wavelength, and λ2 is the filter long-cutoff wavelength.
For a detector employed in a focal plane array the photon-noise-limited
NETD (noise limit imposed by the environment on the detector) is 25

NETD = \frac{8F_{\#}^2 \sqrt{2kσ_e ε Δf (T_d^5 + T_e^5)}}{τ_s f_{fill} ε \sqrt{A_d} (∂M/∂T)}, (5.58)
where F# is the detector cold shield f -number, Δ f is the equivalent noise
bandwidth, τs is the optics transmittance, ffill is the focal plane array fill
factor, ε is the spectrally flat device absorption (emissivity), ∂M/∂T is the tem-
perature derivative of Planck’s law, and Ad is the area of the detector.

5.4.7 Temperature-fluctuation-noise-limited operation

Temperature-fluctuation-noise-limited operation occurs when the device
temperature variations limit noise performance (see Section 5.3.6). The
temperature-fluctuation-noise-limited D ∗ of a thermal detector with wide
spectral response can be shown 12,25 to be

D* = \sqrt{\frac{ε^2 A_d}{4kT_d^2 G}}, (5.59)
where ε is the spectrally flat device absorption (emissivity), A_d is the area
of the detector, k is Boltzmann’s constant, Td is the device temperature,
and G is the device thermal conductance between the sensing element and
the environment.
If the detector is employed in a focal plane array, the temperature-
fluctuation-noise-limited NETD (noise limit imposed by temperature fluc-
tuations in the device) can be shown 25 to be

NETD = \frac{8F^2 T \sqrt{kGΔf}}{τ_s f_{fill} \sqrt{A_d} ε (∂M/∂T)}, (5.60)
where F is detector cold shield f -number, T is the device temperature, k
is Boltzmann’s constant, G is the device thermal conductance between the
sensing element and the environment, Δ f is the equivalent noise band-
width, τs is the optics transmittance, ffill is the focal plane array fill factor,
A_d is the area of the detector, and ε is the device emissivity.

5.5 Properties of Crystalline Materials

To understand the photon-detection process, it is necessary to understand
the quantum effects in the interaction between semiconductor structures
and incident radiance fields. This section presents the basic theoretical as-
pects regarding semiconductor materials and their physical characteristics
that allow the absorption of light.

5.5.1 Crystalline structure

Modern electronics technology employs crystalline semiconductor materi-
als as a primary means to achieve its objectives. A crystal is a structured
lattice arrangement of identical building blocks, where each block is an
atom or a group of atoms. Crystalline structures are found in nature, but
advances in crystal growth techniques allow production of near-perfect ar-
tificial crystalline structures designed to obtain specific properties and per-
formance. 1 These crystals can be elemental, such as silicon or germanium,
or they can be alloys, such as InSb or HgCdTe. Some alloys support com-
position ratios; for example, the Hg_{1−x}Cd_xTe ternary alloy can be tuned to
different bandgap values by setting the ratio of HgTe to CdTe. 41 By setting
the ratio, the detector’s spectral range can be set to SWIR, MWIR, LWIR,
and even beyond to 14–30 µm.
The structure of a crystalline material is defined by two important
concepts, the lattice and the basis. 1 The lattice is a periodic geometrical
construction of points in space where all of the points have exactly the
same neighboring environment. The basis is a fundamental building block
of atoms attached to each lattice point. This location of the atoms in the
lattice structure provides the crystal with its unique physical, optical, and
electronic properties.
Any lattice point R′ can be obtained from any other lattice point R by
the translation:

R′ = R + m_1 a_1 + m_2 a_2 + m_3 a_3, (5.61)

where a1 , a2 , and a3 are three primitive basis vectors, and m1 , m2 , and
m3 are integers. This construction is known as the Bravais lattice, 42 and
it allows the generation of any lattice by all possible combinations of the
integers m1 , m2 , m3 . The crystal structure remains invariant under trans-
lation through any vector that is the sum of integral multiples of the basis
vectors. 43 There are 14 unique Bravais lattices in three-dimensional space.
They are classified according to the relationship between the three primi-
tive vectors and the angles (α, β, and γ) between them. A detailed discus-
sion about crystalline structure can be found in Singh. 1
The periodicity in a lattice allows the definition of symmetry via a set
of operations around a point: rotation, reflection, and inversion. The sym-
metry plays an important role in the electronic properties of the crystals
— many physical properties of semiconductors are tied to the presence or
absence of symmetry. For example, in the diamond structure (Si, Ge, C,
etc.) inversion symmetry is present, whereas in the zinc blend structure
(GaAs, AlAs, InAs, etc.) it is absent. 1

Figure 5.7 Band creation due to the decreasing distance between atoms for sodium
(adapted 45,46): electron energy versus atomic spacing; as free atoms in a gas bind into a
solid, the discrete 1s, 2s, 2p, and 3s orbitals (with 2N, 2N, 6N, and 2N states) broaden
into electron bands separated by a forbidden band.

5.5.2 Occupation of electrons in energy bands

The probability of finding an electron around an atom is given by the
atomic orbitals (electronic state), a set of mathematical functions describing
the behavior of one or a pair of electrons. The orbitals are designated by 1s,
2s, 2p, etc., where each orbital represents a discrete, quantized energy level.
Each orbital can contain at most two electrons, of opposite spin, by Pauli’s
exclusion principle. The atom’s energy state is defined by its electronic
distribution in the available orbital states. Orbitals represent increasing
energy levels from the ‘inner’ to ‘outer’ orbitals (1s, to 2s, to 2p, to ...). 44
When atoms are bound in a crystal lattice, the energy levels of each
atom are perturbed by the presence of other neighboring atoms, and by
the lattice as a whole. Figure 5.7 shows the electronic energy states of the
sodium atom as a function of interatomic spacing. 45 For large interatomic
spacing (free atoms in gas), there is no perturbation from the neighboring
atoms. As the spacing decreases, Pauli’s exclusion principle prohibits elec-
trons sharing the same energy orbitals. As a result, the orbital energy levels
start to differ slightly between adjacent atoms — the fixed energy level for
free atoms now becomes a range or ‘band’ of possible energy levels. 46 If a
large number of atoms are involved, the splitting of energy levels will re-
sult in a quasi-continuous spread (a probability function), hence the notion
of an energy band. Electrons can only occupy energy states within a band,
never between the bands. The energy space between the bands is called
the forbidden band or bandgap. This widening in the electronic energy
probability distribution results from the solution of the orbital electronic
wave function in the atomic-lattice context instead of the isolated-atom
context. 45 The width or spread of the band depends on the overlap of the
individual atoms’ energy orbitals — it affects the outer or higher energy
orbitals (e.g., 3s) first before affecting the inner orbitals (1s). 46,47 Figure 5.7
indicates four orbital energy bands at an atomic spacing of a. Note that for
the 1s state the energy probability distribution is very narrow, whereas for
the 3s band the probability distribution widens considerably.
The bandgap or atomic energy levels are commonly defined in units of electron volts [eV], the energy gained or lost by the charge of an electron moving across an electric potential difference of one volt. The charge on one electron is q = 1.602 × 10−19 C. One volt is 1 J/C; hence one eV is 1.602 × 10−19 J.
An important parameter in this theory is the Fermi level: electrons
fill the orbital states from the lowest energy level up to a higher energy
level EF, called the Fermi level. The probability that an energy state in the range E to E + dE is occupied by a thermally excited electron (under thermodynamic equilibrium) is given by the Fermi–Dirac distribution:

f(E) = \frac{1}{1 + e^{(E - E_F)/(kT)}}, (5.62)
where T is the temperature in [K], EF is the Fermi level in [J], and k is
the Boltzmann constant. Note that kT is the thermal energy of an electron
associated with the temperature T. The Fermi–Dirac distribution satisfies
Pauli’s exclusion principle and hence describes the distribution at all tem-
peratures. 42 At absolute zero temperature (0 K), all of the energy states
are filled from the lowest state, with no vacant or unfilled states, up to the
highest state defined by the Fermi level EF . At nonzero temperatures, the
filled states spread around the Fermi level, according to the Fermi–Dirac
distribution. At higher temperatures, the spread is broader, as shown in
Figure 5.8.
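As a simple numerical illustration of Equation (5.62), the Python fragment below (a minimal sketch; the energy grid and temperatures merely mirror Figure 5.8 and are not tied to any specific material) evaluates the Fermi–Dirac occupancy around the Fermi level:

```python
# Minimal sketch of Equation (5.62): Fermi-Dirac occupancy versus the
# energy offset (E - EF), at the temperatures plotted in Figure 5.8.
import numpy as np

k = 1.380649e-23     # Boltzmann constant [J/K]
q = 1.602176634e-19  # electron charge [C], converts eV to J

def fermi_dirac(E_offset_eV, T):
    """Occupancy probability for energy offset (E - EF) in [eV] at T in [K]."""
    if T == 0:
        # At 0 K the distribution is a step: filled below EF, empty above.
        return np.where(E_offset_eV <= 0, 1.0, 0.0)
    return 1.0 / (1.0 + np.exp(E_offset_eV * q / (k * T)))

E = np.linspace(-0.2, 0.2, 9)  # energy offsets [eV]
for T in (0, 77, 300):
    print(T, 'K:', np.round(fermi_dirac(E, T), 3))
```

The printed rows reproduce the qualitative behavior of Figure 5.8: the occupancy step at EF sharpens as the temperature drops.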

5.5.3 Electron density in energy bands

The classical view of an electron as a particle is not sufficient to describe the behavior of electrons in solid crystals. This analysis employs the wave
model of the electron. In order to determine the number of electrons in
an energy band, the number of allowable electron waves in the crystal
must be determined. The number of electron waves in the crystal is deter-
mined by the number of energy states per unit energy, given by the density
of states D ( E). 10,45,46,48–50 The ‘density of states’ methodology developed
here is also used in the derivation of thermal noise and Planck’s law (see
Sections 3.1 and 5.3.2). 23
Figure 5.8 Fermi–Dirac energy probability distribution for thermally excited electrons, around the Fermi (50%) energy level.

The wave function for a free electron can be written as 10

ψ(k, r) = e^{ik·r} = cos(k·r) + i sin(k·r), (5.63)

where k = (k_x, k_y, k_z) is the wave vector. The analysis is based on the principle that the electron wave must be one of the possible set of
standing waves in the crystal — essentially a boundary constraint because
the electron wave must have zero intensity at the crystal edges. Think
of the electron wave vector ki as a resonant standing wave in the crystal, whose longest wavelength (lowest energy state) is equal to twice the total length of the crystal L. This is a standing wave with nodes at each end
of the crystal with a peak in the center of the crystal. The crystal size L
corresponds to half the wavelength or π phase along the wave. Higher-
order modes must be integer multiples of this half wave kx = n x π/L x
along the x direction, and likewise along y and z.
In the simplified model of a free electron, the energy for a free electron
is given by the allowable standing-wave modes supported in a crystal cube
with sides of length L: 48

E = \frac{\hbar^2 k^2}{2m^*} = \frac{\hbar^2}{2m^*}(k_x^2 + k_y^2 + k_z^2) = \frac{h^2}{8m^* L^2}(n_x^2 + n_y^2 + n_z^2) = E_0 k_F^2, (5.64)
where k is the wave vector with its three Cartesian coordinates, m∗ is the
effective electron mass, and h̄ = h/(2π), where h is the Planck constant.
This equation is an approximation 43 of the electron’s energy near the band
edges EC and EV , acceptable within the primary concern of this analysis.
The value E_0 = h^2/(8m^* L^2) is the lowest energy state in the crystal (longest wavelength). k_F^2 = (n_x^2 + n_y^2 + n_z^2), where the n_i are the indices of the reciprocal lattice points (1/distance) inside the sphere with radius k_F. This sphere contains all of the modes associated with indices less than n_i — that is, all of the standing waves with longer wavelengths (lower energy). The number of modes that can be sustained in a sphere with radius k_F is given by all of the modes, starting with the longest wavelength (smallest k), and including all of the modes with shorter wavelengths up to the wave vector k_F.
The number of electrons that can be accommodated in states with energy E or less is 48

N(E) = 2\left(\frac{1}{8}\right)\left(\frac{4\pi k_F^3}{3}\right) = \frac{\pi}{3}\left(\frac{E}{E_0}\right)^{3/2}, (5.65)

where the factor of 2 allows for the positive and negative electron spin for each state, and 4\pi k_F^3/3 is the volume of the sphere. The factor of 8 is present because only the positive integers are considered. It can then be shown 45,48 that the density of electrons with energy less than E per unit volume is

D(E) = \frac{1}{2\pi^2}\left(\frac{2m^*}{\hbar^2}\right)^{3/2} E^{1/2} = \frac{\pi}{2}\left(\frac{8m^*}{h^2}\right)^{3/2} E^{1/2}. (5.66)
For N electrons per unit volume in the energy band, E_F is determined by the condition 49

\int_0^{E_F} D(E)\,dE = N. (5.67)

The Fermi level is obtained by solving the integral:

E_F = \frac{\hbar^2}{2m^*}\left(3\pi^2 N\right)^{2/3} = \frac{h^2}{8m^*}\left(\frac{3N}{\pi}\right)^{2/3}. (5.68)

The band that is normally fully filled with electrons at 0 K is called the
valence band, whereas the upper unfilled band is called the conduction
band.
From Equation (5.66) the density of electrons per unit volume in the conduction and valence bands becomes 10

D_C(E) = \frac{\pi}{2}\left(\frac{8m^*}{h^2}\right)^{3/2}(E - E_C)^{1/2} \quad \text{for } E > E_C (5.69)

D_V(E) = \frac{\pi}{2}\left(\frac{8m^*}{h^2}\right)^{3/2}(E_V - E)^{1/2} \quad \text{for } E < E_V. (5.70)
Figure 5.9 Crystalline material: (a) relationship between electron energy and wave number ki, and (b) density of states for a free electron in a semiconductor.

Figure 5.9(a) shows the electron energy as a function of wave number along one dimension. The figure shows a wave vector

for a single atom in the crystal. These wave vectors repeat exactly along
the lattice positions, a requirement for stable electronic wave solutions.
Figure 5.9(b) shows the density of electron states in the conduction and
valence bands of a semiconductor. Both of these functions are parabolic in
shape.

5.5.4 Semiconductor band structure

Electron dynamics in semiconductors is modeled by the Schrödinger equation, the solution of which describes the band structure of the material.
Given the large number of atoms in a material, the electrons are subjected
to a very complex potential profile. Atoms vibrate, adding a temporal vari-
ation to the atomic potential profile. Finally, electrons interact with each
other. Solving the Schrödinger equation is simplified in crystalline mate-
rials because the fixed spatial structure in the crystal presents a periodic
potential background, satisfying the Bloch theorem. 10,51
The Schrödinger equation is given by 1

\left[\frac{-\hbar^2}{2m_0}\nabla^2 + U(\mathbf{r})\right]\psi(\mathbf{r}) = E\psi(\mathbf{r}), (5.71)

where m0 is the mass of the electron, U (r) is the potential seen by the
electron, ψ(r) is the wave function describing the electronic state, and E
is energy. The first term represents the free-electron kinetic energy. The
second term represents the dependency on all of the other electrons and
atoms in the material. For crystalline materials, the potential U (r) has
a lattice periodicity R, hence U(r) = U(r + R). Because of the periodicity shown by U(r), the wave function will also be periodic: 51 |ψ(r)|² = |ψ(r + R)|². The physical meaning of ψ is that |ψ|² dx dy dz represents the probability of finding an electron in the volume element dx dy dz in the vicinity of position (x, y, z). 52 It can be shown 1 that the wave function is spread across the entire sample and has equal probability (ψ∗ψ) at every point in the lattice. Note that |ψ(r)|² is a probability density; hence the probability is periodic over the lattice. 1
The electronic wave function is given by the Bloch functions 1,10

ψ(k, r) = u_n(k, r) e^{ik·r}, (5.72)

a lattice-periodic set of plane waves with a space-dependent amplitude factor u_n(k, r). This important result shows that electrons in the lattice can be described by means of a wave vector k and energy E(k), as shown in Figure 5.9(a).

5.5.5 Conductors, semiconductors, and insulators

Materials are classified as electrical insulators, semiconductors, or conductors, depending on the width of the forbidden energy gap between two
outer energy bands (see Figure 5.10). Insulators have a large bandgap
(6 eV in the case of diamond) that prevents thermionic excitation at envi-
ronmental temperatures, and consequently no electrons are present in the
conduction band. The valence band and conduction band overlap in con-
ductors, allowing valence electrons to move freely through the material,
thus, resulting in electrical conductivity. Semiconductor materials can act
as insulators or conductors, depending on the temperature of the material.
At low temperature, the thermally excited electrons [see Equation (5.62)
and Figure 5.8] do not have sufficient energy to be excited to the conduc-
tion band (insulator). At higher temperatures, the thermal excitation may
excite a large number of electrons to the conduction band (conductor). Ex-
amples of semiconductor materials are Si, Ge, GaAs, InSb, and HgCdTe,
all of which are commonly used IR-detector materials.
If a crystalline material is tetravalent (each atom has four neighbors
as in silicon), the valence band is filled and occupied by four electrons.
These electrons are closely bound to the atom (in its physical position in
the lattice) and do not contribute to conduction. At low temperatures,
the higher energy conduction band is empty. If the temperature is raised
and the electrons in the valence band gain sufficient energy to cross the
forbidden energy gap, these electrons are excited to the conduction band.
The electrons in the conduction band are not closely bound to the atom;
they belong to the crystal as a whole. These electrons can therefore move
about freely and contribute to the conduction of electric current. Both electrons e− and holes h+ (the absence of an electron — an empty energy level) contribute to conduction in a semiconductor.

Figure 5.10 Energy bands for insulators (Eg ≫ kT, ≈6 eV for diamond), semiconductors (Eg > kT, 1.12 eV for silicon), and conductors.

5.5.6 Intrinsic and extrinsic semiconductor materials

In intrinsic (pure) semiconductor materials there are no impurities, and for every atom in the lattice the number of electrons in the conduction band
exactly matches the number of holes in the valence band. If the semicon-
ductor is doped with a small and controlled amount of a foreign atomic
element (impurities), the material is called an extrinsic semiconductor. The
dopant atoms replace the host atoms in the crystal lattice with consider-
able effect on the surrounding atoms’ bonds and electronic behavior, and
therefore the material properties. The impurities added to tetravalent crys-
tal semiconductors (e.g., silicon) can have five valence electrons (donors)
or three valence electrons (acceptors). When these atoms are located in the
tetravalent crystal lattice, the number of electrons and holes in the valence band do not match those of the surrounding tetravalent atoms.
Pentavalent donor impurities (e.g., phosphorus) add additional electron-hole pairs to the crystal. Four of the phosphorus atom’s electrons are tied
up in the covalent bonds to neighboring silicon atoms. The fifth electron is
free to move in the conduction band, whereas the fifth hole is bound to the
pentavalent atom’s location in the lattice [see Figure 5.11(b)]. The donor
doped material is called n-type material. In n-type material, the mobile
electrons are the majority carriers, and the bound holes are the minority
carriers.
Figure 5.11 Silicon semiconductor lattice at 0 K: (a) intrinsic, (b) extrinsic with donor doping (n-type), and (c) extrinsic with acceptor doping (p-type). 49,50

Trivalent acceptor impurities (e.g., boron) remove electron-hole pairs

from the crystal. The covalent bonds with the neighboring silicon atoms
require four electrons, whereas the trivalent atom can only provide three.
A valence-band electron becomes ‘caught’ in this fourth covalent bond —
it is not available for excitation to the conduction band. The silicon atom
providing this trapped electron now has one less electron (a hole). The hole
(absence of an electron) in the valence band can freely move through the
material because an electron fills the hole but in the process it creates a new
hole in another atom [see Figure 5.11(c)]. The acceptor-doped material is
called p-type material. In p-type material, the mobile holes are the majority
carriers, and the bound electrons are the minority carriers.
Only a small amount of energy (0.01–0.05 eV) is required to elevate
the ‘extra’ electron from the donor’s energy level ED into the conduction
band [see Figure 5.12(b)]. Similarly, a small amount of energy will excite
an electron from the valence band into the acceptor level E A (the missing
electron in the covalent bond) [see Figure 5.12(c)]. The impurity atoms
therefore introduce ‘allowable’ energy states in the otherwise forbidden
bandgap.
Donor doping introduces electrons in the conduction band but cap-
tures the holes in fixed lattice locations. Acceptor doping introduces holes
in the valence band but captures the electrons in fixed lattice locations. The
free electrons and holes generated by doping do not have their free coun-
terparts available for conduction, as in a metal where both free electrons
and holes are present. Electron-hole pairs formed by thermal excitation
support conduction by free electrons in the conduction band and by free
holes in the valence band.
Figure 5.12 Energy bands and Fermi–Dirac distributions for: (a) intrinsic semiconductor, (b) n-type extrinsic semiconductor, and (c) p-type extrinsic semiconductor.

Figure 5.12 shows electron excitation by photon absorption, but note that thermal excitation has the same effect in the conduction band. Electron excitation by thermal means or by photon means is indistinguishable once the electron is excited. Thermally excited electrons normally interfere with the device’s operation — they must be minimized. For this reason, semiconductor detectors with small bandgaps are cooled down to reduce thermal excitation.
It can be shown that for intrinsic material, the Fermi energy level is more or less halfway between the valence and the conduction bands: 48

E_F = \frac{E_g}{2} + \frac{3kT}{4}\log_e\left(\frac{m_h^*}{m_e^*}\right), (5.73)
where Eg is the bandgap, k is the Boltzmann constant, T is the tempera-
ture, m∗h is the effective hole mass, and me∗ is the effective electron mass.
The EF deviation from the exact center depends on the temperature and
the values of the effective hole and electron masses. The Fermi level for
extrinsic materials is shifted, depending on the nature of the doping 49,50
(see Figure 5.12).
In intrinsic detectors, the donor concentration nd and acceptor con-
centration n a are relatively low compared to intrinsic concentrations ni .
Whereas these impurity levels are too small to have much significance on
the optical properties of the device, they do affect the electronic device
characteristics of the detector. Examples of intrinsic detectors are silicon
(Si), germanium (Ge), indium-antimonide (InSb), and mercury-cadmium-
telluride (HgCdTe). The detector properties of the intrinsic detector are determined by the energy bandgap between the conduction band and the valence band of the material (Figure 5.12).
Extrinsic detectors employ donor and acceptor concentrations of 102
to 104 times higher than normally used for semiconductor device opera-
tion. Such high concentrations are required to achieve a reasonable re-
sponsivity performance because only the donor sites or acceptor sites can excite electrons to higher energy bands — compared to intrinsic detectors, where every atom can excite an electron. At these high doping levels, the
material can only be used in photoconductive applications (Section 5.8).
Examples of extrinsic detectors are Si:Hg, Si:Ga, Ge:Cu, or Ge:Zn, where
the first symbol indicates the detector bulk material, and the second sym-
bol indicates the doping element. The detection cutoff wavelength λc of
an extrinsic detector is determined by the energy difference between the
dopant energy level and the appropriate band (conduction or valence) of
the bulk material; see Figure 5.12.
At thermal equilibrium, the intrinsic material hole concentration p (in the valence band) equals the electron concentration n (in the conduction band), which is also the intrinsic carrier concentration n_i. The (temperature-dependent) intrinsic carrier concentration is given by 2,10,16,50

n_i^2 = np = 4\left(\frac{2\pi kT}{h^2}\right)^3 (m_h^* m_e^*)^{3/2} \exp\left(\frac{-E_g}{kT}\right). (5.74)
For doped material the concentrations across the p-n junction are

n_i^2 = n_n p_n = n_p p_p, (5.75)

where p_n is the hole concentration in n-type material in [cm−3], p_p is the hole concentration in p-type material in [cm−3], n_n is the electron concentration in n-type material in [cm−3], and n_p is the electron concentration in p-type material in [cm−3]. The dopant impurity sites will normally be
ionized — holes left behind in the valence band for electrons excited to the
acceptor sites, or from electrons excited from the donor sites to the con-
duction band — thus supporting the free movement of electrons and holes
in the material. Furthermore, thermal excitation is also normally low (few
thermally excited electrons in the conduction band). Under these condi-
tions the electron concentration in an n-type material nn is equal to the
donor concentration nd because the donor electrons are the only electrons
in the conduction band. It then follows that
p_n = n_i^2/n_n = n_i^2/n_d. (5.76)
Likewise, the hole concentration in p-type material p p is equal to the ac-
ceptor concentration n a , hence
n_p = n_i^2/p_p = n_i^2/n_a. (5.77)
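The fragment below is a minimal sketch of Equations (5.74) and (5.76); the silicon-like bandgap, effective masses, and donor concentration are assumed illustrative values:

```python
# Minimal sketch of Equations (5.74) and (5.76): intrinsic carrier
# concentration and the minority hole concentration in n-type material.
import numpy as np

k = 1.380649e-23       # Boltzmann constant [J/K]
h = 6.62607015e-34     # Planck constant [J.s]
q = 1.602176634e-19    # electron charge [C]
m0 = 9.1093837015e-31  # free-electron mass [kg]

T = 300.0              # temperature [K]
Eg = 1.12 * q          # assumed silicon-like bandgap [J]
me, mh = 1.08 * m0, 0.81 * m0  # assumed effective masses

# Equation (5.74): ni^2 = 4 (2 pi k T / h^2)^3 (mh me)^(3/2) exp(-Eg/(kT))
ni2 = 4 * (2 * np.pi * k * T / h**2) ** 3 * (mh * me) ** 1.5 * np.exp(-Eg / (k * T))
ni = np.sqrt(ni2)      # intrinsic carrier concentration [1/m^3]

nd = 1e21              # assumed donor concentration [1/m^3]
pn = ni**2 / nd        # Equation (5.76): minority holes in n-type material
print(f'ni = {ni:.2e} 1/m^3, pn = {pn:.2e} 1/m^3')
```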

5.5.7 Photon-electron interactions

When a photon is absorbed, two types of electron transitions can occur: intraband and interband transitions, 53 as shown in Figure 5.13 (the detailed situation is much more complex than this). Intraband transition happens when the electron jumps from one state to another in the same band. This phenomenon is common in quantum well devices but is outside the scope of this book.

Figure 5.13 Intraband and interband transitions of an electron from an initial state i to a final state f.
Optically induced interband transitions occur when the absorption of
a photon excites an electron across a forbidden energy gap to a higher
energy state. Interband transitions are present in all solids. 53 In a semi-
conductor material, the interband transition excites an electron from an
occupied state in the valence band (leaving behind a hole) into an unoccu-
pied excited state in the conduction band. Note the following requirements
and constraints regarding interband transitions: 54,55

1. The photon energy must exceed the material energy bandgap to excite
the electron across the bandgap Etransition = hν ≥ Eg .

2. The transitions can be either direct or indirect. A direct transition occurs when the crystal momentum is conserved such that ki = kf, where ki
and k f are the initial and final wave vectors, respectively. An indirect
transition involves a phonon, and momentum is not conserved. In this
case the initial and final wave vectors differ by a value equal to the
phonon wave vector q, such that ki = k f ± qphonon . The transitions are
shown in Figure 5.14.
A phonon is a quantum of energy associated with an electron wave’s
interaction with a crystal lattice — it is essentially vibrational heat in
the crystal lattice. 1,56

3. Certain transitions are forbidden, as regulated by selection rules.


Figure 5.14 Interband transitions: (a) indirect transition involving a phonon and a photon, and (b) a direct transition.

4. Subject to the Pauli exclusion principle, an interband transition occurs from an occupied state below the Fermi level to an unoccupied state
above the Fermi level.

5. If the energy gap E(k) between two bands is near-constant over a wide
range in k, photon-induced interband transitions occur more effectively.
This means that there are many initial and final states resonant with this
photon energy.

In photon detectors the transition near the smallest energy gap be-
tween the valence and conduction bands is of main interest. This occurs
where k = 0, which is designated as the Γ point.

5.5.8 Light absorption in semiconductors

The solution of the wave equation through a dielectric medium yields a complex index of refraction n = n_r + i n_i, of which both terms alter the
wave’s properties 10,43,49,55,57 (see also Sections 3.4.3 and 4.2.1). The real
refractive index component changes the wave’s velocity (and hence the
wavelength), leading to Snell’s law (the phase velocity across the wave-
front varies, changing the wavefront’s propagation direction). The imagi-
nary component changes the wave’s field strength (radiance). An increase
in the number of interacting particles in the medium (electrons in the case
of metals and semiconductors) leads to an increase in the imaginary re-
fractive index. The imaginary refractive index component is related to the absorption coefficient of the medium 10 by α = 4πn_i/λ. Photon absorption is
a very complex physical process; the description given here provides only
a brief insight into the underlying processes.
Light absorption occurs when an incident photon interacts with an
electron in the valence band. The electron absorbs the photon’s energy
and is excited to the conduction band, leaving a hole behind. The radiance
of a light beam propagating along the x direction in a material is given by
Lν ( x) = Lν (0)e−αν x , where αν is the material spectral absorption coefficient
[see Equation (4.4) in Section 4.2.1]. The balance of the photons Lν (0) −
Lν ( x) are absorbed in the material and converted to electrons.
The spectral absorption coefficient has different modes for the various different excitation transition processes, 10,43,52,55 resulting in the characteristic shapes in the spectral absorption coefficient, Figure 5.15. Direct transition materials (e.g., GaAs, InP, InSb, and HgCdTe) have absorption coefficients of the form 2,56

\alpha_\nu \propto \sqrt{h\nu - E_g} \approx \alpha_{\lambda c} + \alpha_0\sqrt{h\nu - E_g} \quad \text{for } h\nu > E_g. (5.78)

The energy structures in indirect semiconductors (e.g., Si and Ge) are laterally displaced, hence phonons are also involved in the excitation process. Indirect semiconductors have absorption coefficients of the form 2,10,43,56

\alpha_\nu \propto (h\nu - E_g \pm E_p)^2 \approx \alpha_{\lambda c} + \alpha_0(h\nu - E_g \pm E_p)^2 \quad \text{for } h\nu > E_g. (5.79)

The free-carrier absorption coefficient is proportional to the electron and hole carrier concentration because each carrier site acts as an absorption site. At energy levels below the bandgap, an exponential absorption takes place, known as the Urbach tail, 7,10,48,52,56,58–60 where

\alpha_\nu = \alpha_{\lambda c}\exp\left(\frac{h\nu - E_g}{kT}\right) \quad \text{for } h\nu < E_g. (5.80)
The spectral shapes of these absorption processes are shown in Figure 5.16.
The typical absorption coefficient curves shown in Figure 5.15 reflect
specific material samples under specific conditions; different samples and
measurements may differ. The absorption coefficient is a function of tem-
perature, crystal orientation, and impurity concentration (intrinsic or ex-
trinsic material). 61
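The fragment below sketches the spectral behavior of Equations (5.78) and (5.80) for an assumed direct-transition material; the bandgap and the constants α_λc and α_0 are illustrative numbers, not measured data for any sample:

```python
# Minimal sketch of Equations (5.78) and (5.80): direct-gap absorption edge
# with an Urbach tail below the bandgap. All constants are assumed values.
import numpy as np

h = 6.62607015e-34   # Planck constant [J.s]
c = 2.99792458e8     # speed of light [m/s]
k = 1.380649e-23     # Boltzmann constant [J/K]
q = 1.602176634e-19  # electron charge [C]

Eg_eV = 0.23         # assumed bandgap [eV] (cutoff near 5.4 um)
T = 77.0             # material temperature [K]
alpha_c = 3e4        # assumed absorption at the band edge [1/m]
alpha_0 = 1e7        # assumed scaling constant [1/(m sqrt(eV))]

def alpha(wavelength_m):
    E_eV = h * c / (wavelength_m * q)  # photon energy [eV]
    if E_eV > Eg_eV:                   # Equation (5.78), direct transition
        return alpha_c + alpha_0 * np.sqrt(E_eV - Eg_eV)
    # Equation (5.80), exponential Urbach tail below the bandgap
    return alpha_c * np.exp((E_eV - Eg_eV) * q / (k * T))

for wl in (3e-6, 5e-6, 5.5e-6, 6e-6):
    print(f'{wl*1e6:.1f} um: alpha = {alpha(wl):.2e} 1/m')
```

The output shows the sharp drop in absorption as the photon energy falls below the bandgap, which is the origin of the cutoff wavelength λc.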
From Equation (4.4) it follows that the flux at depth d into the detector
will be Φ(d) = Φ(0)(1 − ρ)e−αd . Most of the photons are absorbed within
a distance of 1/α into the absorbing material. For large absorption coef-
ficients, this occurs within a thin layer on the skin of the detector. From
the absorption graphs in Figure 5.15, it follows that photons with longer
wavelength are absorbed deeper into the material, whereas photons with
shorter wavelength are absorbed nearer the surface of the material.
Figure 5.15 Photon absorption coefficient for intrinsic materials (adapted 1,10,62).

Figure 5.16 Photon absorption coefficient: ideal, Urbach tail, and free-carrier components (adapted 56).

What happens to the photons with wavelengths exceeding λc that are not absorbed in the material? The material normally has low attenuation at wavelengths beyond λc. This fact is often employed to construct ‘sandwich’ detectors that are located in close proximity behind one another.
Each detector in the sandwich is made of a different material, sensitive in
a different spectral band. The detectors are electrically isolated from each
other; i.e., independent detectors.
Every atom in an intrinsic material can absorb photons, but only the
impurity locations in an extrinsic material can absorb photons. The absorp-
tion coefficients in intrinsic detector materials are higher than in extrinsic
detector materials because there are more absorption sites in intrinsic ma-
terials than in extrinsic materials.
5.5.9 Physical parameters for important semiconductors

Tables A.5 and A.6 contain a summary of physical parameters for selected
intrinsic and compound semiconductors. Material properties are well doc-
umented. 63 It is important to note that all parameters are to be used in
calculations regarding direct transitions near the Γ point, at the minimum
bandgap.

5.6 Overview of the Photon Detection Process

5.6.1 Photon detector operation

If the energy of the photon exceeds the energy bandgap of the material,
an absorbed photon creates an electron-hole pair in the semiconductor
material. The excited electron-hole pair recombines after some time, the
average value of which is the carrier lifetime. The carrier lifetime is not a
material property but rather a function of the application. Detectors that
employ this phenomenon are called photon detectors.
Photoconductive detectors operate on the principle that photons are ab-
sorbed in the bulk of the detector material, and an electron-hole pair is
formed. The electron-hole pair separates and contributes to electrical cur-
rent conduction, thereby lowering the bulk resistance of the detector. The
photon-induced change in conductance can be detected by an external electronic circuit.
Photovoltaic detectors operate on the principle that electron-hole pairs
formed by photon absorption in the depletion region of a p-n diode are
accelerated across the depletion region. This electron-hole pair contributes
to current flow by injecting minority carriers, thereby causing current flow
through the depletion region. The photocurrent flows through the deple-
tion region, under the built-in bias that exists in the depletion region. The
photocurrent flow can be sensed by an external electronic circuit.

5.6.2 Carriers and current flow in semiconductor material

Current flow in semiconductors may be due to diffusion flow or drift flow by the carriers (holes and electrons). Diffusion movement of carriers results
from the equalization of nonuniform densities of charges. The principle is
the same as the diffusion of gas particles to remove nonuniformity in a gas
distribution. Whenever a gradient exists in charge densities, the diffusion
current will tend to remove such gradients. Diffusion current does not
require an external field (electric voltage or gas pressure). The diffusion
current density is given by 16,48

J = -qD\frac{dn_c}{dx}, (5.81)

where J is the diffusion current density in [A/m2], q is the electronic charge in [C], D is the diffusion constant in [m2/s], and dn_c/dx is the carrier density gradient in the material.
Drift movement of carriers occurs when charged carriers are accelerated under an electric field, by Ohm’s law:

J_d = \sigma E = nq\mu E, (5.82)

where J_d is the drift current density in [A/m2], σ is the material electrical conductivity in [1/(Ω·m)], E is the electric field strength in [V/m], nq is the charge density in [C/m3], and μ is the charge mobility in the material in [m2/(V·s)].
The diffusion constant D and the charge mobility μ are statistical ther-
modynamic phenomena related by the Einstein equation
\frac{D_e}{\mu_e} = \frac{D_h}{\mu_h} = \frac{kT}{q}. (5.83)
In the case where a potential gradient and a concentration gradient exist
at the same time in the material, the total electron current is then
J_e = n_e q\mu_e E + qD_e\frac{de}{dx}, (5.84)

where de/dx is the electron density gradient. If dh/dx is the hole density gradient, the total hole current is

J_h = n_h q\mu_h E + qD_h\frac{dh}{dx}. (5.85)
The first term in these equations is the drift current through the mate-
rial and the second term is the diffusion current. The drift and diffusion
currents are caused by different forces and flow independently from each
other.
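As a minimal numerical sketch of Equations (5.83) and (5.84), the fragment below sums the drift and diffusion terms for assumed illustrative values of mobility, field, and density gradient:

```python
# Minimal sketch of Equations (5.83) and (5.84): total electron current as
# drift plus diffusion. All device values are assumed illustrative numbers.
q = 1.602176634e-19   # electron charge [C]
k = 1.380649e-23      # Boltzmann constant [J/K]

T = 300.0             # temperature [K]
mu_e = 0.135          # assumed electron mobility [m^2/(V.s)]
D_e = mu_e * k * T / q  # Einstein relation, Equation (5.83) [m^2/s]

n_e = 1e21            # assumed electron concentration [1/m^3]
E_field = 100.0       # assumed electric field [V/m]
dn_dx = 1e24          # assumed electron density gradient [1/m^4]

J_drift = n_e * q * mu_e * E_field  # drift term [A/m^2]
J_diff = q * D_e * dn_dx            # diffusion term [A/m^2]
print(f'Je = {J_drift + J_diff:.2e} A/m^2 '
      f'(drift {J_drift:.2e}, diffusion {J_diff:.2e})')
```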

5.6.3 Photon absorption and majority/minority carriers

When a material is illuminated with a constant photon flux Φq for a time period t, then the absolute change in number of holes equals the absolute change in number of electrons:

\Delta e = \Delta h = \Phi_q t. (5.86)
Figure 5.17 Comparison between ideal and actual photon detector quantum efficiency (adapted 2).

If this absorption took place in an n-type material, the percentage change in the number of electrons (majority carriers) is very small (Δe ≪ n), but the percentage change in the number of holes (minority carriers) is very large (Δh > p). For practical purposes, the percentage change in majority carriers is so small that it can be ignored. The percentage change in minority carriers is significant and can effectively be regarded as the primary signal. Hence, it is generally accepted that the signal is formed by the optical injection of minority carriers.

5.6.4 Quantum efficiency

The relationship between the number of incident photons on the detector and the number of generated electrons is called the quantum efficiency η.
Quantum efficiency describes how effectively a photon detector can con-
vert the incoming flux to current. Quantum efficiency has a value between
zero and one, often expressed as a percentage.
Quantum efficiency has two components: (1) internal quantum effi-
ciency indicating how well the detector material can convert photons to
electrons, and (2) external quantum efficiency indicating the fraction of
photons incident on the external surface of the detector that penetrate the
material and are not reflected off the surface.
A detector’s quantum efficiency is not constant but varies with wave-
length, as shown in Figure 5.17. These deviations are particularly apparent
for wavelengths around λc and for λ ≪ λc. At longer wavelengths, the
quantum efficiency is limited by the material’s absorption coefficient (in-
ternal quantum efficiency). At shorter wavelengths, the quantum efficiency
is determined by the reflection losses from the surface of the detector (ex-
ternal efficiency).
Figure 5.18 Typical detector spectral responsivity and quantum efficiency.

The material surface reflects a portion ρ of the total incoming flux by Fresnel reflection (see Section 3.4.4), affecting the external quantum efficiency. The value of ρ depends on the index of refraction of the detector material and is given by

\rho = \left(\frac{n_2 - n_1}{n_2 + n_1}\right)^2, (5.87)
where n2 is the refractive index of the detector material, and n1 is the
refractive index of the surrounding medium (normally air). For silicon
and germanium detectors the reflection can be as high as 40%. Special
anti-reflection coatings are normally applied to the surface to reduce the
external reflection.
The quantum efficiency — the ratio of charge carriers generated to the incoming photon count — can be calculated from 2

\eta = \frac{J}{qE_q} = (1 - \rho)(1 - e^{-\alpha d}), (5.88)

where J is the current density in [A/m2], q is the electronic charge, E_q is the photon irradiance in [q/(s·m2)], ρ is the semiconductor Fresnel reflectance, α is the detector material absorption coefficient (see Section 5.5.8), and d in [m] is the depth over which absorption takes place in the material (i.e., depth of the active region in the detector). It is evident that the quantum efficiency increases when the reflectance decreases or where the absorption thickness αd is high (see Section 4.2.4). The material absorption coefficient depends on the wavelength, as shown in Figure 5.15.
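The fragment below is a minimal sketch of Equations (5.87) and (5.88) for an uncoated surface; the refractive index, absorption coefficient, and active depth are assumed illustrative values:

```python
# Minimal sketch of Equations (5.87) and (5.88): Fresnel loss and quantum
# efficiency of an uncoated detector. All inputs are assumed values.
import numpy as np

n1, n2 = 1.0, 3.4     # refractive indices of air and a silicon-like detector
rho = ((n2 - n1) / (n2 + n1)) ** 2   # Equation (5.87), Fresnel reflectance

alpha = 1e5           # assumed absorption coefficient [1/m]
d = 10e-6             # assumed active-region depth [m]
eta = (1 - rho) * (1 - np.exp(-alpha * d))  # Equation (5.88)
print(f'rho = {rho:.3f}, eta = {eta:.3f}')
```

For these numbers roughly 30% of the flux is lost to surface reflection before absorption even starts, which illustrates why anti-reflection coatings are so effective in raising the external quantum efficiency.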
The peak quantum efficiency in high-performance detectors often ex-
ceeds 0.8. A high quantum efficiency is normally obtained by an optimized
anti-reflection coating on the detector surface. Figure 5.18 shows typical
spectral responsivity curves for three photon detector types. Also shown
are the quantum efficiencies for each detector as well as the 100% quantum
efficiency line [Equation (5.5)]. Note that the photon detector responsivity
increases toward longer wavelengths, as shown in Figure 5.1. This stems
from the fact that the photon energy decreases toward longer wavelengths
and that larger numbers of photons (hence the larger detector current) are
required to provide one watt of optical power.
The quantum efficiency for extrinsic detectors is lower than the quan-
tum efficiency of intrinsic detectors because there are many more possible
excitation locations in intrinsic detector materials than in extrinsic detector
materials. Typical absorption coefficients for extrinsic materials are less
than 103 m−1 , compared to 103 –107 m−1 for intrinsic materials.
Figure 5.19 shows a comparison of various detectors’ spectral D ∗ per-
formance. The caption in the original publication 12 is as follows:

Comparison of the D∗ of various commercially available IR detectors when operated at the indicated temperature. Chopping frequency is 1000 Hz for all detectors except the thermopile (10 Hz), thermocouple (10 Hz), thermistor bolometer (10 Hz), Golay cell (10 Hz), and pyroelectric detector (10 Hz). Each detector is assumed to view a hemispherical surrounding at a temperature of 300 K. Theoretical curves for the background-limited D∗ (dashed lines) for ideal photovoltaic and photoconductive detectors and thermal detectors are also shown. PC, photoconductive; PV, photovoltaic; PE, photoemissive; and PEM, photoelectromagnetic detector.

Note that some detectors have quite high D ∗ (high quantum efficiency)
values compared to the theoretical limit.

5.7 Detector Cooling

Thermally excited carriers have energy kT, which may exceed the bandgap
in a detector [Equation (5.62)]. The thermally excited carriers contribute to
detector noise and reduce the performance of the detector. The objective
with detector cooling is to achieve kT ≪ Eg. It is evident that materials
with wide bandgaps are less prone to thermal carrier excitation. Long-
wavelength detectors generally operate at temperatures such as 77 K or
even down to 4 K. Modern detector development, and particularly the
use of HgCdTe, enabled LWIR detectors to operate at somewhat higher
temperatures (80 K) than older extrinsic detectors (4 K). Current research
is pursuing detector technologies that would enable even higher operating
temperatures. 7
Figure 5.19 Spectral D∗ for various detector types. Used with permission from Rogalski. 12

Figure 5.20 Detector vacuum dewar: (a) with wire feed-through and (b) cooler inserted.

Cooling is generally achieved by three techniques: (a) gas/liquid cryogen coolants, 64 (b) thermo-electric cooling, and (c) radiative cooling. Ra-
diative IR detector cooling is only feasible in space, where the ambient
temperature is extremely low. The principle of radiative cooling is energy
loss by thermal (Planck-law) radiation from the hot detector to the low
background temperature (around 3 K) in space.
Gas/liquid cooling employs a substance called a cryogen, which nor-
mally exists as a gas at room temperatures and liquifies at subzero tem-
peratures. The cooling effect of the cryogen is achieved by moving the
substance through a series of pressure-volume-temperature (PVT) phase
changes, utilizing the appropriate physical process at each of the PVT op-
erating points. There are two types of cryogenic coolers: (a) open-cycle
coolers that discard the cryogen after cooling, and (b) closed-cycle coolers
that retain the cryogen in a closed circuit of pipes and reservoirs. 27 Current research aims to develop micromachined coolers. 65
Detectors operating at very low temperatures are housed in a me-
chanical structure called a dewar, shown in Figure 5.20(a). The dewar is,
in effect, a small thermos flask with the detector device on the inside cold
end of the ‘cold finger’ (the inside tube of the flask). The dewar is evacu-
ated to very high vacuum in order to minimize the heat load on the cooler.
The cooler is inserted into the cold finger and cools down the tip of the
cold finger to the detector operating temperature. The detector tempera-
ture is within a few degrees of the cryogen boiling point. Dewars can be
made of glass, metal, or a combination of glass and metal. In some cases,
the dewar is integrated with the cooler, as a single assembly.
A typical open-cycle Joule–Thomson cooler is shown in Figure 5.20(b).
The process (a) starts with a gas at a pressure of 45 MPa at 300 K, and
then (b) the gas is pre-cooled by the discarded gas prior to (c) expanding through a very small nozzle, such that the pressure drops to around 1–2 bar, and cools the cryogen to liquefaction (e.g., 77 K in the case of nitrogen),
at which point the liquid cryogen absorbs the heat from the detector and
evaporates, (d) flowing away from the detector. The cryogen is selected on
the basis of its boiling point and cooling capacity. Popular cryogen gases
include helium (4 K), nitrogen (77 K), and argon (87 K), or a mixture of
gases.
The Stirling-cycle thermodynamic process is used most commonly in
closed-cycle coolers. The cryogen is contained in a sealed piping system
and oscillates between the hot and cold ends. The Stirling engine normally
employs an electric motor (rotary or linear) to compress the gas. Stirling
coolers are available in two configurations: the integral cooler and the split
cooler. The integral cooler combines all mechanics and cryogenics into a
single unit. The split Stirling cooler separates the compression unit from
the cooling unit. The split engine has the advantage of being silent with
no vibration coupling from the engine to the detector. This was partic-
ularly important during the earlier generations of coolers. Research has
resulted in new engines that run much quieter, allowing the integration of
the cooling engine with the cooler.
Thermoelectric (TE) cooler modules are solid state heat pumps that
employ the Peltier effect — the phenomenon whereby the passage of an
electrical current through a junction consisting of two dissimilar metals
results in a cooling effect. When the direction of current flow is reversed,
heating will occur. A thermoelectric module consists of an array of p-
and n-type semiconductor elements heavily doped with electrical carriers,
as shown in Figure 5.21. The array of elements is soldered so that it is
electrically connected in series and thermally connected in parallel. This
array is then fixed to two ceramic substrates; a hot and a cold side. Heat
is absorbed at the cold side of the n- and p- type elements. The electrical
charge carriers (holes in the p-type; electrons in the n-type) always travel
from the cold side to the hot side, and heat is always released at the hot
side of thermoelectric element. The temperature differential between the
hot and cold ends is inversely proportional to the pump load. At higher
heat loads the temperature differential reduces.
Figure 5.21 Two-stage thermoelectric cooler (adapted 66).

The vast majority of thermoelectric coolers achieve temperature differences of no more than 50–60 °C across one level of thermoelectric cooling.
Multi-stage coolers stack multiple coolers, reaching temperatures down to
195–230 K in a three-stage cooler and down to 50 K with ten stages. As
the number of stages increases, the achievable temperature difference increases, but the heat-pumping capacity and overall efficiency decrease. TE
coolers are relatively inefficient, resulting in a significant amount of heat
that must be dissipated from the hot end. Typical currents range from 1–
8 A. Typical supply voltages are 2–15 V. These coolers can cool down heat
loads of 0.1–15 W.

5.8 Photoconductive Detectors

5.8.1 Introduction

Photoconductive detectors are photon detectors whose conductivity changes with the incident flux. This section derives equations describing the
responsivity, frequency response, noise, and D ∗ of a photoconductive de-
tector. Photoconductive detectors are well documented. 2,12,13,19,67

5.8.2 Photoconductive detector signal

The quantum efficiency is given by Equation (5.88), where d is the depth of the detector along the flux propagation direction. From Equation (5.88)
it would appear that by increasing the thickness d of the detector, a bet-
ter quantum efficiency can be obtained. However, when increasing d, the
incremental contribution of the absorbed flux per unit depth decreases.
Furthermore, large d also increases the shunt conductance effect of the
bulk of the material. The carriers can move freely throughout the volume
of the (deeper) detector, resulting in a smaller change in conductance. The
optimum detector thickness is approximately d = 1/α, achieving a bal-
ance between absorption and carrier shunt conductance in the bulk of the
material.
The conductivity of the detector bulk material is given by

\sigma = q(n_e\mu_e + n_h\mu_h), (5.89)

where σ is the conductivity in [1/(Ω·cm)], q is the electronic charge, n is the carrier density in [quanta/cm3], and μ is the carrier mobility in [cm2/(s·V)].
The subscripts e and h denote electron and hole quantities, respectively. In-
cident photon flux creates free carriers in the material Δn, resulting in a
change in conductivity of
Δσ = q(Δne μe + Δnh μh ). (5.90)
Photons absorbed in intrinsic semiconductor material result in an equal
number of holes and electrons (Δne = Δnh ) because the material has rela-
tively high purity with very few traps. In extrinsic detectors, the number
of hole carriers and electron carriers are not equal because the extrinsic
impurities act as traps for one of the two types of carriers.
The resistance in [Ω] of the detector is given by
R_d = \frac{l}{\sigma wd} = \frac{k}{\sigma}, (5.91)

where k = l/(wd) is a geometry factor, l is the length of the detector along the direction between the two external electrodes, w is the width, and d is the thickness of the detector; see Figure 5.22. The change in resistance due to photon-excited carriers is

\partial R_d = -\frac{k\,\partial\sigma}{\sigma^2}, (5.92)
leading to (for intrinsic detectors)

\frac{\partial R_d}{R_d} = -\frac{\partial\sigma}{\sigma} = -\frac{q(\mu_e + \mu_h)\Delta n}{\sigma}, (5.93)
where Δn is the number of free photon-excited electron-hole pairs. Δn
depends on the change in incoming photon flux ΔΦ p [q/s] or irradiance
ΔEq in [q/(s·m2 )], the electron lifetime τe , the hole lifetime τh , and the
detector quantum efficiency η by
\Delta n = \frac{\eta\Delta\Phi_p\tau_e}{lwd} = \frac{\eta\Delta E_q\tau_e}{d}, (5.94)
Figure 5.22 Photoconductive detector geometry and bias circuitry.

hence
\frac{\partial R_d}{R_d} = -\frac{q(\mu_e\tau_e + \mu_h\tau_h)\eta\,\Delta\Phi_p}{lwd\sigma}. (5.95)

The flux in Equation (5.95) is defined in photon quantities. Assuming monochromatic detector operation, the flux in radiometric terms Φe = Φq hν can be substituted to yield the fractional change in resistance

\frac{\partial R_d}{R_d} = -\frac{q(\mu_e\tau_e + \mu_h\tau_h)\eta\,\Delta\Phi_e\,\lambda}{hclwd\,q(n_e\mu_e + n_h\mu_h)} (5.96)
= -\frac{(\mu_e\tau_e + \mu_h\tau_h)\eta\lambda}{hclwd(n_e\mu_e + n_h\mu_h)}\,\Delta\Phi_e, (5.97)
where ΔΦe is the monochromatic incoming flux in [W], ne and nh are the
carrier densities in [quanta/cm3 ], and μe and μh are the carrier mobility
in [cm2 /(s·V)]. The subscripts e and h denote electron and hole quantities,
respectively; 0 ≤ η ≤ 1 is the detector quantum efficiency; and l, w, and d
are the detector dimensions in length, width, and depth.

5.8.3 Bias circuits for photoconductive detectors

The photoconductive detector is biased as indicated in Figure 5.22, with a constant-voltage bias supply and a load resistor in series with the detector. The bias condition is i = V_bias/(R_L + R_d) and V_d = V_bias R_d/(R_L + R_d). Analysis shows that

\frac{\partial V_d}{\partial R_d} = -\frac{V_{bias} R_L}{(R_L + R_d)^2}; (5.98)

the voltage responsivity with units [V/W] now becomes

R_v = \frac{\partial V_d}{\partial\Phi_e} = -\frac{V_{bias} R_L R_d}{(R_L + R_d)^2}\,\frac{q(\mu_e\tau_e + \mu_h\tau_h)\lambda\eta}{hclwd\sigma}. (5.99)
The optimal choice for R_L is such that ∂R_v/∂R_L = 0, which requires R_L = R_d; then,

R_v = \left(\frac{-R_L}{R_L + R_d}\right)\left(\frac{V_d(\mu_e\tau_e + \mu_h\tau_h)}{\sigma dlw}\right)\left(\frac{q\lambda\eta}{hc}\right) (5.100)
= \left(\frac{-R_L R_d}{R_L + R_d}\right)\left(\frac{V_d(\mu_e\tau_e + \mu_h\tau_h)}{l^2}\right)\left(\frac{q\lambda\eta}{hc}\right) (5.101)
= G_c\, G_{ph}\, R. (5.102)
The derivation identifies three distinct terms: (a) the bias circuit gain Gc
in units of [Ω] or [V/A], (b) the unitless bulk material photoconductive
gain G ph , and (c) the photocurrent responsivity R in units [A/W]. The
photoconductive gain can be written
G_{ph} = \frac{V_d(\mu_e\tau_e + \mu_h\tau_h)}{l^2} = \frac{\varepsilon(\mu_e\tau_e + \mu_h\tau_h)}{l}, (5.103)
where ε is the electric field across the detector in [V/m]. ε(μe τe + μh τh ) has
units of velocity (carrier velocity), so that l/[ε(μe τe + μh τh )] represents the
transit time the carrier needs to travel along the length of the detector, τt .
The photoconductive gain is then given by
G_{ph} = \frac{\tau}{\tau_t}, (5.104)
which is the shortest carrier lifetime divided by the transit time. G ph there-
fore represents the number of times the carriers can move across the length
of the detector within its average lifetime. Typical values vary from 0.5–1.0
for silicon to 103 –104 for HgCdTe.
Equation (5.100) shows that the responsivity can be increased by re-
ducing the equilibrium conductivity of the detector. The conductivity can
be reduced by reducing the number of free carriers under equilibrium con-
ditions. This can be achieved by cooling the detector and by reducing the
incident background flux on the detector.
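The fragment below is a minimal sketch of Equations (5.100) to (5.104) for the matched-load case R_L = R_d; all device parameters are assumed illustrative values:

```python
# Minimal sketch of Equations (5.100)-(5.104): voltage responsivity as the
# product of circuit gain, photoconductive gain, and photocurrent
# responsivity. All device parameters are assumed illustrative values.
h = 6.62607015e-34   # Planck constant [J.s]
c = 2.99792458e8     # speed of light [m/s]
q = 1.602176634e-19  # electron charge [C]

wavelength = 10e-6   # operating wavelength [m]
eta = 0.6            # assumed quantum efficiency
R_i = q * eta * wavelength / (h * c)  # photocurrent responsivity [A/W]

tau = 1e-6           # assumed carrier lifetime [s]
tau_t = 1e-8         # assumed carrier transit time [s]
G_ph = tau / tau_t   # photoconductive gain, Equation (5.104)

RL = Rd = 100.0      # matched load and detector resistance [ohm]
G_c = RL * Rd / (RL + Rd)  # circuit gain magnitude [V/A]

print(f'Ri = {R_i:.2f} A/W, Gph = {G_ph:.0f}, Rv = {G_c * G_ph * R_i:.0f} V/W')
```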

5.8.4 Frequency response of photoconductive detectors

The frequency response of a photoconductive detector is determined by the carrier lifetime. Ignore the thermally generated carriers and assume
that the carrier lifetime is constant and independent of carrier concentra-
tion. If the number of carriers under equilibrium conditions (no illumi-
nation) is given by n0 , the variation in the number of carriers due to the
optical excitation can be denoted by Δn(t) around the nominal n0 , i.e.,
n(t) = Δn(t) + n0 . The change in the variation of optically excited carriers
is given by
\frac{d\Delta n(t)}{dt} = \eta\Phi_q(t) - \frac{\Delta n(t)}{\tau}, (5.105)
where ηΦq is the optically generated carrier rate, η is the quantum ef-
ficiency, τ is carrier lifetime, Φq (t) is the flux incident on the detector in
[q/s], and Δn(t)/τ is the rate of recombination of free (optically generated)
carriers. Taking the Fourier transform on both sides of Equation (5.105),
i2\pi f\,\Delta N(f) = \eta\Phi_q(f) - \frac{\Delta N(f)}{\tau}, (5.106)
\Delta N(f)\left(i2\pi f + \frac{1}{\tau}\right) = \eta\Phi_q(f), \text{ and} (5.107)
\Delta N(f) = \frac{\eta\Phi_q(f)\,\tau}{1 + i2\pi f\tau}, (5.108)
where f is frequency, ΔN ( f ) is the Fourier transform of the number of
free carriers due to optical excitation, Φq ( f ) is the Fourier transform of
the photon flux incident on the detector, and τ is the carrier lifetime. The
frequency response of the detector is then simply
H(f) = \frac{\Delta N(f)}{\eta\Phi_q(f)} = \frac{\tau}{1 + i2\pi f\tau}. (5.109)

Equation (5.108) describes the frequency response of the photon-to-carrier conversion process, and therefore the frequency response of the
detector output signal relative to the incident optical flux. The responsiv-
ity [Equation (5.103)] of the detector can be increased by increasing the
carrier lifetime. However, increasing the carrier lifetime reduces the high-
frequency response of the detector [Equation (5.109)].
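A minimal numerical sketch of Equation (5.109), with an assumed carrier lifetime, shows the single-pole roll-off and the associated cutoff frequency:

```python
# Minimal sketch of Equation (5.109): normalized frequency response of the
# photon-to-carrier conversion, for an assumed carrier lifetime.
import numpy as np

tau = 1e-6   # assumed carrier lifetime [s]
f = np.array([1e3, 1e4, 1e5, 1e6])   # frequencies [Hz]

H = tau / (1 + 1j * 2 * np.pi * f * tau)  # Equation (5.109)
print(f'-3 dB cutoff = {1 / (2 * np.pi * tau):.2e} Hz')
print('normalized |H(f)| =', np.round(np.abs(H) / tau, 3))
```

The trade-off discussed above is visible directly: a larger τ raises the responsivity but moves the cutoff 1/(2πτ) to lower frequencies.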

5.8.5 Noise in photoconductive detectors

There are four components that contribute to the total noise in the photoconductive detector: Johnson noise, 1/f noise, g-r noise due to optical flux, and g-r noise due to thermally excited carriers: 19

i_t^2 = 4k\left(\frac{T_d}{R_d} + \frac{T_L}{R_L}\right) + \frac{k_1 I^\alpha}{f^\beta} + 4q^2\eta\Phi_{qb}G_{ph}^2 + 4g_\theta\, q\, G_{ph}^2. (5.110)

IR photoconductive detectors are normally cooled as explained in Section 5.7. Suppose further that the detector is not used at frequencies where 1/f noise dominates, k_1 I^\alpha/f^\beta \ll i_t^2; then two performance-limiting noises are present: generation–recombination-noise-limited operation and Johnson-noise-limited operation.
5.8.5.1 Generation–recombination-noise-limited operation

In order to obtain generation–recombination-noise-limited operation, the Johnson noise i_J must be much less than the g-r noise i_gr, i_J ≪ i_gr, or

4q^2\eta\Phi_{qb}G_{ph}^2 \gg 4k\left(\frac{T_d}{R_d} + \frac{T_L}{R_L}\right). (5.111)
The g-r noise does not depend on the detector temperature. Hence, generation–recombination-noise-limited performance can be obtained by reducing the Johnson noise by cooling the detector and load resistor to a temperature T_L = T_d = T, where

T \ll \frac{q^2 G_{ph}^2\,\eta\Phi_{qb}\,R_{eff}}{k}, (5.112)

and R_eff is the effective parallel resistance of the detector and load resistor.
The NEP of the detector in [W] is then (see Section 7.1)

NEP_\lambda = \frac{\text{noise}}{R_v} (5.113)
= \frac{i_{gr}\sqrt{\Delta f}\,R_{eff}}{R_v} (5.114)
= \frac{2hc}{\lambda}\sqrt{\frac{A_d\,\Delta f \int_0^{\lambda_c} E_q\,d\lambda}{\eta}}. (5.115)

Then the D∗ is given by (see Section 7.1)

D^*(\lambda) = \frac{\sqrt{A_d\,\Delta f}}{NEP} (5.116)
= \frac{\lambda}{2hc}\sqrt{\frac{\eta}{\int_0^{\lambda_c} E_q\,d\lambda}}, (5.117)

where Ad is the area of the detector. Note that the D ∗ depends only
on the background flux Eq ; this condition is also known as background-
limited operation. The dashed line in Figure 5.19 was calculated with
Equation (5.117).
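The fragment below is a minimal sketch of Equation (5.117); the integrated background photon irradiance is an assumed illustrative value:

```python
# Minimal sketch of Equation (5.117): background-limited (BLIP) D* for a
# photoconductive detector. The background irradiance is an assumed value.
import numpy as np

h = 6.62607015e-34   # Planck constant [J.s]
c = 2.99792458e8     # speed of light [m/s]

wavelength = 10e-6   # operating wavelength [m]
eta = 0.6            # assumed quantum efficiency
Eq_bg = 1e21         # assumed background photon irradiance, integrated
                     # over 0..lambda_c [q/(s.m^2)]

D_star = wavelength / (2 * h * c) * np.sqrt(eta / Eq_bg)  # [m.sqrt(Hz)/W]
print(f'D* = {D_star * 100:.2e} cm.sqrt(Hz)/W')  # in customary Jones units
```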

5.8.5.2 Johnson-noise-limited operation

The detector is Johnson-noise limited if

\frac{T}{R_{eff}} \gg \frac{q^2 G_{ph}^2\,\eta\Phi_{qb}}{k}, (5.118)
where T_L = T_d = T, and R_eff is the effective parallel resistance of the detector and load resistor. If the load resistor is at the same temperature as the detector, T_L = T_d, then

i_J^2 = 4k\left(\frac{T_d}{R_d} + \frac{T_L}{R_L}\right) = \frac{4kT_d}{R_{eff}}. (5.119)
The NEP of the detector in [W] is then

NEP_\lambda = \frac{\text{noise}}{R_v} (5.120)
= \frac{i_J\sqrt{\Delta f}\,R_{eff}}{R_v} (5.121)
= \frac{hc}{q\eta\lambda G_{ph}}\sqrt{\frac{4kT_d\,\Delta f}{R_{eff}}}, (5.122)

and the D∗ is given by (see Section 7.1)

D^*(\lambda) = \frac{\sqrt{A_d\,\Delta f}}{NEP} (5.123)
= \frac{q\lambda\eta G_{ph}}{2hc}\sqrt{\frac{R_{eff}\,A_d}{kT_d}}. (5.124)

The factor G_ph√(R_eff A_d) is not intrinsic to the material; it depends on the detector geometry and size. HgCdTe has low R_eff but very high gain G_ph, resulting in a high D∗.

5.9 Photovoltaic Detectors

5.9.1 Photovoltaic detector operation

A photovoltaic detector is a device where photon-generated carriers are converted to a current and swept out of the device by an internal potential
inherent in the device’s construction. The device described in this section
is a p-n diode, also known as a photodiode. 2,3,7,10,12,16,19,22,46,67,68 Electron-
hole pairs formed as minority carriers in the depletion region or within
a diffusion length of the space-charge volume depletion region contribute
to the current flow. The photocurrent generated in the p-n diode creates
a reverse current flow, shifting the current–voltage curve toward a nega-
tive current. Figure 5.23 provides a conceptual illustration of p-n and pin
photodiode construction.
Figure 5.23 Photovoltaic detector construction: (a) p-n diode and (b) pin diode (adapted 68).

If an n-impurity material is doped with p-type dopants (more commonly done), or if very pure (intrinsic) semiconductor material is doped

with p-type and n-type dopants (less commonly done), the interface be-
tween these two doped regions is called the p-n junction. The n material
has an excess of electrons, leaving a positive ion in the crystal lattice; in
contrast, the p material accepts an electron from the lattice, leaving a neg-
atively charged ion. The free electrons and holes will diffuse across the
junction, leaving a region with positive and negative ions, but with no free
charge carriers — this is called the depletion region. There are no free
carriers in the depletion region, resulting in an internal field across the
depletion region. Real-world detectors are complex devices designed to
locate the depletion layer as close to the detector front surface as possible.
Various techniques are also used to optimize detector size and respon-
sivity, and to reduce noise. Figure 5.24 shows the electronic and energy states in and around the p-n junction. Note in particular the concentration profiles along the depth of the p-n junction. 46
If a photon is absorbed in the depletion region, the resulting electron-
hole pair forms minority carriers in the depletion region. These minority
carriers are accelerated and swept out of the depletion region under the
built-in electric field present across the depletion region. The resultant
minority current results in a measurable current on the device’s terminals.
Figure 5.24 The p-n diode junction, energy diagrams, and energy bands.

The operation of a photovoltaic detector and a photoconductive detector differs in the sense that any carrier pair created in the photoconductive detector contributes to lowering the detector resistance. In a photoconductive

detector, there is no requirement that the photon be absorbed in any par-


ticular spatial region in the detector because an electrical field is applied
over the whole detector. In the photovoltaic detector, the field only exists
across the depletion region (in the absence of an external bias voltage).
The photons must therefore be absorbed within or near the depletion region,
otherwise the electron-hole pair recombines without ever contributing to
the current flow. The diffusion length is a material property that indicates
the radius of a sphere beyond which the electron-hole pair will probably
recombine within the carrier lifetime. Electrons absorbed within one dif-
fusion length from the depletion region will reach the depletion region
and be swept out as signal. For any electron-hole pair created beyond one
diffusion length from the depletion region, there is a near-zero probability
that the hole or electron will diffuse into the depletion region and will be
detected. The diffusion lengths for holes and electrons are, respectively,

L_h = \sqrt{kT\mu_h\tau_h/q}   (5.125)
L_e = \sqrt{kT\mu_e\tau_e/q},   (5.126)

where \mu is the carrier mobility, and \tau is the carrier lifetime.
The spectral responsivity of a photovoltaic detector is described in
Equation (5.5). The quantum efficiency shown in Equation (5.88) assumes
that all photons absorbed in the material are converted to electrons. This
condition is not met in the case of photodiodes because electron-hole pairs
formed beyond the diffusion length of the depletion region do not con-
tribute to current flow. For a photovoltaic detector the quantum efficiency
is given by
\eta = (1 - \rho)(e^{-\alpha d_1} - e^{-\alpha d_2}),   (5.127)
where d1 and d2 > d1 define the depth of the active region (depletion
width plus twice the diffusion length). From Equation (5.127) it would
appear that by increasing the thickness of the depletion region d2 − d1 of
the detector, a better quantum efficiency can be obtained. This is the prime
motivation behind the silicon pin detector, where an intrinsic layer is used
to increase the depletion width.
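The trade-off between active-region placement and width is easily explored numerically. The following minimal Python sketch evaluates Equation (5.127); the reflectance, absorption coefficient, and depth values are assumed purely for illustration.

import numpy as np

def quantum_efficiency(rho, alpha, d1, d2):
    # Eq. (5.127): only photons absorbed between depths d1 and d2 contribute
    return (1.0 - rho) * (np.exp(-alpha * d1) - np.exp(-alpha * d2))

rho = 0.3     # assumed front-surface reflectance
alpha = 1e5   # assumed absorption coefficient [1/m]
d1 = 2e-6     # assumed depth where the active region starts [m]
for width in (10e-6, 30e-6, 60e-6):
    eta = quantum_efficiency(rho, alpha, d1, d1 + width)
    print(f"active width {width * 1e6:4.0f} um: eta = {eta:.3f}")

Widening the active region increases the quantum efficiency, which is precisely the effect the pin structure exploits.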

5.9.2 Diode current–voltage relationship

The I-V curve describes the operating point of a diode by relating the
voltage across the terminals and the current flowing out of the terminals.
All diodes have this behavior (see also Section 9.3.2.4). The diode current is
related to the voltage across the device by the nonlinear function: 2,3,10,12,16

I = I_{sat}\,e^{qV/(kT\beta)} - I_{sat} - I_{ph},   (5.128)



where I_{ph} is the photocurrent resulting from the absorbed photons [Equation (5.4) or (5.5)], I_{sat} is the reverse-bias-saturation current (I > 0 implies forward-bias current), R_0 is the dark resistance, V is the voltage across the diode (V > 0 implies forward bias), T is temperature, q is the charge on an electron, k is the Boltzmann constant, and \beta is the material-dependent ‘nonideality’ factor 2,12 (with value between 1 and 2). When the diffusion current dominates (e.g., silicon), \beta = 1. If the recombination current dominates, \beta = 2. The symbol for the reverse-bias-saturation current I_{sat} is also written as I_0 in some texts.
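A few lines of Python illustrate Equation (5.128); the saturation current and photocurrent values below are assumed for illustration only.

import numpy as np

k = 1.380649e-23     # Boltzmann constant [J/K]
q = 1.602176634e-19  # electron charge [C]

def diode_current(V, Isat, Iph, T=300.0, beta=1.0):
    # Eq. (5.128): diffusion current minus saturation and photocurrents
    return Isat * np.exp(q * V / (k * T * beta)) - Isat - Iph

# Assumed values: Isat = 10 nA, Iph = 1 uA, diffusion-dominated (beta = 1)
for V in np.linspace(-0.2, 0.05, 6):
    print(f"V = {V:+.3f} V -> I = {diode_current(V, 1e-8, 1e-6):+.3e} A")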
The first term, I_{sat}\,e^{qV/(kT\beta)}, is the diffusion current, comprising the
electrons (holes) in the conduction (valence) band of the n-type (p-type)
material that diffuse into the junction with sufficient energy to overcome
the potential barrier between the n- and p-type regions. The second term,
Isat , results from the electrons (holes) in the p-type (n-type) material that
are thermally excited. If these thermally generated electrons (holes) en-
counter the junction, they are attracted into the n-type (p-type) region,
independent of the existence of an applied voltage. Note that the pho-
toinduced current flows in the same direction as the reverse-bias current
because the photoinduced carriers are minority carriers (see Figure 5.25).
I_{sat} = \beta kT/(qR_0) is the reverse-saturation current, given by 2,16

I_{sat} = qA_d\left(\frac{n_p D_e}{L_e} + \frac{p_n D_h}{L_h}\right),   (5.129)
where A_d is the detector junction area in [m^2], n_p is the electron concentration in the p-type material in [cm^{-3}], D_e is the diffusion constant for electrons in [cm^2/s], L_e is the diffusion length for electrons in [cm], p_n is the hole concentration in the n-type material in [cm^{-3}], D_h is the diffusion constant for holes in [cm^2/s], and L_h is the diffusion length for holes in [cm]. Using Equations (5.76) and (5.77) leads to
I_{sat} = qA_d\left(\frac{n_i^2 D_e}{n_a L_e} + \frac{n_i^2 D_h}{n_d L_h}\right).   (5.130)

The diffusion length L is related to the diffusion constant D and carrier lifetime \tau by L = \sqrt{D\tau}, hence

I_{sat} = qA_d\left(\frac{n_i^2 L_e}{n_a \tau_e} + \frac{n_i^2 L_h}{n_d \tau_h}\right).   (5.131)

5.9.3 Bias configurations for photovoltaic detectors

The photovoltaic detector can be biased in three quadrants and five operating conditions. Strong forward bias is applied to diodes to generate light (LEDs and laser diodes). 56,69 This section only considers reverse or small forward bias, as indicated in Figure 5.25. All of the bias conditions described in the following sections are commonly found in various applications.

Figure 5.25 Bias configurations and energy bands for various operating conditions.

5.9.3.1 Reverse-bias operation of photovoltaic detectors

Under reverse bias, the electron-hole pair generated by the photon forms
minority carriers in the depletion region. The hole and electron are swept
out by the electric field in the device. All of the electron-hole pairs are ex-
ternally observed as a current flowing through the device; see Figure 5.26.
In Figure 5.26 minority carriers are indicated by small circles, and major-
ity carriers are indicated by large circles. If the reverse-bias voltage is too
high, the detector may enter the avalanche regime. Some detectors are
designed to operate in the avalanche regime, but special semiconductor
and electronics design techniques are required to construct an avalanche
device that can operate reliably. The avalanche detector is also very sensi-
tive to bias voltage and temperature variations, and special power-supply
techniques are required to drive an avalanche detector reliably.

Figure 5.26 The illuminated p-n junction under reverse-bias conditions.
The diode current–voltage relationship under reverse bias, V = -V_{bias}, is as follows:

I = I_{sat}\,e^{-qV_{bias}/(kT\beta)} - I_{sat} - I_{ph}.   (5.132)
If the device reverse-saturation current is well behaved, e^{-qV_{bias}/(kT\beta)} \to 0 and I = -I_{sat} - I_{ph}. The detector signal is proportional to the photon flux plus the saturation current. The saturation current depends strongly on temperature and greatly influences the detector operation and performance. The reverse-bias mode is commonly used to reduce the depletion-region capacitance, because an increase in reverse-bias voltage widens the depletion layer, C = \epsilon A/(d_2 - d_1). For a silicon detector the depletion-layer capacitance reduces to approximately 70% of its value for every doubling in reverse-bias voltage (for reverse voltages beyond a few volts). The wider depletion layer also results in improved quantum efficiency because a larger volume of the semiconductor is under the depletion-region potential.
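As an illustration of this voltage dependence, the sketch below assumes the simple abrupt-junction model, in which the capacitance scales as the reciprocal square root of the total junction voltage; the zero-bias capacitance and built-in voltage are assumed values.

import numpy as np

def depletion_capacitance(C0, Vbi, Vr):
    # Abrupt-junction model (an assumption): C = C0*sqrt(Vbi/(Vbi + Vr))
    return C0 * np.sqrt(Vbi / (Vbi + Vr))

C0, Vbi = 20e-12, 0.7   # assumed: 20 pF at zero bias, 0.7-V built-in voltage
for Vr in (2.5, 5.0, 10.0, 20.0):
    C = depletion_capacitance(C0, Vbi, Vr)
    print(f"Vr = {Vr:5.1f} V: C = {C * 1e12:5.2f} pF")

For reverse voltages beyond a few volts, each doubling of the bias indeed reduces the capacitance to roughly 70% of its previous value in this model.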
The reverse-biased detector electronic circuit model is shown in Figure 5.27.

Figure 5.27 Detector circuit model under all of the bias conditions. I_ph is the signal current, I_n is the noise current, I_l is the temperature-dependent leakage (dark) current, I_sat is the saturation current, C_d is the depletion-layer capacitance, C_p and L_p are the packaging and lead capacitance and inductance, R_d is the detector dynamic resistance (partially forward-biased diode), R_s is the series lead, contact, and bulk resistance, R_b is the detector bias resistance, and R_L is the load resistance.

Detector data sheets sometimes indicate typical values for these
various circuit elements, but the designer must often estimate values for
some detector parameters. The detector capacitance depends strongly on
the reverse-bias voltage as indicated above. The detector dynamic resis-
tance Rd is equal to the reciprocal of the slope of the I-V curve under
reverse bias. Silicon pin detectors have dynamic resistance values of sev-
eral MΩ. Detectors made from other detector materials, such as InSb, can
have much-lower dynamic resistances. The leakage current is the reverse-
saturation current. Note that if the detector has a low dynamic resistance,
an increase in reverse-bias voltage will also increase the leakage current
through the detector. This is not evident in silicon pin detectors, but detector performance degrades under high reverse voltage for InSb and HgCdTe detectors.

5.9.3.2 Open-circuit operation of photovoltaic detectors

The photon flux creates minority carriers that are swept out across the
depletion layer. However, once out of the depletion layer, the carriers have
nowhere to go, and an accumulation of charge takes place. The increase
in charge results in an increase of forward voltage across the junction.
As the forward voltage across the junction increases, the forward current
through the device increases, depleting the accumulation of charge. Under
conditions of infinite load impedance, the reverse photocurrent results in
exactly the same magnitude forward current, resulting in a zero net output current. Note that in this case the voltage across the diode is equal to the depletion-region barrier voltage. The diode therefore sinks all of the photocurrent under its own voltage; none flows out of the device (see Figure 5.28).

Figure 5.28 The illuminated p-n junction under open-circuit conditions.
The diode load is a high impedance, such that the detector external current approaches zero, I = 0. Following from Equation (5.128),

V = \frac{\beta kT}{q}\ln\left(\frac{I_{ph}}{I_{sat}} + 1\right)   (5.133)
  = \frac{\beta kT}{q}\ln\left(\frac{\eta q\lambda\Phi_e}{hcI_{sat}} + 1\right).   (5.134)

The open-circuit output voltage is proportional to the logarithm of the incident flux. This bias mode can therefore be used to build light meters that respond over very wide ranges in a logarithmic fashion.
The detector circuit model for this mode is shown in Figure 5.27. In
this mode, the depletion region is very ‘thin’ resulting in a high depletion
capacitance. The detector is therefore much slower than under reverse-bias
conditions. The detector output voltage is temperature dependent, as can
be seen in Equation (5.134) (most logarithmic converters relying on a diode
I-V characteristic are very sensitive to temperature variations). It is very
noisy, with mainly flicker noise. The thinner depletion region also results
in a smaller quantum efficiency.
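A brief Python sketch of Equation (5.133) demonstrates the logarithmic response; the saturation-current value is assumed for illustration.

import numpy as np

k = 1.380649e-23     # Boltzmann constant [J/K]
q = 1.602176634e-19  # electron charge [C]

def open_circuit_voltage(Iph, Isat, T=300.0, beta=1.0):
    # Eq. (5.133): the output voltage is logarithmic in the photocurrent
    return (beta * k * T / q) * np.log(Iph / Isat + 1.0)

# Assumed Isat = 10 nA; sweep the photocurrent over six decades
for Iph in np.logspace(-9, -3, 7):
    Voc = open_circuit_voltage(Iph, 1e-8)
    print(f"Iph = {Iph:.1e} A -> Voc = {Voc * 1e3:6.1f} mV")

At 300 K with beta = 1, each decade of flux adds approximately 59 mV to the output, which is the behavior exploited in wide-range light meters.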

5.9.3.3 Optimal power transfer in photovoltaic detectors

Silicon solar cells are operated in a mode to obtain maximum power trans-
fer. The solar cell is allowed to deliver power into a resistor, such that it is
neither short-circuit nor open-circuit. The circuit operation is halfway be-
tween the open-circuit and short-circuit bias conditions already discussed.
A high-value load resistor will cause the internal diode to ‘sink’ the pho-
tocurrent internal to the device. On the other hand, a zero-value load
impedance means that there is no power delivered into the load.

5.9.3.4 Short-circuit operation of photovoltaic detectors

In the short-circuit bias condition V = 0, hence I = -I_{ph}. The detector current is directly proportional to the photocurrent and therefore directly
proportional to the photon flux absorbed in the depletion region. If the
detector has high linearity due to high dynamic resistance (for example, a
silicon-pin diode), this bias condition can be used to construct an accurate
optical flux meter (also called a radiometer).
The detector circuit model for this mode is shown in Figure 5.27. Note
that in this case the voltage across the diode is zero, and the diode has
a high dynamic resistance. The dynamic resistance can be in the MΩ
ranges for a silicon-pin or small IR detector, but considerably lower for
large IR detectors. The capacitance of the depletion layer is high because
the depletion layer is not as wide as it can be under reverse bias.

5.9.4 Frequency response of a photovoltaic detector

The frequency response of a photovoltaic detector is determined by two factors: (1) the time taken by the carriers to move through the depletion layer and (2) (primarily) by the detector capacitance and load-resistance time constant.
If the number of carriers moving through the depletion layer under equi-
librium conditions, with no illumination, is given by n0 , the variation in the
number of carriers due to the optical excitation can be denoted by Δn(t)
around the nominal n0 , i.e., n(t) = Δn(t) + n0 . The change or variation
due to optically excited carriers is given by

\frac{d\Delta n(t)}{dt} = \eta\Phi_q(t) - \frac{\Delta n(t)}{\tau},   (5.135)

where ηΦq is the optically generated carrier rate, η is the quantum effi-
ciency, Φq (t) is the flux incident on the detector in [q/s], and Δn(t)/τ is
the rate of carriers moving through the depletion layer. Taking the Fourier
transform on both sides of Equation (5.135),
i2\pi f\,\Delta N(f) = \eta\Phi_q(f) - \frac{\Delta N(f)}{\tau}   (5.136)

\Delta N(f)\left(i2\pi f + \frac{1}{\tau}\right) = \eta\Phi_q(f)   (5.137)

\Delta N(f) = \frac{\eta\Phi_q(f)\,\tau}{1 + i2\pi f\tau},   (5.138)

where f is frequency, \Delta N(f) is the Fourier transform of the number of free carriers due to optical excitation, \Phi_q(f) is the Fourier transform of the photon flux incident on the detector, and \tau is the time taken for the carrier to move through the depletion layer. The frequency response of the detector is then

H(f) = \frac{\Delta N(f)}{\eta\Phi_q(f)} = \frac{\tau}{1 + i2\pi f\tau}.   (5.139)
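The single-pole response of Equation (5.139) is easily evaluated, as in the short sketch below; the 1-ns transit time is an assumed value, consistent with the silicon example that follows.

import numpy as np

def carrier_response(f, tau):
    # Eq. (5.139): single-pole carrier frequency response
    return tau / (1.0 + 1j * 2 * np.pi * f * tau)

tau = 1e-9                      # assumed 1-ns transit time
f3dB = 1.0 / (2 * np.pi * tau)  # -3-dB frequency, about 159 MHz
for f in (1e6, 1e8, f3dB, 1e9):
    Hn = abs(carrier_response(f, tau)) / tau   # magnitude relative to dc
    print(f"f = {f:10.3e} Hz: |H(f)|/|H(0)| = {Hn:.3f}")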

How long does the carrier take to move through the depletion layer?
Consider a silicon diode with the following characteristics. The carrier mo-
bilities for carriers in silicon are 500 (holes) and 1300 (electrons) cm2 /(V·s).
A typical silicon-pin detector depletion width is 50 µm under an applied
20-V reverse voltage. The carrier transit time can be calculated from the
above information to be of the order of 1 ns. This silicon detector should
be able to internally respond to signals well into hundreds of MHz and,
with some care, even into the GHz frequencies.
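The estimate can be repeated with a few lines of Python, using only the values quoted above.

# Transit time through a 50-um depletion layer at 20-V reverse bias:
# drift velocity v = mu*E, transit time t = d/v
d = 50e-6        # depletion width [m]
E = 20.0 / d     # electric field [V/m]
for name, mu_cm2 in (("holes", 500.0), ("electrons", 1300.0)):
    mu = mu_cm2 * 1e-4   # convert cm^2/(V.s) to m^2/(V.s)
    print(f"{name}: transit time = {d / (mu * E) * 1e9:.2f} ns")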
The operational speed of most photovoltaic detectors is limited by
their operation in real-life capacitive electronic circuits. The depletion re-
gion capacitance, together with the series resistance and load resistance,
provides a very tough design challenge if high operating speeds are re-
quired.

5.9.5 Noise in photovoltaic detectors

The photovoltaic detector creates shot noise by virtue of its potential bar-
rier across the depletion layer, and Johnson noise in its dynamic resistance.
The dynamic resistance at zero-bias voltage is given by

R_0 = \left(\frac{dI}{dV}\right)^{-1}\Bigg|_{V=0} = \frac{\beta kT}{qI_{sat}}.   (5.140)

The Johnson noise is then

i_n^2 = \frac{4kT}{R_0} = \frac{4qI_{sat}}{\beta} = \frac{2(2qI_{sat})}{\beta},   (5.141)
which has the same form as shot noise! Recall that the current in the
photovoltaic detector is given by

I = I_{sat}\,e^{qV/(kT\beta)} - I_{sat} - I_{ph}.   (5.142)

The first term is the diffusion current, the second term is the thermally
generated current, and the third term is due to the photocurrent. Each
of these terms is an independent current and creates noise statistically
independent from the other currents and must be added vectorially. 2 The
shot noise PSD in [A2 /Hz], created in the detector, is therefore given by

i_n^2 = 2qI_{sat}\,e^{qV/(kT\beta)} + 2qI_{sat} + 2qI_{ph}.   (5.143)

Using Equation (5.140), it follows that

i_n^2 = 2q\left(\frac{\beta kT}{qR_0}\,e^{qV/(kT\beta)} + \frac{\beta kT}{qR_0} + \eta q\Phi_q\right),   (5.144)
where \beta is the diode nonideality factor, T is temperature in [K], R_0 is the dynamic resistance under zero-bias conditions, and \Phi_q in [q/s] is the background flux falling on the detector, given by

\Phi_q = \int_0^{\lambda_c} \eta E_q(T_b)\,A_d\,d\lambda,   (5.145)

where the integral is calculated over the detector’s sensitive spectral range,
η is the detector quantum efficiency, Ad is the detector area in [m2 ], and Eq
is the background photon irradiance in [q/(m2 ·s)] for a thermally radiating
background at temperature T_b in [K]. The NEP of the detector in [W] is then

NEP_\lambda = \frac{\textrm{noise}}{R_i}   (5.146)
            = \frac{hc}{\eta q\lambda}\sqrt{i_n^2\,\Delta f}   (5.147)
            = \frac{hc}{\eta q\lambda}\sqrt{2q\Delta f\left(\frac{\beta kT}{qR_0}\,e^{qV/(kT\beta)} + \frac{\beta kT}{qR_0} + \eta q\Phi_q + \frac{2kT}{qR_0}\right)}.   (5.148)
The detector D^* is given by

D^*_\lambda = \frac{\sqrt{A_d\,\Delta f}}{NEP_\lambda}   (5.149)
            = \frac{\eta q\lambda\sqrt{A_d\,\Delta f}}{hc\sqrt{i_n^2\,\Delta f}}   (5.150)
            = \frac{\eta q\lambda\sqrt{A_d\,\Delta f}}{hc\sqrt{2q\Delta f\left(\frac{\beta kT}{qR_0}\,e^{qV/(kT\beta)} + \frac{\beta kT}{qR_0} + \eta q\Phi_q + \frac{2kT}{qR_0}\right)}}   (5.151)
            = \frac{\eta\lambda}{hc\sqrt{2}\sqrt{\frac{\beta kT}{q^2A_dR_0}\,e^{qV/(kT\beta)} + \frac{\beta kT}{q^2A_dR_0} + \eta E_q + \frac{2kT}{q^2R_0A_d}}}.   (5.152)
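The following Python sketch evaluates Equations (5.140), (5.144), and (5.146)–(5.149) at zero bias. All numerical inputs are assumed, illustrative values (loosely InSb-like) and are not results from the text.

import numpy as np

h, c = 6.62607015e-34, 2.99792458e8
k, q = 1.380649e-23, 1.602176634e-19

def pv_noise_nep_dstar(wl, eta, R0, Ad, Phiq, T, df, V=0.0, beta=1.0):
    Isat = beta * k * T / (q * R0)                 # from Eq. (5.140)
    # Shot noise of Eq. (5.144) plus the Johnson term of Eq. (5.148)
    in2 = 2 * q * (Isat * np.exp(q * V / (k * T * beta))
                   + Isat + eta * q * Phiq) + 4 * k * T / R0
    nep = (h * c / (eta * q * wl)) * np.sqrt(in2 * df)
    return in2, nep, np.sqrt(Ad * df) / nep        # D* from Eq. (5.149)

# Assumed: 5-um wavelength, R0 = 180 kOhm, 200-um pixel, 80 K,
# 1e14 background photons/s on the detector, 100-Hz bandwidth
in2, nep, dstar = pv_noise_nep_dstar(5e-6, 0.6, 180e3, (200e-6) ** 2,
                                     1e14, 80.0, 100.0)
print(f"i_n^2 = {in2:.2e} A^2/Hz, NEP = {nep:.2e} W,"
      f" D* = {dstar * 100:.2e} cm.sqrt(Hz)/W")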

5.9.5.1 Background-limited operation of photovoltaic detectors

If the noise induced by the background flux exceeds the combined thermal-
generation current noise, diffusion current noise, and Johnson noise,
\eta E_q \gg \frac{\beta kT}{q^2A_dR_0}\,e^{qV/(kT\beta)} + \frac{\beta kT}{q^2A_dR_0} + \frac{2kT}{q^2R_0A_d},   (5.153)

the D^* is given by

D^* = \frac{\eta\lambda}{hc\sqrt{2\eta\int_0^{\lambda_c}E_q\,d\lambda}} = \frac{\lambda}{hc}\sqrt{\frac{\eta}{2\int_0^{\lambda_c}E_q\,d\lambda}}.   (5.154)

Compare this equation with Equation (5.117) for the photoconductive detector. The background-limited D^* for a photoconductive and photovoltaic detector differs only by a factor of 1/\sqrt{2}. Equation (5.154) can be
used to determine the upper-limit D ∗ that can be expected from any de-
tector for a given background. Figure 5.29 shows the theoretical limit for
an ideal detector with given cutoff wavelength λc against a background of
given temperature. In practice, all real detectors fall short of this theoreti-
cal prediction.
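A point on Figure 5.29 can be checked with a few lines of Python. The sketch below numerically integrates the Planck photon exitance up to the cutoff wavelength and applies Equation (5.154); starting the integration at 1 µm is an assumption (shorter wavelengths contribute negligibly for the background temperatures considered here).

import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def blip_dstar(wlc, Tb, eta=1.0, n=5000):
    # Eq. (5.154) for a hemispherical (2*pi sr) background at Tb [K]
    wl = np.linspace(1.0e-6, wlc, n)
    # Planck spectral photon exitance [photons/(m^2.s.m)]
    Mq = 2 * np.pi * c / (wl ** 4 * (np.exp(h * c / (wl * k * Tb)) - 1.0))
    Eq = np.sum(Mq) * (wl[1] - wl[0])   # integrated photon irradiance
    return (wlc / (h * c)) * np.sqrt(eta / (2.0 * Eq))

for wlc in (5e-6, 10e-6):
    print(f"cutoff {wlc * 1e6:4.1f} um, 300-K background: "
          f"D* = {blip_dstar(wlc, 300.0) * 100:.2e} cm.sqrt(Hz)/W")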

5.9.5.2 Detector-limited operation: short-circuit mode

Consider a diode with zero-bias voltage, e^{qV/(\beta kT)} \to 1. If the noise induced by the background is less than the internal detector noise,

\frac{2kT}{q^2R_0A_d} + \frac{2\beta kT}{q^2R_0A_d} \gg \eta\int_0^{\lambda_c}E_q\,d\lambda,   (5.155)

then the D^* is given by

D^* = \frac{\eta\lambda}{hc\sqrt{2}\sqrt{\frac{2\beta kT}{q^2R_0A_d} + \frac{2kT}{q^2R_0A_d}}} = \frac{q\eta\lambda}{2hc}\sqrt{\frac{A_dR_0}{(\beta+1)kT}}.   (5.156)
Figure 5.29 Theoretical limits to D^* for given detector cutoff wavelength and background temperatures.

Equation (5.156) indicates that for a given temperature T and quantum efficiency \eta, the detector D^* is determined by R_0A_d, where A_d is the detector area, and R_0 is the zero-bias dynamic resistance. The R_0A_d product is
an indication of the dark current flowing through the detector and there-
fore of the noise due to the dark current. Compare Equation (5.156) with
Equation (5.124) for a photoconductive detector.
At large reverse-bias voltage the potential barrier is increased, reduc-
ing the number of diffusion carriers crossing the junction. It would appear
that the noise can be reduced by applying a large reverse-bias voltage;
\frac{\beta kT}{q^2A_dR_0}\,e^{-qV/(kT\beta)} + \frac{\beta kT}{q^2A_dR_0} \to \frac{\beta kT}{q^2A_dR_0}.   (5.157)
In practice, this never materializes. At high reverse-bias voltage, the noise
increases due to 1/ f noise in Isat . Most demanding, low-noise, IR-detector
applications are designed to operate at near-zero-bias voltage. One ex-
ception to this rule is high-performance silicon pin diodes, where the in-
creased reverse-bias voltage has only a small effect on increasing the noise,
but significantly reduces the depletion layer capacitance.

5.9.5.3 Detector-limited operation: open-circuit mode

Under open-circuit bias conditions the detector drives an infinitely high impedance, and there is no external current flow. This implies that the internal diode must sink all of the photocurrent: the signal current flows twice through the junction, firstly during generation, and secondly during sinking. These two currents are statistically independent, so the noise powers add. Hence, the open-circuit bias configuration increases the noise power by a factor of two over the short-circuit configuration. The D^* is derived from Equation (5.154) as

D^* = \frac{\lambda}{2hc}\sqrt{\frac{\eta}{\int_0^{\lambda_c}E_q\,d\lambda}}.   (5.158)

Equation (5.158) confirms that the open-circuit bias mode is noisier than
the short-circuit bias mode.

5.9.6 Detector performance modeling

This section demonstrates the development of a simple detector model suitable for high-level system modeling as considered in this book. The
performance of the detector is described in terms of its figures of merit.
Most of these figures of merit are described in Sections 7.1.2 and 5.3.11. A
model is developed for a single-element InSb photovoltaic detector. The
equations used and material parameters are detailed in the following de-
scription. The material parameters are listed in Appendix A. The current–
voltage and spectral results are shown in Figure 5.30. The data and code
for this model are available on the pyradi website. 70
The steps followed are as follows:

1. Calculate the bandgap E_g of the InSb material at the detector operating temperature (80 K), using Equation (5.3), with values A = 6 × 10^-4 and B = 500.0 (Table A.6). The resulting bandgap is 0.233 eV (see the code sketch following this list).

2. The spectral absorption coefficient is calculated by using Equations (5.78) and (5.80) and the data in Table A.6. The spectral absorption coefficient is shown in Figure 5.30(a).

3. The quantum efficiency is calculated by using Equations (5.87) and (5.88), with a refractive index of 3.42 for the detector and 1.0 for air, a depletion layer thickness of 5 µm, and the spectral absorption coefficient calculated in the previous step. The spectral quantum efficiency is shown in Figure 5.30(b).

4. The spectral responsivity is calculated using the spectral quantum efficiency and Equation (5.5). The gain of the photovoltaic detector is 1.0. The spectral responsivity is shown in Figure 5.30(c).

Figure 5.30 InSb detector model results: (a) spectral absorption coefficient, (b) quantum efficiency, (c) spectral responsivity, (d) current–voltage relationship, (e) spectral NEP, and (f) spectral specific detectivity.

5. Calculate the spectral photon and radiant irradiance on the detector for a target blackbody source with unity emissivity and temperature T_s = 2000 K. The source area A_s = 33 mm^2 is located a distance R = 100 mm from the detector. The irradiance is calculated as

E_{s\lambda} = \epsilon_\lambda M_\lambda(T_s)\,\omega/\pi = M_\lambda(T_s)\,A_s/(\pi R^2),   (5.159)

where the spectral exitance is given by Equations (3.1) and (3.3) for radiant and photon exitance, respectively.
6. Calculate the spectral photon and radiant irradiance on the detector for
a hemispherical background blackbody source with unity emissivity
and temperature of 280 K. The calculation is done as for item 5 above
but for a hemispherical solid angle.
7. Calculate the current through the detector for source and background
irradiance using a simplified version of Equation (6.17):
i = A_d \int_0^\infty E_{s\lambda}\,R_\lambda\,d\lambda,   (5.160)
using the spectral responsivity calculated in item 4 above.
8. Calculate I_sat. First, calculate the carrier diffusion lengths L_e and L_h by using Equations (5.125) and (5.126). Second, calculate the intrinsic carrier concentration n_i using Equation (5.74). Equations (5.76) and (5.77), with the donor and acceptor concentrations, are used to determine the carrier concentrations. Finally, calculate the reverse-saturation current with Equation (5.131).
The values used in this calculation for the InSb detector are as follows:
electron mobility 7,12 100.0 cm2 /(V·s); electron lifetime 12 1 × 10−8 s; hole
mobility 12 1.0 cm2 /(V·s); hole lifetime 12 1 × 10−8 s; electron effective
mass 0.014 of electron mass; hole effective mass 0.43 of electron mass;
acceptor concentration 1 × 1016 m−3 ; donor concentration 1 × 1016 m−3 ;
Eg as calculated in item 1 above; detector temperature of 80 K; and
detector area 200 µm by 200 µm.
9. Calculate the current–voltage relationship using Equation (5.128). The
saturation current Isat is calculated as explained in item 8 above. The
photocurrent is determined from Equation (5.4) or (5.5). The value of
β is determined as explained in Section 5.9.2. The value for β = 1.7 is
used here because InSb operation is dominated by g-r current. 12
The current–voltage relationship can be calculated for different back-
ground flux values, such as ‘dark’ (I ph = 0), or for different irradiance
values, such as those calculated in items 5 and 6 above.
Figure 5.30(d) shows the current–voltage relationship for three cases:
(a) the dark current (i.e., no background flux), (b) hemispherical back-
ground flux at 280 K, and (c) the target flux. Note that the y axis is
logarithmic to show detail — negative values are shown positive because the logarithm of a negative number is not defined.

10. Equation (5.140) is used to calculate the detector’s dynamic resistance under zero-bias voltage, in this case using the reverse-saturation current calculated in item 8 above. Equation (5.141) calculates the Johnson noise. Equation (5.143) calculates the noise PSD for the three current components: (a) diffusion, (b) thermal excitation, and (c) photocurrent. Adding all four components in quadrature provides the noise in the detector.
In the example calculation, the value for R0 was found to be 180 kΩ.
Over the 100-Hz noise bandwidth, the Johnson noise is 1.563 × 10−12 A,
the shot noise is 3.956 × 10−12 A, and the total noise is 4.254 × 10−12 A.

11. The NEP is determined from Equation (5.25), with the spectral respon-
sivity calculated in item 4 and the noise current calculated in item 10.
The NEP is shown in Figure 5.30(e).

12. The D^* is calculated from the area of the detector, the noise bandwidth, the noise, and the responsivity, by Equation (5.26). The resulting D^* is shown in Figure 5.30(f).
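The complete model is available on the pyradi website; 70 the minimal Python sketch below reproduces only items 1 and 10 above. It assumes that Equation (5.3) has the familiar Varshni form E_g(T) = E_g(0) - A T^2/(T + B) with E_g(0) = 0.24 eV (an assumed value), and it reuses the R_0 and shot-noise values quoted in item 10.

import numpy as np

k, q = 1.380649e-23, 1.602176634e-19

# Item 1: InSb bandgap at 80 K from the assumed Varshni form
A, B, T = 6e-4, 500.0, 80.0
Eg = 0.24 - A * T ** 2 / (T + B)
print(f"Eg(80 K) = {Eg:.3f} eV")              # ~0.233 eV, as in item 1

# Item 10: Johnson noise over 100 Hz from R0 = 180 kOhm, Eq. (5.141)
R0, df = 180e3, 100.0
iJ = np.sqrt(4 * k * T * df / R0)
print(f"Johnson noise = {iJ:.3e} A")          # ~1.56e-12 A

# Quadrature sum with the shot-noise value quoted in item 10
i_shot = 3.956e-12                            # [A], from the text
print(f"total noise = {np.hypot(iJ, i_shot):.3e} A")   # ~4.25e-12 A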

5.10 Impact of Detector Technology on Infrared Systems

The requirements for future detector technology are driven by convenience needs (cost, size, weight, and power) as well as new capabilities (multicolor capability, larger arrays). 71 In order to meet these future needs, current
capability, larger arrays). 71 In order to meet these future needs, current
technologies are optimized incrementally and new technologies are being
developed. 8 High-performance IR imaging is the single-most dominant
driver in current research. In the medium term, system designers can
expect a wide range of detector products, from top-end large multicolor
arrays to low-end inexpensive thermal detectors. Several technology status
reviews are available. 8,27,41,72
The cost of high-performance IR detector arrays precludes their wide-
spread use in consumer electronics. Recent years have seen meteoric ad-
vances in uncooled detector array technology, both in performance im-
provement and cost reduction. Microelectromechanical systems (MEMS)
technology provided the means to construct thermal detector elements
with low heat capacity and short time constants. Some of these detector
arrays achieve 0.02–0.05-K NETD below 100-Hz noise bandwidth. Thermal detector elements achieve D^* values of 10^8 to 10^9 cm·\sqrt{Hz}/W. 8 Focus is
now on lowering cost in quantity production. The number of detector elements per chip follows Moore’s law (doubling the number of pixels every
ments per chip follows Moore’s law (doubling the number of pixels every
18 months). The availability of these low-cost IR detector arrays opens up
new applications in the consumer market.
Detector cooling requirements are major cost, size, weight, and power
drivers. Traditional technologies are incrementally improved to support
operation at higher temperature. A few decades ago, long-wave photon
detectors had to be cooled to 4 K. Present-day technology requires long-
wave detectors to operate at 77–85 K. In the near-term future the require-
ment for low operating temperatures will be relaxed. Background-limited
operation can be achieved at a detector temperature of 100–120 K for LWIR
detectors and 160 K for MWIR detectors. 7,8
Array sizes are ever increasing with megapixel IR staring array de-
tectors readily available at competitive prices. Array sizes in conventional
technology are approaching the limit of practical element size — larger ar-
rays will require larger detectors and concomitant scale increase in optics.
Large optics represent exponential increase in system cost. Hence, several
new technologies are investigated to reduce pixel size. Staring array detec-
tors in the LWIR are subject to limited charge accumulation in the read-out
electronics — a particular problem in the LWIR because of the large pho-
ton flux at the longer wavelengths. This remains a challenge to be resolved
by new capacitor technologies.
HgCdTe is a very versatile material, supporting spectral band opti-
mization and multicolor arrays. The material is subject to several issues
related to a weak Hg-Te bond, but many of these have been resolved. None
of the newer materials yet provide fundamental advantages over the estab-
lished HgCdTe technology base, which is expected to remain as a material
of choice in many detector solutions. HgCdTe technology is mature in
the MWIR but less so in the LWIR spectral band. InAsSb can be tuned to
provide LWIR spectral coverage, with similar predicted performance. 8 In
the shorter-wavelength bands there are many new material types, offering
high performance, low noise, and tunable alloys. The InGaAs alloy compo-
sition can be adjusted to enable detectors from 1.6–2.6 µm. In0.53 Ga0.47 As
provides a viable alternative to Ge.
QWIP detectors rely on bandgap alteration, such that new energy
states are created with bandgaps optimized for the required spectral range.
QWIP detectors are manufactured in GaAs/AlGaAs for MWIR and LWIR
applications. HgCdTe still outperforms QWIP detectors in terms of spec-
tral range and D ∗ . QWIP detectors are more uniform than HgCdTe detec-
tors, which eases the burden of nonuniformity correction. QWIP detectors
also lend themselves better to very large arrays. QWIP detectors are less
susceptible to high background photon flux (but also less sensitive to target
signals) because the detector spectral width is narrower than HgCdTe.
Multicolor detectors are currently in advanced research, and high-
performance detectors should appear on the market in the short-to-medium
term. One approach is to build a HgCdTe shorter-wavelength detector on
top of a longer-wavelength detector. Another approach uses monolithic
structures with different spectral responses as already reported in QWIP
IR photodetectors. 73,74
Several new technologies are being investigated, including quantum
dot detectors, 75 nearly matched lattice detectors, 76 and strained-layer su-
perlattice detectors. 77 Strained-layer superlattice detectors have bandgaps
smaller than any of the constituent materials. Quantum dot detectors con-
fine carriers in three-dimensional space using nanostructures. These tech-
nologies are still in the research phase.

Bibliography
[1] Singh, J., Electronic and Optoelectronic Properties of Semiconductors Struc-
tures, Cambridge University Press, Cambridge, UK (2003).

[2] Dereniak, E. L. and Boreman, G. D., Infrared Detectors and Systems,


John Wiley & Sons, New York (1996).

[3] Rogalski, A., Infrared Detectors, 1st Ed., Gordon and Breach Science
Publishers, Amsterdam, The Netherlands (2000).

[4] Rogalski, A., “Infrared Detectors: an overview,” Infrared Physics &


Technology 43, 187–210 (2002).

[5] Hamamatsu, “Characteristics and use of infrared detectors,” Techni-


cal report SD-12, Hamamatsu (2004).

[6] Rogalski, A., “Infrared Photovoltaic Detectors,” Opto-electronics Re-


view 5, 205–216 (1997).

[7] Piotrowski, J. and Rogalski, A., High-Operating-Temperature In-


frared Photodetectors, SPIE Press, Bellingham, WA (2007) [doi:
10.1117/3.717228].

[8] Rogalski, A., “Infrared detectors: status and trends,” Progress in Quan-
tum Electronics 27, 59–210 (2003).

[9] Vurgaftmann, I., Meyer, J. R., and Ram-Mohan, L. R., “Band Parame-
ters for III-V Compound Semiconductors and their Alloys,” Journal of
Applied Physics 89(11), 5815–5875 (2001).

[10] Piprek, J., Semiconductor Optoelectronic Devices: Introduction to Physics and Simulation, Academic Press, San Diego, CA (2003).

[11] Murdin, B., Adams, A., and Sweeney, S., “Band Structure and High-
pressure Measurements,” Ch. 2 in Mid-infrared Semiconductor Opto-
electronics , Krier, A., Ed., 93–126, Springer, Berlin (2006).

[12] Rogalski, A., Infrared Detectors, 2nd Ed., CRC Press, Boca Raton, FL
(2011).

[13] Palmer, J. M. and Grant, B. G., The Art of Radiometry, SPIE Press,
Bellingham, WA (2009) [doi: 10.1117/3.798237].

[14] van der Ziel, A., Noise in Solid State Devices and Circuits, Wiley, New
York (1986).

[15] van der Ziel, A., Noise: Sources, Characterization, Measurement ,


Prentice-Hall (1970).

[16] Gowar, J., Optical Communication Systems, Prentice-Hall, Upper Saddle


River, NJ (1984).

[17] Kingston, R. H., Optical Sources, Detectors, and Systems: Fundamentals


and Applications, Academic Press, San Diego, CA (1995).

[18] Kingston, R. H., Detection of Optical and Infrared Radiation , Springer,


Berlin (1978).

[19] Keyes, R. J., Optical and Infrared Detectors , Springer-Verlag, Berlin


(1980).

[20] Mandelbrot, B. B., Multifractals and 1/ f noise: Wild Self-Affinity in


Physics (1963–1976), Springer, Berlin (1999).

[21] D’Agostino, J. A. and Webb, C. M., “Three-dimensional analy-


sis framework and measurement methodology for imaging system
noise,” Proc. SPIE 1488, 110–121 (1991) [doi: 10.1117/12.45794].

[22] Ready, J., “Optical Detectors and Human Vision,” Ch. 6 in Fundamen-
tals of Photonics , Roychoudhuri, C., Ed., SPIE Press, Bellingham, WA
(2008) [doi: 10.1117/3.784938].

[23] Nyquist, H., “Thermal Agitation of Electric Charge in Conductors,”


Physical Review 32(1), 110–113 (1928) [doi: 10.1103/PhysRev.32.110].

[24] Budzier, H. and Gerlach, G., Thermal Infrared Sensors: Theory, Optimi-
sation and Practice, John Wiley & Sons, New York (2011).

[25] Kruse, P. W., Uncooled Thermal Imaging Arrays, Systems, and Applica-
tions, SPIE Press, Bellingham, WA (2001) [doi: 10.1117/3.415351].

[26] Datskos, P. and Lavrik, N., “Uncooled Infrared MEMS Detectors,” Ch.
12 in Smart Sensors and MEMS , Yurish, S. Y. and Gomes, M. T. S. R.,
Eds., 381–430, Kluwer Academic Publishers, Dordrecht, The Nether-
lands (2003).

[27] Rogalski, A., “Progress in focal plane array technologies,” Progress in


Quantum Electronics 36, 342–473 (2012).

[28] Biberman, L., Ed., Electro-Optical Imaging: System Performance and Mod-
eling, SPIE Press, Bellingham, WA (2001).

[29] Vollmerhausen, R. H., Reago, D. A., and Driggers, R. G., Analysis and
Evaluation of Sampled Imaging Systems, SPIE Press, Bellingham, WA
(2010) [doi: 10.1117/3.853462].

[30] Bielecki, Z., “Readout electronics for optical detectors,” Opto-


electronics Review 12(1), 129–137 (2004).

[31] Scott, L. B. and Agostino, J. A., “NVEOD FLIR92 Thermal Imaging


Systems Performance Model,” Proc. SPIE 1689, 194–203 (1992) [doi:
10.1117/12.137950].

[32] King, K. F., Bernstein, M. A., and Zhou, X. J., Handbook of MRI Pulse
Sequences, Elsevier Academic Press, London, UK (2004).

[33] Rogatto, W. D., Ed., The Infrared and Electro-Optical Systems Handbook:
Electro-Optical Components, Vol. 3, ERIM and SPIE Press, Bellingham,
WA (1993).

[34] Li, C., Skidmore, G., Howard, C., Han, C., Wood, L., Peysha, D.,
Williams, E., Trujillo, C., Emmett, J., Robas, G., Jardine, D., Wan,
C.-F., and Clarke, E., “Recent development of ultra small pixel un-
cooled focal plane array at DRS,” Proc. SPIE 6542, 65421Y (2007) [doi:
10.1117/12.720267].

[35] Niklaus, F., Vieider, C., and Jakobsen, H., “MEMS-Based Uncooled
Infrared Bolometer Arrays — A Review,” Proc. SPIE 6836, 68360D,
Bellingham, WA (2007) [doi: 10.1117/12.755128].

[36] Song, W.-B. and Talghader, J. J., “Design and characterization of adap-
tive microbolometers,” J. Micromech. Microeng 16(5), 1073–1079 (2006)
[doi: 10.1088/0960-1317/16/5/028].

[37] Tissot, J., Trouilleau, C., Fieque, B., Crastes, A., and Legras, O., “Un-
cooled microbolometer detector: recent developments at Ulis,” Proc.
SPIE 5957, 59570M (2005) [doi: 10.1117/12.621884].

[38] Marshall, D. E., “A Review of Pyroelectric Detector Technology,” Proc.


SPIE 132, 110–117 (1978) [doi: 10.1117/12.956063].

[39] Vandermeiren, W., Stiens, J., Shkerdin, G., Kotov, V., Tandt, C. D., and
Vounckx, R., “Infrared Thermo-Electric Photodetectors,” Ch. 8 in Laser
Pulse Phenomena and Applications , Duarte, F. J., Ed., 143–164, Intech
Open, New York (2010).

[40] Kasap, S. O., Principles of Electronics Materials and Devices, 3rd Ed.,
McGraw-Hill, New York (2006).

[41] Rogalski, A., “HgCdTe infrared detector material: history, status and
outlook,” Reports on Progress in Physics 68, 2267–2336 (2005).

[42] Ashcroft, N. W. and Mermin, N. D., Solid State Physics, Harcourt Col-
lege Publishing, Fort Worth, TX (1976).

[43] Sze, S. M. and Ng, K. K., Physics of Semiconductor Devices , 3rd Ed.,
Wiley-Interscience, New York (2007).

[44] Yu, P. Y. and Cardona, M., Fundamentals of Semiconductors: Physics and


Materials Properties, Springer, Berlin (1996).

[45] Rezende, S. M., Materiais e Dispositivos Eletrônicos, Livraria da Física,


São Paulo (2004).

[46] Ibach, H. and Lüth, H., Solid-State Physics: An Introduction to Principles


of Material Science, 4th Ed., Springer, Berlin (2009).

[47] Jackson, K. A. and Schröter, W., Handbook of Semiconductor Technology ,


Wiley-VCH, Berlin (2000).

[48] Rockett, A., The Materials Science of Semiconductors, Springer, Berlin


(2008).

[49] Tilley, R., Understanding Solids, John Wiley & Sons, New York (2004).

[50] Neamen, D. A., Semiconductor Physics and Devices, 3rd Ed., McGraw-
Hill, Boston (2003).

[51] Nag, B. R., Physics of Quantum Wells Devices , Kluwer Academics Press,
Dordrecht, The Netherlands (2001).

[52] Yacobi, B. G., Semiconductor Materials: An Introduction to Basic Prin-


ciples, Kluwer Academic Publishers, Dordrecht, The Netherlands
(2004).

[53] Fox, M., Optical Properties of Solids, Oxford University Press, Oxford,
UK (2001).

[54] Schneider, H. and Liu, H. C., Quantum Well Infrared Photodetectors,


Springer, Berlin (2007).

[55] Dresselhaus, M. S., “Solid State Physics (Four Parts),” http://web.mit.edu/afs/athena/course/6/6.732/www/texts.html.

[56] Schubert, E. F., Light-Emitting Diodes, Cambridge University Press,


Cambridge, UK (2003).

[57] Hecht, E., Optics, 4th Ed., Addison Wesley, Boston, MA (2002).

[58] Chu, J. and Sher, A., Physics and Properties of Narrow Gap Semiconductors, Springer, Berlin (2008).

[59] Haug, H. and Koch, S. W., Quantum Theory of the Optical and Electronic Properties of Semiconductors, 4th Ed., World Scientific, London, UK (2004).

[60] Toyozawa, Y., Optical Processes in Solids, Cambridge University Press,


Cambridge, UK (2003).

[61] Bauer, G., “Determination of Electron Temperatures and of Hot Electron


Distribution Functions in Semiconductors,” Ch. 1 in Solid-State Physics ,
Höhler, G., Ed., 1–106, Springer, Berlin (1974).

[62] Vasileska, D., “Physical and Mathematical Description of the Operation of Photodetectors,” http://nanohub.org/resources/9142.

[63] Rogalski, A., Antoszewski, J., and Faraone, L., “Third-generation in-
frared photodetector arrays,” Journal of Applied Physics 105(9), 1–44
(2009) [doi: 10.1063/1.3099572].

[64] Horn, S. B., “Cryogenic cooling options for forward looking infrared (FLIR),” Proc. SPIE 0245 (1980) [doi: 10.1117/12.959339].

[65] Lerou, P.-P. P. M., Micromachined Joule–Thomson cryocooler, PhD thesis,


University of Twente, Enschede (February 2007).

[66] TE Technology, “Thermoelectric Coolers: FAQ & Technical Informa-


tion,” http://www.tetech.com/FAQ-Technical-Information.html.

[67] Boyd, R. W., Radiometry and the Detection of Optical Radiation , John
Wiley & Sons, New York (1983).

[68] Quimby, R. S., Photonics and Lasers: An Introduction , Wiley-


Interscience, New York (2006).

[69] Ossicini, S., Pavesi, L., and Priolo, F., Light Emitting Silicon for Mi-
crophotonics, Springer, Berlin (2003).

[70] Pyradi team, “Pyradi data,” https://code.google.com/p/pyradi/source/browse.

[71] Daniels, A., Infrared Systems, Detectors and FPAs, 2nd Ed., SPIE Press (2010).

[72] Rogalski, A., “Material considerations for third generation infrared


photon detectors,” Infrared Physics and Technology 50, 240–252 (2007).

[73] Alves, F. D. P., Santos, R. A. T., Amorim, J., Issmael, A. K., and
Karunasiri, G., “Widely Separate Spectral Sensitivity Quantum Well
Infrared Photodetector Using Interband and Intersubband Transi-
tions,” IEEE Sensors Journal 8(6), 842–848 (2008).

[74] Santos, R. A. T., Alves, F. D. P., Taranti, C. G. R., Filho, J. A., and
Karunasiri, G., “Quantum wells infrared photodetectors: design us-
ing the transfer matrix method,” Proc. SPIE 7298, 72980B (2009) [doi:
10.1117/12.820251].

[75] Martyniuk, P. and Rogalski, A., “Quantum-dot Infrared Photodetec-


tors: status and outlook,” Progress in Quantum Electronics 32(3–4), 89–
120 (2008).

[76] Ting, D. Z., Hill, C. J., Soibel, A., Nguyen, J., Keo, S. A., Lee,
M. C., Mumolo, J. M., Liu, J. K., and Gunapala, S. D., “Antimonide-
based barrier infrared detectors,” Proc. SPIE 7660, 76601R (2010) [doi:
10.1117/12.851383].

[77] Razeghi, M., Hoffman, D., Nguyen, B.-M., Delaunay, P.-Y., Huang,
E. K., Tidrow, M. Z., and Nathan, V., “Recent Advances in LWIR Type-
II InAs/GaSb Superlattice Photodetectors and Focal Plane Arrays at
the Center for Quantum Devices,” Proceedings of the IEEE 97, 1056–
1066 (2009) [doi: 10.1109/JPROC.2009.2017108].

[78] Pyradi team, “Pyradi Radiometry Python Toolkit,” http://code.google.com/p/pyradi.

Problems

5.1 What choice of bias resistance R_L yields optimal detectivity for a photoconductive detector? Three different approaches can be
used: maximum power in the load, maximum change in power in
the load, or maximum responsivity. What do you think should be
the optimal choice for R L , given a detector resistance of Rd ? [3]
5.2 Ignoring the material properties, calculate the spectral responsiv-
ity values for a photovoltaic detector at wavelengths of 0.5, 1, 2.5,
5, and 10 µm for quantum efficiencies of 0.5 and 1.0. [2]
5.3 Repeat the previous calculation for the detector materials listed in
the following table. In the table the material energy gap is given
in electron volts. One electron volt (1 eV) is the amount of energy
that an electron acquires when it is accelerated through a potential
difference of 1 volt. 1 eV is therefore 1.6 × 10−19 joule.

Material   Eg [eV]     Material   Eg [eV]
CdS        2.4         PbS        0.42
CdSe       1.8         PbSe       0.23
GaAs       1.35        InSb       0.23
Si         1.12        HgCdTe     0.1
Ge         0.67
5.4 This is an extension of the previous problems: draw the spectral
response of the various detectors accurately to scale on the same
graph, ranging from 0–14 µm. There is no need to do extensive
calculations and plots, just plot a few key points on the graph.
Use Matlab® or Python™ to get numerical solutions. [4]
5.5 Do a conceptual design of a two-layer sandwich detector that must
cover the spectral region from 2.5–14 µm. Describe which mate-
rials must be used in each layer. Which layer must be in front?
Draw the spectral response of the combined sandwich, for each of
the two layers, on the same graph. Note that the two detectors are
electrically independent but not optically independent! [6]
5.6 Calculate the response time of a silicon-pin diode with capacitance
of 5 pF, series resistance of 100 Ω, and negligible inductance and
leakage current in each of the four bias modes. [4]
5.7 Calculate the transit time of carriers through the silicon detector
described in Section 5.9.4. [2]

5.8 Verify Equation (5.143). Make sure that you understand why the
noise currents seem to add together. [2]
5.9 Determine the spectral responsivity of a silicon detector over the
spectral range 0.25–1.2 µm. Use the spectral absorption coefficient
data on the pyradi website. 78 To simplify the calculation you may
approximate the absorption coefficient by two or three straight-
line segments between 0.25–1.2 µm.
Consider the following two depletion layer designs: (1) d1 = 30 µm,
d2 = 60 µm, and (2) d1 = 4 µm, d2 = 34 µm. In both cases the de-
pletion depth is 30 µm, but in the first case the depletion region is
deep inside the detector, whereas the second is close to the surface
of the detector.
Which design is more suited to the detection of IR laser pulses at
0.9 µm? Which design is more suited to the detection of blue or
ultraviolet laser pulses at 0.33 µm? How would you design a sili-
con detector for optimum responsivity at a particular wavelength?
[8]
5.10 Compare the equations for D ∗ between the photoconductive and
photovoltaic cases. [2]
5.11 Use Equations (5.145) and (5.154) (or derive your own) to confirm
any three points in Figure 5.29. [6]
5.12 Use the spectral absorption coefficient data on the pyradi web-
site 78 to determine the coefficients for all of the materials, fitting
Equations (5.78), (5.79), and (5.80). [10]
5.13 Review Section 9.7 and then recalculate Figure 5.29 for a conical
background FOV with an half-apex angle of 15 deg. The cold
shield is at 77 K. [10]
5.14 A popular tea-time discussion topic involves the timing of when
to add the milk to the tea. Given a time lapse T, which cup will be
warmer: the cup with milk added at t = 0, or the cup with milk
added at t = T? Develop a mathematical model for the problem
and find an answer. Ignore heat conduction through the base of
the cup (the tea is served in styrofoam cups). Assume a constant
heat capacity of 4.15 kJ/(kg·K). Plot the temperature of the tea
versus time for three cases: (a) milk at t = 0, (b) milk at t = T, and
(c) no milk. [8]
5.15 Derive Equation (5.34) in full, with motivation for the formulation
and derivation. [2]
5.16 Design a bolometer thermal detector element for use in an image
array operating at a frame rate frequency of 20 Hz. The detector
element will be used in an array of 512 × 512 elements with size 100 µm × 100 µm. Explain how your design optimizes perfor-
mance. [4]
5.17 Photon emission from a blackbody follows the Poisson probabil-
ity distribution. Calculate the SNR (see Section 7.1.3) inherent in
blackbody radiation as a function of temperature.
5.18 Show by mathematical derivation and by plotting the result how
a cold shield will increase a detector’s D ∗ by 1/ sin θ. [6]
5.19 Calculate and plot the thermal-fluctuation-noise PSD over the frequency range 10^0 to 10^4 Hz, for C = 4 × 10^-9 J/K, G = 10^-7 W/K
at temperatures of 100 K, 300 K, and 500 K. What is the time con-
stant? Comment on the results. [4]
5.20 Use Equation (5.56) to calculate and plot the D ∗ of a band-limited
thermal detector for wavelengths from 1 to 10^3 µm. Consider
(a) no filter, (b) only the lower-wavelength cutoff, (c) only the
longer-wavelength cutoff, and (d) cutoff at both wavelengths. Also
plot the D ∗ for a photon detector with no filter. Comment on your
results. [6]
5.21 Develop a mathematical model for the internal quantum efficiency
of a photoconductive detector as a function of depth of the device.
Determine the thickness of the detector in terms of the absorp-
tion coefficient for an internal quantum efficiency of 50% and 95%.
Implement the model in software and plot the internal quantum
efficiency as a function of detector depth. [6]
5.22 Derive an equation for photon detector responsivity in terms of
photon flux. Do a detailed dimensional analysis and investigate
the units of the photon-flux responsivity. [2]
5.23 Calculate and plot the spectral responsivity for a silicon, a germa-
nium, an InSb, and a HgCdTe detector. [6]
Calculate the spectral D ∗ for the above detectors, for background
limited operation when (1) immersed in 300 K background and
(2) facing vertically up, looking through a 300-K atmosphere into
space. The detector is equipped with a cold shield (Section 9.7)
with half-apex angle of 10 deg. Comment on your observations.
[6]
The data for this problem is given in the DP04.zip data file on
the pyradi website. 70
Chapter 6
Sensors

The soul never thinks without a picture.


Aristotle

6.1 Overview

This chapter provides an introductory overview of sensors. The analysis is limited to small-angle (paraxial) optics. The purpose is to equip the reader
to do a first-order design in the system context of this book. Detailed
sensor design is beyond the scope of this chapter and indeed not required
for this text. Fundamental to the sensor concept is the geometry of solid
angles and how these are effected in the sensor. The second important
element is the conversion of optical energy into electrical energy, including
the effect of noise.
The path by which a ray propagates through an optical system can
be mathematically calculated. The sine, cosine, and tangent functions are
used in this calculation. These functions can be written as infinite Taylor
series, i.e., \sin(x) = x - x^3/3! + x^5/5! - x^7/7! + \cdots. The paraxial approximation only uses the first term in the sum: \sin(x) \approx \tan(x) \approx x, \cos(x) \approx 1.
The paraxial approximation is valid only for rays at small angles with and
near the optical axis. The paraxial approximation can be effectively used
for first-order design and system layout despite the small-angle limitations.
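The size of the small-angle error is easily tabulated, as the short Python sketch below shows.

import numpy as np

# Relative error of the paraxial approximation sin(x) ~ x
for deg in (1, 5, 10, 20):
    x = np.radians(deg)
    err = (x - np.sin(x)) / np.sin(x)
    print(f"{deg:2d} deg: relative error = {err:.2%}")

At 1 deg the error is well below 0.01%, while at 20 deg it exceeds 2%, which is why the paraxial approximation is restricted to rays near the optical axis.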
The coverage of detectors and noise in this chapter is, similarly, only
a brief introduction: sufficient detail is given to support first-order design
and modeling.

6.2 Anatomy of a Sensor

The electro-optical system performs two functions. First, the optics focus
the ‘object’ (the thing looked at) into the ‘image’ (a geometrically scaled
representation of the object). 1–6 Second, the sensor collects and converts

optical energy into a useable signal for subsequent use. In the system chain, the sensor is the third building block, after the source and medium. Figure 6.1 shows a conceptual representation of an optical sensor, defining some terminology for use in this chapter.

Figure 6.1 Anatomy of a sensor.
The optical flux enters from the left through a window. The window
is normally a transparent optical material; e.g., BK7 glass for visual-band
systems or ZnSe for infrared systems. The window will probably have
a thin, multilayer anti-reflection coating to reduce optical reflections from
the window surfaces (see Section 3.4.3). Infrared window materials may
have a hard coating to protect the soft material of the window against
scratches.
The aperture stop is often designed as a mechanical device located
such that vignetting is reduced or eliminated. Some systems do not have
a dedicated aperture stop, and the diameter of one of the lenses performs
the function.
The optical elements (windows, lenses, and mirrors) can be divided
into two categories: elements with optical power and elements without
optical power. Surfaces without power (plano surfaces) have no focusing
effect, whereas surfaces with power (nonplanar surfaces) affect the diver-
gence or convergence of optical rays. The optical elements are held in
place in the optical barrel by mounting rings, locating the elements accu-
rately in separation and lateral displacement. Some form of strain relief is
required to prevent the optical elements from breaking at temperature ex-
tremes — a typical strain relief is a rubber spacer on a noncritical surface.
Optical elements can also be bonded together or glued to metal mounting structures.
Stops or baffles are sometimes used to prevent or minimize internal
stray light reflections. The baffles are normally made of thin metal plates
spaced at regular intervals. These baffles are covered in an absorbing coat-
ing to absorb rather than reflect the stray light. A commonly-used device
called a light trap consists of two absorbing plates at a small angle with
each other. Any light entering the trap reflects down the two plates, being
absorbed at each reflection (see Section 3.2.5).
Some optical sensors have spectral filters that selectively spectrally
transmit the optical flux. The filter, together with the detector spectral
response, defines the spectral response of the sensor. At its opaque wave-
lengths, a filter emits thermal radiation, as does any other object.
The field stop, if present, can be placed at any location in the optical
system where an image is formed (see the next section). For simple lenses,
this is typically near the back focal plane. For systems with internal focal
planes (e.g., the Gregorian telescope at the bottom of Figure 6.10), the field
stop can be located at such an internal focal plane. If a detector is used,
the boundary of the active detector area effectively defines the field stop
because the detector has zero responsivity outside the detector active area.

6.3 Introduction to Optics

6.3.1 Optical elements

Optical elements include lenses, mirrors, windows, prisms, filters, and similar items. Of particular interest here are optical elements with optical
power. The optical power in a curved optical element refracts or reflects
the optical rays so as to converge or diverge the rays. Optical power can
be used to form an image of an object in a plane called the focal plane.
Figure 6.2 shows the key characteristics of a lens. Assuming unity index
of refraction in the object and image planes, the lens equation relates the
object properties and image properties with the following equations:
\frac{1}{s'} = \frac{1}{s} + \frac{1}{f},   (6.1)
x\,x' = -f^2, \textrm{ and}   (6.2)
m = \frac{h'}{h} = \frac{s'}{s} = \frac{\sin\theta}{\sin\theta'},   (6.3)

where s' is the image distance, s is the object distance, f is the focal length, h' is the image height, and h is the object height. The distances s and s' are positive measured toward the right and negative measured toward the left (the same as x in the Cartesian coordinate system).
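The following minimal Python sketch solves Equation (6.1) for the image distance and magnification; the 100-mm focal length is an arbitrary assumed value, and the object distances anticipate the ray-trace cases of Section 6.3.2.

def thin_lens_image(s, f):
    # Eq. (6.1): 1/s' = 1/s + 1/f; magnification m = s'/s from Eq. (6.3)
    sp = 1.0 / (1.0 / s + 1.0 / f)
    return sp, sp / s

f = 100.0   # assumed focal length [mm]
for s in (-2 * f, -3 * f, -100 * f):
    sp, m = thin_lens_image(s, f)
    print(f"s = {s:9.1f} mm -> s' = {sp:7.2f} mm, m = {m:+.4f}")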
The optical axis is the ‘centerline’ of the optical system, normally
defining zero field angle. Optical systems have ‘planes’ along the optical
axis with special properties. Figure 6.2 shows a system with two principal
planes P1 and P2 , and two focal planes F1 and F2 . These planes are ide-
ally plano (flat) and perpendicular to the optical axis. In real-world optics,
these ‘planes’ are not flat but could be any shape, depending on the de-
sign. In a simple optical system the principal and focal planes tend to be
near-spherical. More-sophisticated systems are designed to achieve near-
flat focal planes because detector surfaces are flat. The planes are often
rotationally symmetric and centered on the optical axis, but this depends
on the design of the system. In this chapter, the term ‘plane’ is used to
denote an optical plane (which is normally not mathematically flat).
A ray passing through the first F1 (front) or second F2 (back) focal
points will be parallel to the optical axis after traversing the lens. A ray
passing through the first or second principal points P1 or P2 will traverse
through the lens with no change in direction. An object at location H1 will
be imaged to the image location at H2 (in the paraxial approximation).
The location of H1 (at distance s) and H2 (at distance s') are called conjugates of each other. 'Infinite conjugates' is a special but common case: when the object is located at infinity s → −∞, then s' = f, and the
image is formed in the back focal plane of the lens or telescope. Most
real-world optical systems operate at near-infinite conjugates because the
object is normally located a substantial distance from the optical system.
The marginal ray is the ray from the optical axis at the object or image,
through the edge of the pupil. The chief ray is the ray from the edge of
the field stop, through the principal points of the optics. The chief ray
propagates through the lens with no change in direction. The field angle
is defined by the chief ray, as the angle subtended by the image height h' and s', or by the object height h and s.
A real optical system has the first and second principal points at sep-
arate locations, as shown in Figure 6.2(a). For first-order studies the op-
tical system is often simplified to the ‘thin-lens’ approximation shown in
Figure 6.2(b). In the thin-lens approximation the front and rear principal
points coincide. The principal points and focal points all lie in planes,
perpendicularly intersecting the optical axis at P, F1, and F2.
Figure 6.2 Object and image relationship: (a) thick lens and (b) thin-lens approximation. (P1, P2: principal points; F1, F2: focal points; H1, H2: object/image locations.)

6.3.2 First-order ray tracing

Figure 6.2(a) shows the imaging relationship in a complex lens. In the paraxial (thin-lens) approximation, the optical system is simplified as shown
in Figure 6.2(b). Consider now a few applications of first-order paraxial
ray tracing through the optical system. Figure 6.3 shows only three cases; there are many more. From Equation (6.1) it follows that if s = −2f, then s' = 2f and m = −1, leading to the top picture in Figure 6.3. This means that an object placed at twice the focal length of a lens will image to twice the focal length behind the lens, with a magnification of magnitude 1 but an inverted image.
Increasing to s = −3f leads to s' = 3f/2 and m = −0.5. Note the trend: for (negatively) increasing s, the image moves closer to F, and the magnification decreases. In the limit for s → −∞, s' → f and m → 0. Hence, as the object moves further away from the lens, the image approaches the back focal plane F of the lens.
Figure 6.3 First-order ray tracing.

The results in Figure 6.3 can be constructed geometrically by two simple rules: first, a ray entering a lens parallel to the optical axis always passes through the rear focal point of the lens. Second, a ray passing through the center (principal points) of the lens does not change its direction. Any point on the object can be mapped by these rules to a point on the image. Likewise, the inverse also applies.
The application of these simple rules leads to the sensor model defi-
nition shown in Figure 6.4, where the detector is located in the focal plane
of the lens, imaging an object at infinity. The sensor field of view (FOV) is
defined by the two chief rays through the center of the lens.

6.3.3 Pupils, apertures, stops, and f -number

An optical aperture, pupil, or a stop is a well-defined opening in an optical system with the purpose of limiting the amount of light, to reject stray light, or to clearly define the field of view. This well-defined opening
could be fixed (e.g., a metal plate cut to specific size and shape) or it could be variable (e.g., a variable iris for which the diameter of the opening can be adjusted). Figure 6.5(a) shows the apertures and stops in a real lens. In the thin-lens approximation, the entrance pupil falls in the plane of the thin lens, as shown in Figure 6.5(b).

Figure 6.4 Sensor field of view defined by optical and detector parameters.
Optical systems often suffer from unintended stray light when observ-
ing high-contrast scenes. The light could be reflected from optical surfaces
or from the inside of the optomechanics. Stray light can be minimized by
locating nonreflective stops or baffles so as to suppress the internal reflec-
tions.
A stop located in an image plane is called a field stop because this stop
limits the field of view of the system. Some systems have more than one
image plane (e.g., a telescope or microscope). By definition, these image
planes are images of each other. The field stop can thus be located at any
image plane and still have the same effect.
The pupil of an optical system is a plane that limits the amount of
light flowing through the system. The lens diameters limit the amount of
light passing through each lens, but because each lens limits the light in
a different ‘plane’ of the system, the light control is not very precise and
can lead to vignetting when some optical elements ‘shade’ other optical
elements. Vignetting is the effect where one or more optical elements limit
the amount of light nonuniformly over the image plane. The usual effect is
that some (or all) edges of the image are darker than the center of the image. Severe vignetting may even cut off portions of the image completely, e.g., when the eye is not well aligned with the optical axis of a telescope.

Figure 6.5 Stops, numerical aperture, and f-number: (a) complex (thick) lens and (b) thin-lens approximation.
One way to control vignetting is to accurately size the different optical
elements appropriately. A better way to control vignetting is to make all
of the elements slightly bigger than necessary and then to introduce a
mechanical element (aperture stop) to limit the light in a single plane in
a controlled manner. This approach also allows setting the aperture to
different diameters, such as setting the aperture f -stop in a photographic
camera.
The single element that limits the amount of light flow in the optical
system is called the pupil. The pupil is normally a physical device inside
the optics. The optical elements before the pupil image the pupil into
object space into the ‘entrance pupil.’ Likewise, the optical elements after
the pupil image the pupil into image space into the exit pupil, i.e., seen
from the image, this is the diameter of the optical system. The marginal
ray in an optical system is the ray touching the edge of the pupil. In the
thin-lens approximation, the entrance and exit pupil coincide in the thin
lens and have the diameter of the thin lens, as shown in Figure 6.5.
The f -number (f /# or F#) is a geometric construct comprising the di-
ameter of the exit pupil and the focal length; it has no other meaning. The
f -number is a convenient means to denote the amount of optical flux flow-
ing through an optical system of a given focal length. The f -number of the
optical system is defined as f /# = f /D for a system at infinite conjugates,
where f is the focal length, and D is the diameter of the exit pupil. In
photography the term f -stop is used to denote the size of a lens entrance
pupil, in a progression of doubling areas for each successive f -stop. f -
stop values are f -numbers, usually in a series of the form f /1.4, f /2, f /2.8,
f /4, f /5.6, f /8,... The optical systems encountered in this book are fixed
f -number systems, and the f -number is only used to denote the ratio of
focal length to diameter. Higher f -numbers result in lower irradiance on
the focal plane. Systems with lower f -numbers (higher image-plane irra-
diance) become increasingly difficult to design with high optical perfor-
mance, f /1.2 being a reasonable limit for good imaging quality. Generally
speaking, a low-f -number lens is more complex and more expensive than
a high-f -number lens.
The f -number is only defined for a system at infinite conjugates. For
systems not at infinite conjugates, the ‘numerical aperture’ is used as a
measure of the amount of flux flowing through the system at a given focal
length. The numerical aperture is given by NA = n sin θ, where n is the
index of refraction of the medium in which the lens is located, and θ is
the angle of the marginal ray in image space, as shown in Figure 6.5. For
lenses in air, n = 1. The numerical aperture is not constrained to infinite
conjugates, and hence it can vary depending on the location of the im-
age. The numerical aperture should not be estimated from the size of the
aperture or lens diameter but from the inclination of the marginal ray (i.e.,
from the optical design).
At infinite conjugates the f-number and numerical aperture in image space are related by
$$NA = n \sin\theta = n \sin\left(\arctan\frac{D}{2f}\right) \approx n\,\frac{D}{2f}, \qquad (6.4)$$
and hence
$$F\# = \frac{n}{2\,NA}. \qquad (6.5)$$
This approximation is good for well-corrected optical systems. Why is a sine function and not a tangent function used in this equation? The reason is that the principal 'plane' is not a mathematical plane; in well-corrected systems it is near-spherical, centered at F, as shown in Figure 6.5. The thin-lens approximation to f-number is therefore incorrect, and the numerical aperture is more meaningful.
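As a numerical illustration (values chosen arbitrarily, not taken from the text), the sketch below compares the exact and small-angle forms of Equations (6.4) and (6.5) for a nominal f/2 lens in air:

```python
import numpy as np

n, f, D = 1.0, 100.0, 50.0                     # air; focal length and pupil [mm]
NA_exact = n * np.sin(np.arctan(D / (2 * f)))  # Equation (6.4), exact
NA_approx = n * D / (2 * f)                    # small-angle approximation
Fno = n / (2 * NA_exact)                       # Equation (6.5)
print(NA_exact, NA_approx, Fno)  # ~0.2425 vs 0.25; F# ~ 2.06 for an 'f/2' lens
```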
In terms of flux collection, the numerical value of the f -number could
be deceiving because it does not account for optical transmittance or ob-
scuration in the optical pupil. A lesser-used construct is the t-number or
t-stop, which is used to account for the transmittance or obscuration losses
of an optical system. An optical system with such losses will have a t-
number higher than the equivalent f -number for the same lens with no
loss.

6.3.4 Optical sensor spatial angles

Following from the previous sections, it should be clear that there are two solid angles of concern in an optical sensor. These two solid angles are only indirectly related and play quite different roles. This section explores the field of view (FOV) solid angle and the image-flux-collecting solid angle, as depicted in Figure 6.6.
The flux-collecting property of an optical system is determined by
the diameter of the pupil (the lens diameter) and is quantified by the nu-
merical aperture or the f -number of the system. These two numbers are
determined only by the marginal ray in the system. The bottom picture
in Figure 6.6 shows the optical system and the marginal ray touching the
pupil. It is evident from the bottom picture that the flux within the pupil
is focused by the lens and collected in the detector on the focal plane.
The irradiance at a point on the focal plane is determined only by the f -
number/numerical aperture/marginal ray. The size of the detector has no
direct effect on the flux-collecting performance of the optical system. The
chief ray maps the field stop into object space, thereby determining the
sensor’s FOV. The size of the pupil has no direct effect on the FOV of the
system.
Consider the top picture in Figure 6.6, showing the flux collected by the optical system: not all of it is collected within the field stop. The field stop blocks the flux outside of the desired field of view — this flux does not reach the focal plane. The sensor is only sensitive to flux within the field stop. The chief ray, determined by the size of the field stop and s', therefore determines
the sensor’s FOV. The size of the pupil has no direct effect on the FOV of
the system.
Figure 6.6 Optical sensor spatial angles.

There is a symmetry between the flux-collecting solid angle and the FOV: both are determined by an aperture or a stop and the value of s'. The common denominator is s', but the commonality ends there. The
FOV is determined by the field stop (size of the detector), whereas the
flux-collecting solid angle is determined by the pupil or aperture stop.
Section 6.8 describes a parameter called the system throughput, which
considers the combined effect of the two solid angles.

6.3.5 Extended and point target objects

For the discussion in this section assume a uniform radiance over an object's area. If the object is smaller than the FOV, the object can be considered a point target in terms of the sensor FOV. If an object is larger than the FOV, the full sensor FOV is filled by the object (called an extended target or object in terms of the pixel size). The flux contributed by an extended object is defined by the size of the footprint of the sensor FOV on the object and not by the object size itself. See also Section 7.6.
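The distinction reduces to a footprint comparison. The sketch below is an illustrative helper (not from the text) that classifies a uniform target for a single pixel with FOV angle α in [rad]:

```python
# Sketch: point vs. extended target for one pixel of FOV angle alpha [rad].
def target_class(target_size, target_range, alpha):
    footprint = alpha * target_range   # pixel footprint at the target [m]
    return "extended" if target_size > footprint else "point"

print(target_class(target_size=2.0, target_range=1000.0, alpha=0.5e-3))
# footprint = 0.5 m; a 2-m target overfills the pixel -> "extended"
```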

6.3.6 Optical aberrations

Ideal, paraxial optical systems form perfect images; but practical optical
systems do not because of optical aberrations.1–5,7 Optical aberrations are
not only the result of poor workmanship or poor quality control but also a
mathematically founded physical process. Elements with optical power
(lenses or curved mirrors) refract or reflect optical rays toward a focal
point; but a single point on the object may not always image to a single
point in the focal plane.
Geometrical aberrations8 occur as a result of the geometry of the sur-
faces. There are huge families of physical aberrations, including several
variations of spherical aberration, coma, astigmatism, and distortion. Fig-
ures 6.7 and 6.8 depict only the first-order aberrations. Spherical aberration arises when rays distant from the optical axis focus in a different plane than do paraxial rays. As a result, there is a plane located at the 'circle
of least confusion’ that provides the smallest optical spot size. Chromatic
aberration occurs as a result of the index of refraction in lenses not be-
ing constant for different colors. This means that blue light does not focus
where green light focuses, nor where red light focuses. Astigmatism is an aberration in which the optical power in the vertical and horizontal planes differs (e.g., cylindrical deviations from a spherical shape). Comatic aber-
ration occurs when an off-axis spot is focused in different locations in the focal plane by different parts of the lens. Coma gets its name from the comet-like shape of the spot. Field curvature occurs when the image of a plane in the object space is imaged to a curved surface. A focal surface with constant distance from the rear principal point (a simple lens) will have a spherical field curvature. Distortion arises when constant distances in the object plane are no longer constant in the image plane. The point spread functions of off-axis aberrations (i.e., coma, astigmatism, and distortion) are not rotationally symmetric.

Figure 6.7 Optical aberrations: (a) first-order spherical aberration, (b) chromatic aberration, and (c) astigmatism.

Figure 6.8 Optical aberrations: (a) first-order coma, (b) field curvature, and (c) distortion.
The main purpose of optical design is to create and employ degrees
of freedom in the optical design so as to minimize the system’s aberra-
tions. These degrees of freedom include the number of optical elements,
curvature of surfaces, thickness between surfaces, and index of refraction.
The optical designer adjusts all of the degrees of freedom in the design
to balance the optical aberrations against each other so as to cancel out,
thereby providing the required optical performance. Optical design is a
very sophisticated topic beyond the scope of this book.

6.3.7 Optical point spread function

The image sharpness of an optical system is described by the impulse response of a focused imaging system — this is also known as the point spread function (PSF) of the system. The mathematical Dirac delta impulse
spread function (PSF) of the system. The mathematical Dirac delta impulse
function can be conveniently modeled in optical terms by the infinitely
small size of a distant star (assuming no atmospheric degradation), and
the PSF is the image of the star. The PSF is a two-dimensional variable in
the image plane.
Even a perfect optical system will have a non-Dirac delta impulse re-
sponse, stemming from the physical diffraction of the light wave in the
aperture. The impulse response of a clear circular aperture is called the
Airy pattern,8 described mathematically using Bessel functions.
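For reference, the Airy pattern can be evaluated numerically as I(x) = [2 J1(x)/x]² with x = πDr/(λf). The sketch below (parameter values are illustrative) uses SciPy's Bessel function:

```python
import numpy as np
from scipy.special import j1

wl, D, f = 4e-6, 50e-3, 100e-3        # wavelength, aperture, focal length [m]
r = np.linspace(1e-9, 30e-6, 2000)    # radial distance in focal plane [m]
x = np.pi * D * r / (wl * f)
airy = (2 * j1(x) / x) ** 2           # normalized Airy pattern, unity on axis
print(f"first dark ring at ~{1.22 * wl * f / D:.2e} m")  # classical Airy radius
```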
A practical optical system will have an impulse response compris-
ing the diffraction effect, optical aberration effects, and aberrations due to
manufacturing and optical element mounting. A lens design with neg-
ligible optical aberration is called a 'diffraction-limited design' because
diffraction is the main contributor to the optical PSF.
The optical PSF can be calculated from diffraction and aberration pa-
rameters. The optics’ contribution to the PSF can be calculated by a tech-
nique called ray tracing, where the paths of large numbers of rays are
calculated through the optical system. The PSF generally varies across the image: it is not constant with change in field angle.
Figure 6.9 Commonly used refractive optical systems: Cooke triplet, double Gauss, and telephoto.

From linear systems theory it can be shown that the image in the focal plane of an optical sensor will be the convolution of the PSF with the 'ideal' image in the absence of diffraction and aberrations.
The two-dimensional Fourier transform of the optical PSF yields the optical transfer function8 (OTF), a complex-valued optical frequency response. The modulation transfer function (MTF) of the optical system is the magnitude of the OTF. The MTF of a system with aberrations is not always rotationally symmetric. MTF is commonly used for the specification, design, and testing of lenses.
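A minimal numerical sketch of this PSF-to-MTF relationship is shown below, using a Gaussian stand-in for a real lens PSF (arrays and values are illustrative only):

```python
import numpy as np

n, dx = 256, 1e-6                                    # samples, pitch [m]
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * (3e-6) ** 2))   # stand-in PSF
psf /= psf.sum()                                     # normalize to unit volume
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))  # complex OTF
mtf = np.abs(otf)                                    # MTF = |OTF|
freq = np.fft.fftshift(np.fft.fftfreq(n, dx))        # spatial freq [cycles/m]
print(mtf[n // 2, n // 2])                           # 1.0 at zero frequency
```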

6.3.8 Optical systems

Most optical systems consist of a number of optical elements. These elements are carefully shaped and located to achieve the required optical
performance. Some typical lens designs are shown in Figure 6.9. Note
the large differences between the designs in this figure: every lens de-
sign is unique and different from any other lens. Some lenses provide the
required performance with only a few elements, whereas other lenses re-
quire a large number of elements to meet the requirements. Some low-cost
applications, such as cameras in mobile phones, allow for large distortion
in the optical design and correct the distortion by subsequent image pro-
cessing.
Systems with long focal lengths may become very long and might
not fit into the available space. A very convenient solution is to ‘fold’
the system with mirrors, resulting in a shorter and more compact sys-
tem. Figure 6.10 shows a Cassegrain folded optical system. This approach
yields shorter and more-compact solutions than the equivalent unfolded
design. Another folded optics system is the reflective Gregorian afocal
telescope. Afocal means that the optical system does not have a focal
length; it does not focus an object to an image plane. The Gregorian tele-
scope has folded optics, but it is not necessarily shorter than the refractive
equivalent. Reflective optics may yield shorter optical systems but carry
several disadvantages, including central obscuration, manufacturing, and packaging difficulties.

Figure 6.10 Reflective optics: the Cassegrain folded optics and the Gregorian afocal telescope.

6.3.9 Aspheric lenses

The optical power in an optical surface stems from the curvature of the op-
tical surface. In principle, any curved surface can produce optical power,
as is evidenced by studying common, household glass items. Most optical
elements were traditionally manufactured with spherical profiles because
such profiles are easily formed by conventional lapping and polishing tech-
niques. There is also a class of surfaces called aspherical surfaces with
useful optical properties.
The profile of an aspheric lens is neither spherical, cylindrical, nor flat.
During optical design the surface of an aspherical element can be specified
such that it provides optical power, but at the same time minimizes or
corrects optical aberrations caused by other elements. In particular, the
aspherical surface reduces or eliminates spherical aberration. This means
that fewer optical elements are required, and the lens assembly is simpler
and lighter. Aspherical elements are more difficult to manufacture than
spherical optics and are thus used where the total system solution provides
benefits such as weight, complexity, or cost.
Figure 6.11 Laboratory collimator and long-range configurations.

6.3.10 Radiometry of a collimator

Many sensors are focused at infinite conjugates, hence there is a need for an instrument that can create images of objects at infinity within the limited confines of the laboratory. A collimator is simply an element with optical power
(lens or mirror), focusing an object at infinity. Because the image is located
at infinity, the optical rays are parallel to the optical axis (collimation).
The setup can be depicted as shown in the top picture in Figure 6.11.
In terms of the flux transfer defined in Equation (2.31), the object range
R01 must be replaced by the collimator focal length. The sensor is focused
at infinity and thus can image the collimated object image, in focus, on
the sensor focal plane. The image diameter in the sensor focal plane is
given by ds = dc f s / f c , where f s is the sensor optics focal length, f c is the
collimator focal length, and dc is the object diameter.
The collimator provides an ideal means to perform laboratory work at
infinite conjugate sensors, but there are practical considerations complicat-
ing such work. These considerations include optical axis alignment, lateral
placement in the beam, and beam vignetting. If the collimator and sensor
optical axes are not parallel, the object image will be displaced from the
center of the sensor image. If the sensor is displaced laterally, part of the
flux in the beam will be lost.
One common use of a collimator is to simulate a source at long range in the laboratory. Figure 6.12 illustrates the vignetting effect that occurs for a large source and a small collimator focal length.

Figure 6.12 Collimator beam vignetting.

Flux emanating from
the center of the source is collimated to parallel rays (solid lines in Fig-
ure 6.12). Flux emanating from the outer edges of the object is collimated
to parallel rays at an angle to the optical axis (see the two smaller inserts
in Figure 6.12).
From the geometry, it is evident that the only part of the beam pro-
viding flux from the full source area is the conical-shaped zone 1. Zone 4
does not contain any flux from the outer rim of the source, whereas zones 2
and 3 vignette either the bottom or top part of the object area. This sim-
ple geometrical analysis does not take into account the sensor’s FOV and
entrance aperture diameter, both of which place even further restrictions
on the sensor placement within zone 1. If it is important to obtain all of
the flux emanating from the source (i.e., when calibrating a sensor), it is
necessary to locate the whole of the sensor aperture in zone 1. The sensor
located at position A in the beam has no vignetting. The sensor located at
position B in the beam will only receive flux from the central part of the
source.
Using similar triangles, it follows that Rv /Dc = f c /dc , hence Rv =
Dc f c /dc . The depth of nonvignetted zone 1 can be increased by using a
larger collimator (larger Dc and f c ) or a smaller source (smaller dc ). A
smaller source area might not be feasible because the source area is nor-
mally selected to control the total amount of flux from the source.
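The two collimator relations above are easily evaluated; the sketch below (numbers are illustrative, not from the text) follows the symbols in Figures 6.11 and 6.12:

```python
dc, fc, Dc = 5e-3, 2.0, 150e-3   # source diameter, collimator focal length, aperture [m]
fs = 250e-3                      # sensor optics focal length [m]

ds = dc * fs / fc                # image diameter on the sensor focal plane
Rv = Dc * fc / dc                # depth of the unvignetted zone 1
print(f"ds = {ds*1e3:.2f} mm, Rv = {Rv:.0f} m")
# ds = 0.62 mm and Rv = 60 m: for full-source flux (e.g., calibration),
# the whole sensor aperture must lie within 60 m of this collimator.
```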
6.4 Spectral Filters

An optical filter is an element that transmits light selectively. Several physical processes can be used to effect the filtering action. The two most-
common techniques used are interference filters and absorption filters. In
the case of absorption filters, the bulk of the filter material absorbs energy.
The interference filter consists of many thin film dielectric layers acting as
resonators at optical frequencies. The filter can effectively pass selected
frequencies but reject other frequencies. The class of neutral density filters
provides a constant spectral transmittance over all wavelengths.
Filters are designed to the following generic spectral requirements:
the passband, where the filter should transmit light with the specified min-
imum transmittance, and the stopband, where the filter should suppress
light to the specified suppression. A typical transmittance specification
might be that the filter has a transmittance exceeding 0.7 in the spectral
range 0.7–0.86 µm, with a transmittance of less than 0.001 outside the spec-
tral range 0.65–0.9 µm, over the spectral range of a silicon detector defined
as 0.3–1.2 µm. In the absence of measured filter transmittance data, the
spectral response of a filter can be approximated by Equation (D.4), with
potential shapes shown in Figure D.1(b).
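As an illustration, the sketch below generates such an approximate filter shape numerically. It assumes that Equation (D.4) is a super-Gaussian of the form τ(λ) = τs + (τp − τs) exp(−(2(λ − λc)/Δλ)^s) with an even exponent s (an assumption here; see Appendix D for the exact definition), with parameters chosen to roughly meet the example specification above:

```python
import numpy as np

wl = np.arange(0.3, 1.2, 0.001)                        # silicon band [um]
taus, taup, wlc, dwl, s = 0.001, 0.75, 0.78, 0.2, 30   # assumed parameters
tau = taus + (taup - taus) * np.exp(-(2 * (wl - wlc) / dwl) ** s)
print(tau[(wl > 0.7) & (wl < 0.86)].min())             # > 0.7 in the passband
```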
A spectral filter is a physical object, subject to Planck’s law and Equa-
tion (2.2). At wavelengths where the filter transmittance is less than unity,
the filter reflects ambient flux and/or radiates thermally generated flux.
This leads to the phenomenon where the sensor observes the filter as a
source of energy (in the stopbands). In order to reduce the filter radiation,
the filter is sometimes cooled down, even down to cryogenic temperatures.

6.5 A Simple Sensor Model

This section describes a minimum-detail model of a sensor in terms of simple optics, a detector, and a spectral filter. The very simple sensor
model, developed here from first principles, is further developed in later
chapters.
The sensor is regarded as an imaging system, projecting an image
from a distant object onto the focal plane of the lens. The basic (thin
lens) configuration is shown in Figure 6.13. This combination of lens and
detector results in a sensor that is spatially sensitive over a small angle,
called the field angle or the FOV. In other words, the sensor is insensitive to
radiation coming from sources outside its FOV. It is almost the inverse of a
flashlight, with the lamp filament being replaced with a detector. Compare Figure 6.13 with Figure 6.6, specifically with respect to the role of the optics in this sensor.

Figure 6.13 System parameters used in the simple sensor model.
Most sensor systems operate at or near infinite conjugates; for this model it is assumed that the sensor is focused at infinity s → −∞, and hence s' → F2. Accurate mathematics require that tan α = a/f, but for small angles tan α ≈ α. The respective FOV angles for small angles are then given by the chief ray in terms of the detector size and optics focal length by the following simplified equations:
$$\alpha = \frac{a}{f}, \qquad (6.6)$$
$$\beta = \frac{b}{f}, \quad \text{and} \qquad (6.7)$$
$$\omega = \alpha\beta = \frac{ab}{f^2}. \qquad (6.8)$$

These equations state that a lens with focal length f, in [m], with a detector with dimensions a and b, in [m], in the focal plane has FOV angles α and β in [rad]. The sensor pixel FOV has a solid angle FOV ω in [sr]. For focus at infinite conjugates the optics diameter and clear aperture area are described in terms of the f-number and detector size by
$$D_s = \frac{f}{F\#}, \quad \text{and} \qquad (6.9)$$
$$A_s = \frac{P \pi D_s^2}{4} \qquad (6.10)$$
$$= \frac{P \pi f^2}{4 F\#^2} \qquad (6.11)$$
$$= \frac{P \pi a b}{4 \omega F\#^2}, \qquad (6.12)$$
where F# is the f -number of the optics, Ds is the optics aperture diameter
in [m], As is the clear aperture area in [m2 ], and the factor P is used to
account for diverse loss effects such as vignetting or central obscuration.
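A short numerical sketch of Equations (6.6)–(6.12) follows, for an illustrative 1 × 1-mm detector behind 100-mm f/2 optics (values are not from the text):

```python
import numpy as np

a = b = 1e-3                  # detector size [m]
f, Fno, P = 100e-3, 2.0, 1.0  # focal length [m], f-number, loss factor
alpha, beta = a / f, b / f    # FOV angles [rad], Equations (6.6)-(6.7)
omega = alpha * beta          # pixel solid angle [sr], Equation (6.8)
Ds = f / Fno                  # aperture diameter [m], Equation (6.9)
As = P * np.pi * Ds**2 / 4    # clear aperture area [m^2], Equation (6.10)
print(alpha, omega, Ds, As)   # 0.01 rad, 1e-4 sr, 50 mm, ~1.96e-3 m^2
```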

6.6 Sensor Signal Calculations

Extending on the optical sensor model developed in Section 6.5, this sec-
tion develops a radiometric model of a simple source–medium–sensor sys-
tem to calculate the signal at various locations in the system. In terms of
Equation (2.33) the source is modeled as a Planck radiator with a spec-
tral emissivity ε0λ. The elemental area dA1 is now the entrance pupil of
the optical system, focusing flux onto a detector. It is common practice to
locate an optical filter with transmittance τsλ , near the pupil A1 in order
to selectively control the flux flowing through the detector. An amount
αsλ is absorbed in the receiver area (in most cases this value αsλ is already
accounted for in the detector responsivity). The detector converts the flux
flowing through A1 to an electronic signal. This signal is amplified and
can be used as a measure of the optical flux through the sensor’s entrance
pupil.

6.6.1 Detector signal

The source and receiver areas are often small relative to the distances involved. Then, dA1 → A1. Applying Equations (2.33) and (3.25), the spectral flux in the entrance pupil of a sensor can be written as
$$d^2\Phi_\lambda = \frac{\epsilon_{0\lambda} L_{0\lambda}\, dA_0 \cos\theta_0\, A_1 \cos\theta_1\, \tau_{a\lambda}\tau_{s\lambda}\alpha_{s\lambda}}{R_{01}^2} \qquad (6.13)$$
for the flux in a narrow spectral band and small source area. The αsλ factor is the flux absorptance (emissivity by Kirchhoff's law) in the sensor
as explained in Figure 3.5. It is henceforth assumed that the sensor is aligned toward the object such that θ1 = 0 so that cos θ1 becomes unity.

Figure 6.14 Simple radiometric model of a source, medium, and sensor.
Any real sensor consists of several optical elements, which may include
a window, reflective optical elements (mirrors), refractive optical elements
(lenses), and optical filters. The spectral transmittance values of all of these
filters are assumed to be contained in τsλ .
As shown in Figure 6.14, the flux on the entrance aperture is converted
to an electrical signal in the detector and amplified by the amplifier. As-
sume that a fraction k of the flux falling on the entrance aperture reaches
the detector. The value of k is normally less than unity because of losses
in the optics, vignetting, or other attenuating mechanisms in the sensor.
The detector spectral responsivity may be written as a peak responsivity multiplied by a normalized spectral responsivity, $R_\lambda = \hat{R}\,\tilde{R}_\lambda$ (normally with units of [A/W] or [V/W]). The absorptance variable αsλ in Equation (6.13) is contained in the detector spectral response $\tilde{R}_\lambda$. The detector signal is normally fed into an integrator or preamplifier to capture and amplify the signal for further processing. Assuming an ideal linear preamplifier and an electronic bandwidth that would not limit the signal, the preamplifier output for a monochromatic source at wavelength λ is given by
$$d^2 v_\lambda = \frac{k \hat{R} \tilde{R}_\lambda Z_t\, \epsilon_{0\lambda} L_{0\lambda}\, dA_0 \cos\theta_0\, A_1 \tau_{a\lambda} \tau_{s\lambda}\, d\lambda}{R_{01}^2}, \qquad (6.14)$$
where Zt is the preamplifier gain in [V/A]. Define the spectral sensor response as
$$S_\lambda = \tau_{s\lambda}\tilde{R}_\lambda, \qquad (6.15)$$
regroup the variables, and integrate over all wavelengths, to obtain the signal voltage as
$$dv_S = \frac{k\hat{R}Z_t\, dA_0 \cos\theta_0\, A_1}{R_{01}^2} \int_0^\infty \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} S_\lambda\, d\lambda, \qquad (6.16)$$
where vS is the detector voltage [V] for flux in the band defined by Sλ, and the elemental source area dA0 has been retained. Equation (6.16) can be integrated over the source area to obtain the full source signature:
$$v_S = k\hat{R}Z_t A_1 \int_{A_0} \left( \frac{1}{R_{01}^2} \int_0^\infty \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} S_\lambda\, d\lambda \right) dA_0 \cos\theta_0. \qquad (6.17)$$

Note that Equation (6.17) can be calculated in either the radiant (watts)
or photon (photons/second) domains. In the radiant domain, use Equa-
tions (3.1) and (5.5). In the photon domain, use Equations (3.3) and (5.4).
It is convenient to define an irradiance, called the apparent irradiance, or sensor inband irradiance, which is the irradiance from a given source to a given sensor observed through a given medium. The apparent sensor inband irradiance is defined as
$$E_S = \frac{\Phi_S}{A_1} \qquad (6.18)$$
$$= \frac{k\, dA_0 \cos\theta_0}{R_{01}^2} \int_{\lambda_1}^{\lambda_2} \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} S_\lambda\, d\lambda. \qquad (6.19)$$

From Equation (6.19) it can be seen that the source radiance is spec-
trally weighted with the system response and the medium transmittance,
hence the name apparent irradiance: ‘as observed by the sensor.’ Equa-
tions (6.15), (6.16), and (6.19) form the basis of the system model. These
equations calculate the expected signal for any object at any distance. Be-
cause the spectral integrals contain factors of the source, the atmosphere,
and the sensor, it is necessary to calculate the integral for any new set of
sensor/object data or change in atmosphere or distance.
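A minimal numerical sketch of the apparent-irradiance integral of Equation (6.19) is shown below; the Planck-law constants are standard, but the emissivity, atmosphere, and sensor-response arrays are toy placeholders (cos θ0 = 1 is assumed):

```python
import numpy as np

wl = np.linspace(3.0, 5.0, 201)        # wavelength [um]
dwl = wl[1] - wl[0]
c1, c2 = 3.7418e8, 1.4388e4            # [W.um^4/m^2] and [um.K]
T = 600.0                              # source temperature [K]
L0 = c1 / (np.pi * wl**5 * (np.exp(c2 / (wl * T)) - 1.0))  # [W/(m^2.sr.um)]
eps = 0.9                              # grey-body emissivity
taua = np.exp(-0.2 * (wl - 3.0))       # toy atmospheric transmittance
S = np.where((wl > 3.5) & (wl < 4.8), 0.8, 0.0)            # toy sensor response
k, dA0, R01 = 1.0, 1e-4, 1000.0        # loss factor, source area [m^2], range [m]
ES = k * dA0 / R01**2 * np.sum(eps * L0 * taua * S) * dwl  # Equation (6.19)
print(ES)                              # apparent irradiance [W/m^2]
```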

6.6.2 Source area variations

The previous section assumed a small and uniform source with area dA0. By integrating Equation (6.16) across the source area A0, the total detector signal due to an arbitrary source can be determined as
$$v_S = k\hat{R}Z_t A_1 \int_{\text{source}} \left( \int_{\lambda_1}^{\lambda_2} \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} S_\lambda\, d\lambda \right) \frac{d(A_0 \cos\theta_0)}{R_{01}^2}, \qquad (6.20)$$
where the spectral integral is calculated first for each elemental area dA0, and then over the full area A0. Note that L0λ may be a function of dA0, i.e., the source radiance may vary over the surface of the source.
In the case where the source size is small compared with the distance
between the source and the detector, R01 reduces to the distance between
the source and the detector. If in addition, the source has uniform radiance
over its total area, and the source solid angle is less than the sensor field of
view, the spatial integral over the source can be separated from the spectral
integral:
$$v_S = k\hat{R}Z_t A_1 \int_{\lambda_1}^{\lambda_2} \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} S_\lambda\, d\lambda \int_{\text{source}} \frac{d(A_0 \cos\theta_0)}{R_{01}^2}. \qquad (6.21)$$
The source spatial integral is actually the source solid angle
$$\Omega_s = \int_{\text{source}} \frac{d(A_0 \cos\theta_0)}{R_{01}^2}, \qquad (6.22)$$
and the detector signal can be written as
$$v_S = \Omega_s k\hat{R}Z_t A_1 \int_{\lambda_1}^{\lambda_2} \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} S_\lambda\, d\lambda. \qquad (6.23)$$

Equation (6.23) can be applied for relatively small, uniform sources where the source solid angle, as subtended at the sensor, is less than the
sensor FOV. If the sensor FOV is smaller than the source solid angle, the
source solid angle in Equation (6.23) is replaced with the sensor solid an-
gle, as explained in Section 7.6.

6.6.3 Complex sources

The mathematical model described thus far only considers one source element. A complex source may contain a number of different radiators with highly different characteristics. One portion may be very hot and another portion may be much cooler. For such a source, the total irradiance, within a single FOV, can be found by summing the contributions from all N elements as follows:
$$v_S = k\hat{R}Z_t A_1 \sum_{i=1}^{N} \left( \frac{A_{0i} \cos\theta_{0i}}{R_{01i}^2} \int_{\lambda_1}^{\lambda_2} \epsilon_{0\lambda i} L_{0\lambda i} \tau_{a\lambda} S_\lambda\, d\lambda \right). \qquad (6.24)$$

6.7 Signal Noise Reference Planes

In a real, physical system, the noise contributions are typically created in the detector and the first-stage electronic amplifier. It is sometimes
required to express the noise or signal referred to different locations in the
system. For example, the sensor noise can be expressed as an irradiance
on the entrance pupil of the sensor or even to the source/object plane.
The referred noise or signal levels have the same effect as in the detector
but are appropriately scaled. The different locations are called ‘planes’
following the practice in optical design. As shown in Figure 6.13 there are
four planes of importance: the object plane at the source object, the entrance
pupil or optics plane at the entrance pupil of the optics, the image plane or
detector plane, and the electronics plane.
The mechanism to convert noise is based on the standard radiometry
methodology. The same transformations that are used for optical flux are
used exactly the same way for noise transformations. In particular, note
the following relationships:
$$E = \frac{L A_0 \cos\theta_0}{R^2} = L\,\omega = \frac{M \omega}{\pi}, \qquad (6.25)$$
which are extended to the equivalent noise representations
$$NEE = \frac{NEL\, A_0 \cos\theta_0}{R^2} = NEL\,\omega = \frac{NEM\, \omega}{\pi}. \qquad (6.26)$$

Note that in the above equation, the projected solid angle should be
used because the thermal camera senses Lambertian sources. However, for
the small pixel fields of view generally used, the projected and geometrical
solid angles are numerically equal.
The conventions followed in this book are as follows (the same con-
ventions apply to both noise and signal):

1. Spectral noise equivalent power (NEPλ ) is the optical signal power spec-
tral density [W/µm] required on the detector to yield a signal equal to
the noise in the sensor. NEPλ is always measured in the detector plane
and is expressed as flux in the image plane. See also Section 7.1.3.

2. Spectral noise equivalent irradiance (NEEλ) is the optical signal irradiance required in the entrance pupil to yield a signal equal to the noise in the sensor. NEEλ is always measured in the entrance pupil of the sensor (but excluding the sensor filter) and is expressed in [W/(m2·µm)] incident on the sensor. NEEλ can be conceptually defined as
$$NEE_\lambda = \frac{NEP_\lambda}{A_1 \tau_s} = \frac{\sqrt{\Delta f A_d}}{D^*_\lambda A_1 \tau_s}, \qquad (6.27)$$
where A1 is the sensor entrance pupil area, Δf is the noise equivalent bandwidth, Ad is the area of the detector, D∗λ is the spectral specific detectivity, and τs is the sensor system response (filter transmittance and other losses between the entrance pupil and detector). If τs = 1, it means that all of the power incident on the entrance pupil reaches the detector. See also Section 7.1.3.
3. Spectral noise equivalent radiance (NELλ) is the radiance required at the source in the object plane to yield a signal equal to the noise in the sensor. Units are [W/(m2·sr·µm)]. NELλ is always measured in the object plane. NELλ is defined only for extended targets (filling the complete FOV) and can be derived from the NEP by using Equation (2.31):
$$NEL_\lambda = \frac{NEP_\lambda R_{01}^2}{A_0 A_1 \tau_s \tau_a}, \qquad (6.28)$$
where A0 is the pixel footprint in the object space, R01 is the distance between the sensor optics and the object plane, and τa is the atmospheric transmittance. See also Section 7.1.3.

4. Spectral noise equivalent exitance (NEMλ) is the exitance required at the source to yield a signal equal to the noise in the sensor. NEMλ has units [W/(m2·µm)] and is always measured in the object plane. The NEMλ can be derived from the NELλ by NEMλ = πNELλ [by Equation (2.7)]. See also Section 7.1.3.

5. Spectral noise equivalent temperature difference (NETDλ) is the change in source temperature [K] required to yield a signal equal to the noise in the sensor. NETDλ is always measured in the object plane and is expressed in temperature difference at the object. NETDλ is derived from the NELλ by noting that for small temperature variations
$$\frac{NEL}{NETD} = \frac{dL}{dT}, \qquad (6.29)$$
and hence
$$NETD_\lambda = \frac{NEP_\lambda R_{01}^2}{A_0 A_1 \tau_s \tau_a \frac{dL}{dT}}, \qquad (6.30)$$
where NETDλ has units of [K], and dL/dT is Planck's law temperature derivative [see Equation (3.4)]. See also Sections 7.1.3 and 9.6.

Note that the sensor system response τs, atmospheric transmittance τa, and NEP used above are spectrally varying variables. The above equations
are therefore only valid at a particular wavelength, not over a wide wave-
length band. Wideband spectral quantities are discussed in more detail in
Section 2.6.5.
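At a single wavelength, these conversions chain together directly; the sketch below (all numbers illustrative) converts an NEP to the pupil- and object-plane measures per Equations (6.27)–(6.30):

```python
import numpy as np

NEP = 1e-12              # noise equivalent power [W]
A1 = np.pi * 0.05**2 / 4 # entrance pupil area, 50-mm pupil [m^2]
taus, taua = 0.8, 0.7    # sensor and atmospheric transmittance
A0, R01 = 1.0, 1000.0    # pixel footprint [m^2] and range [m]
dLdT = 0.5               # Planck derivative dL/dT [W/(m^2.sr.K)], placeholder

NEE = NEP / (A1 * taus)                        # Equation (6.27), first form
NEL = NEP * R01**2 / (A0 * A1 * taus * taua)   # Equation (6.28)
NEM = np.pi * NEL                              # exitance form
NETD = NEL / dLdT                              # from Equation (6.29)
print(NEE, NEL, NEM, NETD)
```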
Figure 6.15 Radiance in imaging systems.

6.8 Sensor Optical Throughput

Equation (2.31) forms the basis of all radiometric calculations; the flux flowing from a source to a receiving area can be written as
$$d^2\Phi = \frac{L\, dA_0\cos\theta_0\, dA_1\cos\theta_1}{R_{01}^2} = L\, d\Omega_0\, dA_1\cos\theta_1 = L\, dA_0\cos\theta_0\, d\Omega_1, \qquad (6.31)$$

which implies that the total flux is proportional to the (spatially invariant)
radiance times the product of the source area and the receiver solid angle,
or vice versa. It follows that the source could have been on either side; the
total flux transfer depends on the geometrical relationship between the two areas,
irrespective of which is the source.
For the simplified lossless imaging system shown in Figure 6.15, the
argument above can be extended further. It can be shown that the total
flux flowing through the source and detector is given by
$$d^2\Phi = L\, d\Omega_{so}\, dA_o\cos\theta_o = L\, d\Omega_{os}\, dA_s\cos\theta_s = L\, d\Omega_{od}\, dA_d\cos\theta_d = L\, d\Omega_{do}\, dA_o\cos\theta_o.$$

In all of the above equations it is evident that the total flux flowing
through the system is proportional to the product of a solid angle and the
appropriate projected area. This product of solid angle and area is called
the throughput of the system. System basic throughput, or étendue, is an
indication of the total flux that can pass through the system. It depends
on the FOV and the aperture of the system. It is defined by
$$T = n^2 A\,\Omega, \qquad (6.32)$$

where n is the refractive index at the location where the solid angle Ω is
defined, and A is the area of the aperture at the location where the solid
angle is defined. In terms of the optical system of Figure 6.15, the étendue
is given by $n_o^2 A_o \Omega_s$. The units of throughput are [sr·m²].
In its most-basic form, the power flowing through the source and receiver areas is given by the product of the basic radiance and the system throughput,
$$\Phi = \left(\frac{L}{n^2}\right) n^2 A\Omega = \frac{L A_0 A_1}{d^2}.$$

Optical designers define a quantity, called the Lagrange invariant, that is invariant in any given optical system. The Lagrange invariant is proportional to the square root of the throughput.
The concept of throughput underlines a very important principle: for
a fixed optics f -number and a fixed image height (e.g., detector size), the
product of the look angle (solid angle) and receiving aperture (area), and
hence the flux through the system, is constant. Under this condition, any
increase in FOV requires a decrease in the optics aperture and vice versa:
$$T = n^2 A_{\text{optics}} \Omega_{\text{fov}} = n^2 \left(\frac{\pi f^2}{4F\#^2}\right)\left(\frac{ab}{f^2}\right) = \frac{n^2 \pi ab}{4F\#^2} = \text{constant}. \qquad (6.33)$$

Consider two lenses for use with a staring array detector with a di-
agonal size of 13 mm. The two diagonal field angles required are 5 deg
and 25 deg. Assume that both lenses have apertures of f/1.8. First-order
calculations yield the lens specifications shown in Table 6.1. It is clear that
the wide-angle lens has a smaller diameter, and the narrow angle lens has
a much bigger diameter. This ‘unfortunate’ fact is due to the limitation im-
posed by the throughput of the system (fixed image height and f -number).
When a sensor with given throughput (fixed detector size and f -num-
ber) views an extended target object, the power on the detector stays the
same, irrespective of the FOV. Hence, the sensor gain calibration is identi-
cal for all fields of view (for the given f -number).
Table 6.1 Example lens design demonstrating optical throughput.

Field angle [deg]   Focal length [mm]   Aperture diameter [mm]
5                   149                 82
25                  29.8                16
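The first-order numbers in Table 6.1 can be reproduced with a few lines (a sketch under the small-angle approximation, as used in the text):

```python
import numpy as np

diag, Fno = 13e-3, 1.8                  # detector diagonal [m], f-number
for fov_deg in (5.0, 25.0):
    f = diag / np.radians(fov_deg)      # small-angle focal length [m]
    D = f / Fno                         # aperture diameter [m]
    print(f"{fov_deg:4.0f} deg: f = {f*1e3:5.1f} mm, D = {D*1e3:4.1f} mm")
# 5 deg -> f = 149.0 mm, D = 82.8 mm; 25 deg -> f = 29.8 mm, D = 16.6 mm
```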

The throughput limitation is only imposed on optical systems that attempt to obtain some measure of image quality. If the optical system is not
designed for reasonable image quality, as in typical energy-concentrating
optics, this throughput limitation does not apply. Energy-concentrating
systems can therefore have large apertures and large fields of view.

Bibliography
[1] Shannon, R. R., The Art and Science of Optical Design, Cambridge University Press, Cambridge, UK (1997).
[2] Fischer, R. E., Tadic-Galeb, B., and Yoder, P. R., Optical System Design, McGraw-Hill, New York (2008) [doi: 10.1036/0071472487].
[3] Smith, W. J., Modern Lens Design, 2nd Ed., McGraw-Hill Professional (2004).
[4] Smith, W., Modern Optical Engineering, 4th Ed., SPIE Press, Bellingham, WA (2007).
[5] Kingslake, R. and Johnson, R. B., Lens Design Fundamentals, 2nd Ed., SPIE Press, Bellingham, WA (2010) [doi: 10.1016/B978-0-12-374301-5.00003-6].
[6] Walker, B., Optical Engineering Fundamentals, 2nd Ed., SPIE Press, Bellingham, WA (2009).
[7] Schmidt, J. D., Numerical Simulation of Optical Wave Propagation: With Examples in MATLAB, SPIE Press, Bellingham, WA (2010).
[8] Hecht, E., Optics, 4th Ed., Addison Wesley, Boston, MA (2002).

Problems

6.1 An optical system with an f-number of f/5 employs a single-element detector with size 1 × 1 mm2. The focal length is 250 mm.
Sensors 251

6.1.1 Determine the instantaneous solid FOV of the sensor (in radians).
[2]
6.1.2 Determine the geometric solid angle subtended by the lens as seen
from the detector. [2]
6.1.3 Determine the projected solid angle that the barrel (the walls of
the tube holding the lens and detector) of the sensor subtends as
seen from the detector. [2]
6.1.4 Determine the throughput of the sensor. [2]

6.2 Derive a mathematical description of the detector current for a simple system comprising a small detector, an optical filter, an atmospheric medium, and a small thermal source. The detector and
mospheric medium, and a small thermal source. The detector and
source are separated by more than 100 times the diameter of the
detector. This system has no other optics, collimators, or com-
ponents. Start with the simple graphic for flux transfer between
two small surfaces. Apply the Golden Rules when developing the
answer. [6]
6.3 Use Equations (6.1), (6.2), and (6.3) to calculate the image height, the magnification, and the distance from the back principal plane (s')
for a lens with focal length 100 mm, an object of size 10 mm, at
distances of 200, 400, 1000, and 10000 mm. Show the results in
tabular form. [4]
6.4 Use a 1:1 scale geometrical diagram to plot the chief ray and
marginal ray to calculate the image location and image height for
a lens focal length of 50 mm, an object height of 10 mm, with the
object located at −100, −200, and −∞ mm. The object located at
infinity does not have to be to scale because it would require an
infinite-sized paper.
Confirm the results obtained with the geometrical plot by using
the equations above to calculate the image height and image loca-
tion. [3]
6.5 A sensor has a lens with focal length of 200 mm, f -number of 3,
and a detector size of 10 × 10 mm2 . Draw a diagram of the sen-
sor to scale, showing the chief ray and marginal ray of the system.
Calculate the numerical aperture, the FOV solid angle of the sen-
sor, and the light-collecting solid angle (defined by the numerical
aperture) at the detector. Confirm the calculation by comparing
the results with the angles measured or calculated in the scale
drawing. [5]
6.6 A simple sensor has four pixels, but each of the pixels has a
slightly different responsivity, as shown below. This means that
if the sensor views a uniform background, the pixels in the image have different values; this is called detector nonuniformity. The objective with this task is to design a means to perform nonuniformity correction (NUC). The spectral responsivity is shown here for interest only; it is not required in the execution of the task.
[Plot: InSb detector responsivity for the four elements, responsivity [A/W] (0 to 2.0) versus wavelength [µm] (0 to 6).]

Each detector element size is 1 mm × 1 mm. The detector is located 200 mm from a blackbody. The blackbody aperture (size) is 1 × 10−6 m2. Detector current measurements are performed at three different blackbody temperatures. The 0–6-µm inband powers used in the measurements are as follows:

Temperature        700 K      900 K      1000 K
Inband power Φ     0.0560 µW  0.2016 µW  0.3331 µW

The measured currents in the four detector elements are:

Element   i700 [µA]   i900 [µA]   i1000 [µA]
D1        0.0642      0.1954      0.2991
D2        0.0655      0.2324      0.3763
D3        0.0571      0.1903      0.3012
D4        0.0636      0.2072      0.3253

6.6.1 Plot the detector current versus input power for the four elements.
Observe that there are four different response lines. [4]
6.6.2 A vertical line on the graph represents the optical power from a
uniform source. Predict what the detector current values will be
for an optical power of 0.25 µW. [2]
6.6.3 Comment on how the signals from the four elements can be compensated (by external electronics) such that the sensor output signal will be the same for all four detectors, irrespective of the input power. [2]
6.6.4 Using a block diagram only, do a conceptual design of a circuit
that will perform a nonuniformity correction (NUC) for each of
the four elements in this sensor. [4]

6.7 A lens with focal length 120 mm and diameter of 50 mm is located in front of a square detector, with the detector in the focal plane
of the lens. The lens barrel totally encloses the rear of the sensor
(from the lens, backwards, all around the detector). The square
InSb detector has an area of 0.1 cm2 . The sensor is placed in the
center of a cubic oven with wall dimensions of 3 m, pointing to the
center of one of the walls. The oven walls are heated to 1300 K.
The sensor’s InSb detector responsivity can be modeled by Equa-
tion (D.5) with (λc = 6 µm, k = 20, a = 3.5 and n = 4.3).
[Plot: InSb detector responsivity, responsivity [A/W] (0 to 1.6) versus wavelength [µm] (0 to 6).]

6.7.1 Determine the instantaneous solid FOV of the detector through the lens, the projected solid angle of the lens from the detector's
position, and the projected solid angle of the lens barrel from the
detector’s position. [3]
6.7.2 Calculate the current through the detector, assuming that the lens
barrel temperature is maintained at 300 K. [2]

6.8 A square silicon detector is pointed toward the sun (normal vector
is directed to the sun). The detector has an area of 1 cm2 . You may
assume unity atmospheric transmittance.
The sensor silicon detector responsivity can be modeled by Equa-
tion (D.5) with (λc = 1.15 µm, k = 8, a = 3.5 and n = 4.3).
The sensor filter transmittance can be modeled by Equation (D.4)
with (τs = 0, τp = 1, λc = 0.8, Δλ = 0.21 and s = 100).
[Plot: detector responsivity [A/W] and filter spectral response (filter peak 1.0) versus wavelength [µm] (0.3 to 1.2).]

6.8.1 Write a mathematical formulation of the problem; include flux transfer, detector response, etc. Describe all elements in the model
and provide the relevant numerical values for all parameters. [5]
6.8.2 Implement the model in a computer program. Describe the struc-
ture of the model and provide all numeric values (spectral and
scalar). Use a wavelength increment of 0.01 µm. [5]
6.8.3 Calculate the current through the detector when it is viewing the
sun, if no optical filter is present. [2]
6.8.4 Locate a lens with focal length of 100 mm and diameter of 35 mm
in front of the detector, such that the detector is located in the focal
plane of the lens (still no filter). Calculate the detector current for
this condition, with the sensor facing the sun. [3]
6.8.5 Calculate the current through the detector plus lens when it is
viewing the sun, with the optical filter present. [2]
6.8.6 The detector (without the lens and with no filter) is placed in an
oven. The oven’s inside dimensions are 0.5 m along all sides. Cal-
culate the detector current for oven temperatures of 1300 K and
2800 K. Assume an oven wall emissivity of unity. [4]
6.8.7 The detector with the lens (but still no filter) is placed in the oven.
Determine the detector currents for the two temperatures. Com-
pare the detector current magnitudes with/without the lens. De-
scribe the differences. [2]
6.8.8 Repeat the above four problems (facing the sun, with/without
lens, facing the oven wall, with/without lens), but this time in-
clude the filter shown above in the same graph as the detector
responsivity. [2]
6.8.9 Consider all of the results from the previous calculations. What
conclusions can be drawn from these results? This is not a trick
question, just think creatively and list all your observations. [4]
Evaluate your calculation method; was it suitable for these prob-
lems? [1]
Chapter 7
Radiometry Techniques

When you can measure what you are speaking about,
and express it in numbers, you know something about it;
but when you cannot measure it, when you cannot express it in numbers,
your knowledge is of a meagre and unsatisfactory kind.
William Thomson, Lord Kelvin

7.1 Performance Measures

Performance measures provide a quantified evaluation of system performance. These measures often indicate aggregate performance, covering
several lower-level parameters in a single measure. Depending on the sen-
sor application, one or more of these expressions provide an optimum
sensor performance figure. For example, the sensitivity of an optical com-
munication receiver is described in terms of the SNR at the sensor,
whereas the performance of a thermal camera is expressed in terms of the
detectable temperature difference. The motivation for using performance
measures is given in Section 1.2.8. Examples of the application of perfor-
mance measures are given in Chapter 9.

7.1.1 Role of performance measures

Performance measures or 'figures of merit' are used to develop an understanding of a system's performance. These are critical tools to be used
standing of a system’s performance. These are critical tools to be used
during system design and for technical performance measurement during
development phases. The real value of performance measures lies not so much
in the absolute value of the measure but in relative values when comparing two
situations. Performance measurement may be used as follows:

1. During the design phase, choices can be made by relative comparison of the predicted performance for different design options. The option with the highest predicted performance may then be further developed.

2. Performance measures can be used to optimize designs by trading off performance against design complexity.

3. During later development phases, performance measures can be used to monitor progress in technical performance against the planned progress.

4. Performance measures may be used to determine the sensitivity of a design to variations in component characteristics.

5. Performance measures may be used to determine the safety margins in a design.

7.1.2 General definitions

In the realist view, the time domain is a dimension with irreversible and
sequential flow or movement from past, through present, into future. Time
is described in a single real number relative to some fiducial epoch (start-
ing event). When modeling time in mathematical terminology, time may
appear to be reversible.
In the Cartesian view, the space domain is a three-dimensional ex-
tent in which objects have freedom of movement and direction. Space is
commonly modeled in a coordinate system with the convention of using
( x, y, z) to denote the three Cartesian dimensions. The origin of this coor-
dinate system is uniquely defined in different problem spaces.
Foreground focus attaches positive importance or priority to an event
or signal, in the context of its application. Foreground is the primary focus
of an investigation.
Background focus attaches a neutral or negative priority to an event
or signal, in the context of its application. Background interferes with the
analysis of foreground. Background is everything that is not foreground.
Signal is a foreground continuous time or discrete function in time or
space domains that conveys information.
Noise is a background continuous time or discrete function in time
or space that interferes with the foreground function or event. Noise has
a connection with unfolding time but can also be expressed in the space
domain. There is a strong connotation that noise has mostly random be-
havior, but randomness is not a mathematical requirement (e.g., electro-
magnetic interference is regarded as a noise but is not random).
Clutter is a special type of noise, often with a connection to the geometry of the real world. Clutter is normally used in the context of image processing or radar processing, being caused by physical objects in the problem spatial extent. Clutter is mostly structured noise, with little or no randomness. The (spatial/geometric) structure in clutter is what sets it apart from noise.
Root-mean-square (rms) is often used as a measure of noise. The rms of a function (normally a time signal) f(t), with duration T, is defined as
$$f_{\rm rms} = \lim_{T\to\infty} \sqrt{\frac{1}{T}\int_0^T [f(t)]^2\, dt}. \qquad (7.1)$$
The peak value is often used as an indication of instantaneous signal strength. A signal local peak or maximum is the value f(x*) over the range ε, where f(x*) ≥ f(x) when |x − x*| < ε.

7.1.3 Commonly used performance measures

The measures listed below are widely used to quantify the performance
of linear (or near-linear) systems. The list is not complete because many
designs have their own unique performance measures. Designers must
identify the relevant measures for their particular designs. Performance
prediction and verification of nonlinear systems are not easily performed
by simple measures such as these; nonlinear system performance is gen-
erally determined by using scenario-based simulation models or the real
hardware item.

Signal-to-noise ratio (SNR): SNR is the ratio of the peak signal value to
the rms value of the noise. Higher SNR values imply better system
performance. SNR is unitless.
This measure depends on characteristics of the source, medium, and
sensor. The noise includes sensor noise or photon noise in the source.
It is normally easy to determine the SNR, both by measurement and
calculation. The SNR is improved by optimizing the bandwidth, in-
creasing source intensity, or decreasing sensor noise.
Signal-to-clutter ratio (SCR): SCR is the ratio of the peak signal value
to the peak clutter value. Higher SCRs imply better system perfor-
mance. SCR is unitless.
This measure depends on the characteristics of the surroundings of
the foreground target object. The clutter could be terrain objects re-
flecting ambient light or infrared radiation from hot terrain objects
such as rocks. The medium and sensor have only limited effect on
the SCR. The SCR is often measured because it is difficult to calcu-
late in analytical form. The SCR is improved by increasing source
intensity or optimizing the sensor FOV.
Noise equivalent power (NEP): NEP is the optical signal power required
to create an electronic signal such that the SNR is one. Lower values
of NEP imply a better performance. NEP has units of [W].
NEP requires the quantification and exact definition of noise and
signal. NEP considers the total wideband noise (not power spectral
density), and hence the bandwidth must be specified. Likewise, the
properties of the source and medium determine the signal and must
be specified.
NEP can be defined for a complete sensor or just a detector on its
own. NEP is mostly calculated because it is difficult to measure di-
rectly. The NEP is improved by decreasing the various noise sources.
See also Section 6.7.

Detectivity (D): Detectivity is the reciprocal of NEP. Detectivity is the SNR for a 1-watt input signal (normally a very big number!). Detectivity has units of [W⁻¹].

Specific detectivity, ‘Dee-star’ (D∗): D∗ is the detectivity normalized with respect to noise bandwidth and detector area. The D∗ describes the detector performance in fundamental or absolute terms. It is commonly quoted in product data sheets. D∗ is normally specified in units of [cm·√Hz/W].
Wideband, as well as spectral, D∗ and NEP are related by
$$\mathrm{NEP} = \frac{\sqrt{\Delta f\, A_d}}{D^*}, \qquad (7.2)$$
where Δf is the noise equivalent bandwidth, and Ad is the detector area.
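As a numerical illustration of Equation (7.2), the short Python sketch below converts a quoted D∗ value to NEP and an SNR. The detector area, noise bandwidth, D∗ value, and signal level are assumed values chosen only to show the arithmetic, not data from this chapter.

```python
# Sketch of Equation (7.2): NEP from D*, and the resulting SNR.
# All numeric values below are assumed for illustration only.
import math

def nep_from_dstar(dstar, area_cm2, delta_f):
    """NEP [W] from D* [cm.sqrt(Hz)/W], detector area [cm^2], and
    noise equivalent bandwidth [Hz]."""
    return math.sqrt(delta_f * area_cm2) / dstar

# 0.5 mm x 0.5 mm detector (2.5e-3 cm^2), 10 kHz bandwidth,
# D* = 1e11 cm.sqrt(Hz)/W (a representative order of magnitude).
nep = nep_from_dstar(1e11, 2.5e-3, 1e4)
print(f"NEP = {nep:.2e} W")                      # 5.00e-11 W
print(f"SNR for a 1-nW signal: {1e-9/nep:.0f}")  # 20
```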

Noise equivalent irradiance (NEE): NEE is the optical irradiance required to create an electronic signal such that the SNR is one. Lower values of NEE imply a better performance. NEE is an expression of sensor noise but is affected by the source properties and the optical medium (atmosphere). NEE has units of [W/m2].
The NEE is the NEP (see above) scaled by the effective sensor aperture area and filter/optics transmittance. See also Section 6.7.

Noise equivalent radiance (NEL): NEL is the radiance (from an extended source) required to create an electronic signal such that the SNR is one. Lower values of NEL imply a better performance. NEL is an expression of sensor noise but is affected by the source properties and the optical medium (atmosphere). NEL has units of [W/(m2·sr)]. See also Section 6.7.

Noise equivalent exitance (NEM): NEM is the exitance (from an extended source) required to create an electronic signal such that the SNR is one. Lower values of NEM imply a better performance. NEM is an expression of sensor noise but is affected by the source properties and the optical medium (atmosphere). NEM has units of [W/m2]. See also Section 6.7.

Noise equivalent temperature difference (NETD or NEΔT): NETD is the temperature difference (from an extended source) required to create a SNR of one. A lower NETD implies better performance. NETD is an expression of sensor noise but is affected by the source properties and the optical medium (atmosphere). NETD is derived from instrument measurements; the human observer is not part of this measurement. It is usually measured at a background temperature of 300 K. NETD has units of [K]. See also Sections 6.7 and 9.5.4.

Minimum resolvable temperature (MRT): MRT is the smallest temperature difference between two (extended source) blackbodies (reference blackbody at 300 K), arranged in a standard bar-chart test pattern with 4 line pairs, that can be observed by humans. MRT has units of [K]. MRT is a performance figure that includes the human observer as part of the measurement.
For design purposes the human observer is sometimes modeled with a set of equations and parameters. 1–4 There is not always agreement on these models, and care must be taken when comparing MRT values quoted by different sources.

Minimum detectable temperature (MDT): MDT is the smallest temperature difference between two blackbodies, arranged as a rectangular plate against a background at 300 K, that can be observed by humans. MDT has units of [K]. The same comments as for MRT apply.

Noise equivalent reflectance (NER or NEΔρ): NER is the change in reflectance required to create an electronic signal such that the SNR is one. Lower values of NER imply a better performance. NER is unitless. The spectral band and the source spectral radiance used to illuminate the surface must be specified.

Probability of detection (Pd): For a signal corrupted by noise, Pd is the probability that the signal will exceed a (fixed or variable) threshold. A higher probability of detection implies better performance. Pd is unitless, with 0 ≤ Pd ≤ 1. Probability of detection is a function of three quantities: signal magnitude, noise/clutter magnitude, and threshold.
It is relatively difficult to measure Pd as it approaches zero or unity because the statistical events become relatively rare. Pd is improved by lowering the threshold relative to the noise and clutter magnitudes, but a lower threshold results in an increased false alarm rate. Simplified noise calculations can be done, but accurate calculation of Pd is quite difficult. 2,5 See also Section 7.8.

Probability of false detection (Pn): For a signal corrupted by noise, Pn is the probability that the noise will exceed a (fixed or variable) threshold when no signal is present. A lower probability of false detection implies better performance. Pn is unitless. Note that 0 ≤ Pn ≤ 1; Pn is usually small in a well-designed system.
The probability of false detection is a function of the noise level and threshold setting. Pn is improved by increasing the threshold relative to the noise and clutter magnitudes. See also Section 7.8.

False alarm rate (FAR): For a signal corrupted by noise, the false alarm rate is the rate at which the threshold is exceeded in the absence of a signal. FAR has units of [1/s]. The false alarm rate is related to Pn by
$$\mathrm{FAR} = \frac{N P_n}{t_d}, \qquad (7.3)$$
where N is the number of detectors, and td is the time on target.

Point spread function (PSF): PSF is the impulse response of an optical system: the flux distribution in the image plane of a point source object. 6 The PSF results from lens aberrations and diffraction. It is therefore a property of the optics, but some investigations could include medium effects such as turbulence. Smaller PSFs imply better performance. In imaging systems the PSF is an indication of the sharpness of the image, and in nonimaging systems the PSF is an indication of the amount of power that will fall onto a detector. The PSF can be accurately calculated by modern optical-design computer programs. See also Section 6.3.7.

Optical transfer function (OTF): The OTF is the Fourier transform of the optical PSF. It is a complex, two-dimensional function. Bigger volumes under the OTF generally imply better performance.

Modulation transfer function (MTF): The MTF is the absolute value of the Fourier transform of the optical PSF. It is a real, two-dimensional function. Bigger volumes under the MTF generally imply better performance. The MTF can be considered as the ‘spatial frequency response’ of the lens, i.e., the lens’ ability to convey the high-frequency spatial information in an image. MTF is very commonly used by optical and electro-optical system designers. Care should be exercised when interpreting only the tangential and sagittal sections through the MTF of a lens because incorrect conclusions can easily be drawn.

7.2 Normalization

Normalization 7,8 is the process whereby a function (e.g., a spectrally varying parameter) is reduced to a set of simple numbers, such as ‘effective bandwidth’ or ‘average responsivity.’ Normalization removes information from the initial data set; it should only be used if the user understands the process whereby the normalization was achieved.

7.2.1 Solid angle spatial normalization

Spatial normalization has already been encountered when considering spatial solid angles and view factors in Sections 2.5 and 2.8. In effect, the arbitrarily complex spatial distribution of a surface is reduced to a simple number.

7.2.2 Effective value normalization

The effective value of a variable is given by
$$F_{\rm eff} = \frac{\int_0^\infty F\, G\, d\lambda}{\int_0^\infty G\, d\lambda}, \qquad (7.4)$$
where F is the variable in question, and G is a weighting function. Note that the effective value of F depends on the shapes of both F and G; the effective value of F thus calculated therefore applies only to the specific weighting function G.
Effective value normalization is discussed in Section 4.4, in the calcu-
lation of effective transmittance. The sensitivity of effective transmittance
to CO2 spectral transmittance was demonstrated.
Effective value normalization also appears in the calculation of detector wideband responsivity. The detector with area A1 is illuminated with unfiltered thermal body radiation with area A0 at a temperature Tc, usually 500 K or 1000 K. The wideband responsivity is now defined as the ratio of the signal current [Equation (6.16), with Zt = 1 and k = 1] to the total irradiance onto the detector:
$$\mathcal{R}_I = \frac{i_s}{\Phi_s} = \frac{\frac{A_0 A_1}{R_{01}^2}\int_0^\infty \epsilon_{0\lambda} L_{0\lambda}(T_c)\, \tau_{a\lambda}\, \mathcal{R}_\lambda\, d\lambda}{\frac{A_0 A_1}{R_{01}^2}\int_0^\infty \epsilon_{0\lambda} L_{0\lambda}(T_c)\, \tau_{a\lambda}\, d\lambda} = \frac{\int_0^\infty \epsilon_{0\lambda} L_{0\lambda}(T_c)\, \tau_{a\lambda}\, \mathcal{R}_\lambda\, d\lambda}{\int_0^\infty \epsilon_{0\lambda} L_{0\lambda}(T_c)\, \tau_{a\lambda}\, d\lambda}. \qquad (7.5)$$

The effective responsivity depends on the spectral shapes of both the detector and the apparent source spectral radiance (as observed through the atmosphere). The responsivity value thus calculated applies only to the source at the particular temperature used in the calculation.
To illustrate the sensitivity of Equation (7.5) to the source temperature, a simple calculation was performed. A spectral detector responsivity Rλ was calculated using Equation (D.5) with k = 1, a = 1, n = 15, and λc = 6.1. Effective responsivity values were calculated for thermal radiator sources at 900 K and 1000 K. The effective responsivity at 900 K is 0.56, whereas the effective responsivity at 1000 K is 0.54. The two effective responsivity values differ by 4% for a temperature difference of only 10%. Suppose now that this detector is used to detect a source of 500 K; what good is any of these responsivity values?
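The following Python sketch repeats the effective-value calculation of Equations (7.4) and (7.5). The detector curve below is an assumed stand-in for the Equation (D.5) model (a responsivity rising linearly with wavelength and cutting off sharply near 6.1 µm), so the printed values illustrate the trend rather than reproduce the exact 0.56 and 0.54 quoted above.

```python
# Effective responsivity (Equation (7.5)) for two source temperatures.
# The responsivity model is an assumed stand-in, not Equation (D.5) itself.
import numpy as np

wl = np.linspace(0.2, 20.0, 2000)        # wavelength [um]

def planck(wl_um, T):
    """Spectral radiance [W/(m2.sr.um)] of a blackbody at T [K]."""
    c1, c2 = 1.19104e8, 14387.8          # 2hc^2 and hc/k in um-based units
    return c1 / (wl_um**5 * (np.exp(c2 / (wl_um * T)) - 1.0))

def resp(wl_um, a=1.0, n=15.0, lc=6.1):
    """Assumed detector shape: linear rise, sharp cutoff at lc [um]."""
    x = wl_um / lc
    return x**a * np.exp(-x**n)

for T in (900.0, 1000.0):
    G = planck(wl, T)                    # weighting function in Eq. (7.4)
    Feff = np.trapz(resp(wl) * G, wl) / np.trapz(G, wl)
    print(f"Effective responsivity for a {T:.0f}-K source: {Feff:.3f}")
```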

7.2.3 Peak normalization

Peak normalization calculates
$$\Delta x = \frac{1}{\max(f)} \int_0^\infty f(x)\, dx, \qquad (7.6)$$
where f(x) is a function, and max(f) is the maximum value of the function. One example of peak normalization is the noise equivalent bandwidth of an electronic filter (see Section 5.3.13). Figure 7.1 shows the filter gain and the ratio between the noise bandwidth and the −3 dB electronic bandwidth for a class of filters known as Butterworth filters.
The noise equivalent bandwidth of an ideal switched integrator is given by
$$\Delta f = \int_0^\infty \left( \frac{\sin \pi T f}{\pi T f} \right)^2 df \qquad (7.7)$$
$$= \frac{1}{2T}, \qquad (7.8)$$
where T is the integration time of the integrator. The −3 dB bandwidth of the integrator is f−3dB = 1/(2.273 T). The ratio between the noise equivalent bandwidth and the −3 dB bandwidth is therefore 2.273/2 ≈ 1.136.
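Equations (7.7) and (7.8) are easily verified numerically; the sketch below integrates the integrator's sinc-squared response over frequency and compares the result with 1/(2T).

```python
# Numerical check of Equations (7.7)-(7.8) for a switched integrator.
import numpy as np

T = 1e-3                                        # integration time [s]
f = np.linspace(1e-3, 100.0 / T, 2_000_000)     # frequency grid [Hz]
H2 = (np.sin(np.pi * T * f) / (np.pi * T * f))**2
print(np.trapz(H2, f))                          # ~ 499.5 Hz (truncated tail)
print(1.0 / (2.0 * T))                          # 500.0 Hz exactly
```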
[Figure: filter gain versus normalized frequency for Butterworth filters of one to four poles. Tabulated ratio Δf/f−3dB: 1 pole, 1.57; 2 poles, 1.11; 3 poles, 1.05; 4 poles, 1.025.]
Figure 7.1 Butterworth filter noise equivalent bandwidth for different filter orders.

7.2.4 Weighted mapping

The weighted value of a variable is given by
$$F_G = \int_0^\infty F\, G\, d\lambda, \qquad (7.9)$$
where F is the variable in question, and G is a weighting function. Note that the weighted value of F depends on the shapes of both F and G. The result is a scalar expression of F after it is mapped into a new space by G. Essentially, the intent with weighted mapping is to express the relevancy of one function in terms of the space defined by another function. If there is little overlap between the vectors, the result will be small.
Section 2.10.4 introduces weighted mapping of spectral radiance to
color coordinates. This particular weighting accounts for the spectral re-
sponse of a sensor — in this case, the human eye. It is evident that
weighted mapping provides a technique to weigh or ‘measure’ a spectral
variable for relevancy for a particular sensor’s spectral response.
Equation (6.19) defined the ‘inband’ irradiance as the spectral irradiance weighted by the sensor’s spectral response
$$E_S = \frac{k\, dA_0 \cos\theta_0}{R_{01}^2} \int_{\lambda_1}^{\lambda_2} (\epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda})(S_\lambda)\, d\lambda,$$
where ϵ0λ L0λ τaλ is the source radiance after transmission through the atmosphere, and S is the weighting function. In this case, the apparent source radiance is re-mapped to the sensor’s system response S to represent the radiance as observable by the sensor.
7.3 Spectral Mismatch

Spectral calculations are performed with variants of Equations (2.33) and (6.16), repeated here for convenience:
$$v_S = \frac{k\, \mathcal{R}\, Z_t\, dA_0 \cos\theta_0\, A_1}{R_{01}^2} \int_{\lambda_1}^{\lambda_2} \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} S_\lambda\, d\lambda, \qquad (7.10)$$

where the spectral system response is given by Sλ. In practice, the system response is defined by filters, optical elements, and the detector. The manufactured devices have statistical spread in transmittance curves. Some components may extend a little toward the shorter wavelengths, whereas other components extend a little toward the longer wavelengths, even though all components are within specification. Consider now an acceptance test setup that is used to evaluate a batch of sensors. For a given source signal, the sensor must provide a certain minimum signal level to pass the test.
If the sensors are evaluated with a source heavily weighted toward, say, the longer wavelengths in the sensor spectral band, the effects of filter variations at the longer wavelengths are accentuated, whereas the effects of variations at the shorter wavelengths are less accentuated. If the sensor is to be used operationally with a different source spectrum, the acceptance test could be considered invalid. The test would be invalid because the test source spectrum over the full sensor spectral band does not represent the actual operational target spectrum. This scenario could occur for very sensitive sensors, requiring very low signal levels. The requirement for such evaluations is that a low signal radiance is required but also at the correct spectral shape over the full sensor band. 9,10
For example, assume a sensor that must be evaluated in the 3–5-µm
spectral band at an irradiance level of 0.1 µW/m2 with a source spectrum
typical of a 700-K blackbody. If the test equipment employs industry-
standard sources and collimators, the achievable irradiance levels are some
orders of magnitude too high. One way out of this situation is to lower the
source temperature to such a value (say to 400 K) that the required irra-
diance is achieved. At this low temperature, the source spectrum is very
different from the required 700-K source spectrum; it would be strongly
weighted toward the longer wavelengths. Evaluation tests on this test
setup will not be representative of the sensor’s response to a 700-K source.
7.4 Spectral Convolution

Section 6.6 describes how to calculate the signal that a given sensor with response S would receive from a source L, through some medium with transmittance τ01λ. Suppose that this sensor has a spectral filter τf with a narrow (but nonzero) spectral width. This filter is very narrow compared with its central wavelength, say Δλ = 0.01λc. Such radiometers are used to determine the spectral radiance of sources or to measure the spectral transmittance of the atmosphere. The apparent irradiance measured by such a system can be written [from Equation (6.13)]
$$E_{\lambda_c} = k \int_0^\infty \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} \tau_{f\lambda} S_\lambda\, d\lambda, \qquad (7.11)$$
where k accounts for the geometrical factors such as source area, orientation, and distance. This equation can be written as
$$E_{\lambda_c} = k \int_{\lambda_c - \Delta\lambda/2}^{\lambda_c + \Delta\lambda/2} \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} \tau_{f\lambda} S_\lambda\, d\lambda. \qquad (7.12)$$
By change of variable λ = λc − x,
$$E_{\lambda_c} = k \int_{-\Delta\lambda/2}^{+\Delta\lambda/2} \epsilon_{0x} L_{0x} \tau_{ax}\, \tau_f(\lambda_c - x)\, S_x\, dx. \qquad (7.13)$$

These equations show very clearly that the irradiance measured with the filter centered around wavelength λc includes source energy from λc − Δλ/2 to λc + Δλ/2. Apart from the spectral selection, the filter has an additional effect by smoothing the spectrum being observed because the filter has a nonzero spectral width.
Equation (7.13) is called a convolution integral because it describes the convolution between the product (ϵ0λc L0λc τaλc Sλc) and τf. In linear systems terminology, the observed spectral source radiance is being convolved with the filter spectral transmittance. To investigate the effects of this convolution consider the two cases: (1) the observed source has little variation over the filter passband, and (2) the observed source varies significantly over the filter passband.
If the product (ϵ0λc L0λc τaλc Sλc) is more or less constant over the filter passband Δλ, Equation (7.13) can be written to show that the convolution has little effect other than some insignificant amount of smoothing:
$$E_{\lambda_c} = k\, (\epsilon_{0\lambda_c} L_{0\lambda_c} \tau_{a\lambda_c} S_{\lambda_c}) \int_{-\Delta\lambda/2}^{+\Delta\lambda/2} \tau_f(\lambda_c - x)\, dx \qquad (7.14)$$
$$\approx k\, \epsilon_{0\lambda_c} L_{0\lambda_c} \tau_{a\lambda_c} S_{\lambda_c}\, \tau_f\, \Delta x. \qquad (7.15)$$
[Figure: atmospheric transmittance versus wavelength over 0.85–1.15 µm, showing the high-resolution structure smoothed by the two filter widths.]
Figure 7.2 Atmospheric transmittance convolved with 10 cm−1 and 300 cm−1 square windows.

If the product (ϵλ Lλ τaλ Sλ) varies significantly over the filter passband Δλ, Equation (7.13) cannot be simplified. In this case, the convolution attenuates and smears out the finer detail in the spectral information. If the actual spectral line is very narrow, the measured line will approximate the filter resolution and will be totally erroneous unless the filter spectral smear effect is compensated by deconvolution.
The effect is best illustrated in Figure 7.2. Modtran™ was used to calculate the spectral transmittance in the 0.85–1.1-µm spectral range. The calculation was performed for a 1-km path length at sea level in a Tropical atmosphere at 27 °C, 75 %RH, 1015 mB, 23-km visibility. The transmittance data so obtained were convolved with a square filter response. The spectral filter widths are 10 cm−1 and 300 cm−1.
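A convolution of this kind is a one-line operation once the spectrum is on a uniform wavenumber grid. The sketch below smooths a high-resolution transmittance spectrum with unit-area square windows of 10 cm−1 and 300 cm−1; the tau_hires array is a synthetic stand-in for the Modtran™ output, not real atmospheric data.

```python
# Smoothing a high-resolution transmittance with square spectral windows.
import numpy as np

dnu = 1.0                                    # grid spacing [cm^-1]
nu = np.arange(9000.0, 11800.0, dnu)         # roughly 0.85-1.1 um
rng = np.random.default_rng(1)
tau_hires = np.clip(1.0 - np.abs(rng.normal(0.0, 0.2, nu.size)), 0.0, 1.0)

def smooth(tau, width_cm1, dnu):
    """Convolve with a unit-area square window of the given width."""
    n = max(1, int(round(width_cm1 / dnu)))
    return np.convolve(tau, np.ones(n) / n, mode='same')

tau_10 = smooth(tau_hires, 10.0, dnu)        # 10 cm^-1 filter
tau_300 = smooth(tau_hires, 300.0, dnu)      # 300 cm^-1 filter
```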
As an example of one effect of the finite filter width, consider the case
where a spectral source is observed with a spectral radiometer. The es-
timated source spectrum (a) and measured spectral curve (b) are shown
in Figure 7.3. Suppose Modtran™ was used to calculate the spectral
transmittance (c) of the atmosphere. In order to compensate for the at-
mospheric attenuation, the measured irradiance is divided by the trans-
mittance curve, resulting in the calculated spectrum (d). The sequence on
the left in Figure 7.3 shows what happens if the atmospheric transmittance
was calculated at a much-higher resolution than the spectral measurement
filter width. In the sequence on the right in Figure 7.3, the atmospheric
transmittance (c) was obtained by convolving the high-resolution Mod-
tran™ data with the filter response before dividing the measured source
irradiance with the atmospheric transmittance.
In order to prevent serious errors in narrow-band spectral calculations, all of the spectral variables must first be convolved to the same spectral resolution before spectral multiplications or divisions are performed. If the spectral variables are not all convolved to the same basic resolution, one can obtain erroneous spectra or emissivity values, or transmittance values exceeding unity.

[Figure: two four-panel sequences showing (a) the estimated source spectrum, (b) the medium-resolution measurement, (c) the atmospheric transmittance (high resolution in the left sequence, convolved to medium resolution in the right sequence), and (d) the recovered source spectrum after division by the atmospheric transmittance.]
Figure 7.3 Atmospheric correction of radiometer measurements in various stages of processing.

7.5 The Range Equation

It is frequently necessary to determine the operational detection distance of a source and sensor combination. The problem is usually stated as follows: “What operating detection range can be achieved with a given source intensity, atmospheric attenuation, and sensor sensitivity?” The objective is to solve for R in
$$E = \frac{I\, \tau_a(R)}{R^2}. \qquad (7.16)$$
Rewrite this equation as
$$\frac{I}{E} = \frac{R^2}{\tau_a(R)}, \qquad (7.17)$$
where the left side is a constant given by the source intensity I and threshold irradiance E, whereas the right side describes the range-related terms. In most cases the solution is not simple and requires an iterative numerical solution. 11,12 A numerical solution is shown in Section D.5.6.
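For a simple Bouguer-law atmosphere, τa(R) = e^(−γR), the function R²/τa(R) is monotonic in R, and Equation (7.17) can be solved by bisection. The sketch below is not the Section D.5.6 code; the intensity, threshold irradiance, and extinction coefficient are illustrative assumptions.

```python
# Solving Equation (7.17), I/E = R^2/tau(R), with tau(R) = exp(-gamma R).
import math

def detection_range(I, E, gamma, R_max=1000.0, tol=1e-6):
    """Bisection on R^2 exp(gamma R) = I/E; R and gamma in km units."""
    lo, hi = 0.0, R_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * mid * math.exp(gamma * mid) < I / E:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# I = 1000 W/sr, E = 1e-8 W/m2 = 1e-2 W/km2, gamma = 0.2 km^-1.
print(f"{detection_range(1000.0, 1e-2, 0.2):.2f} km")   # about 25 km
```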
The range equation can also be solved graphically by plotting the ratio I/E for various atmospheric conditions, as a function of range. This is shown on the right side in Figure 7.4. The vertical lines are for various I/E values. The intercepts of the vertical I/E lines with the R²/τa(R) curves indicate the detection ranges.

[Figure: (a) atmospheric transmittance versus range and (b) range versus I/E, for extinction coefficients γ from 0.08 km−1 to 0.50 km−1.]
Figure 7.4 Determining range from the range equation.

7.6 Pixel Irradiance in an Image

An imaging system, here referred to as the observer, forms a two-dimensional representation of the scene called an image. The image consists of a number of smaller picture elements called pixels. The pixel magnitude is proportional to the radiation received within the solid angle subtended by each pixel. The atmosphere attenuates the scene radiance and adds path radiance. The pixel signal magnitude (irradiance or detector voltage) is a function of object size, range, and the relative background contribution. In this section an analytical description of the relationship between pixel signal magnitude and range to a given object, against a given background, is derived.
If the solid angle subtended by the object is smaller than the pixel solid angle, the object is said to be unresolved. If the solid angle subtended by the object is significantly bigger than the pixel solid angle, the object is said to be resolved against an extended target. For resolved objects, the atmosphere degrades the contrast in the scene, whereas for unresolved objects the pixel signal magnitude is reduced with increasing range, as well as degraded by the atmosphere. This analysis ignores sky- and ground-induced path radiance. As discussed in Section 4.2.2, this means that the current derivation is only valid for horizontal paths.
Consider first a target object at some range RT1 from the observer, with projected area A′T = AT cos θT (limited to the pixel FOV), and radiance LT. The atmospheric transmittance between the target object and observer is τT. Consider second the background, around or behind the target object, with projected area A′B = AB cos θB (limited to the pixel FOV), at range RB1; the background has a radiance LB, with atmospheric transmittance to the observer of τB. The background and the object may not be at the same distance. The atmospheric path radiance in the direction of the object is LP. The observer has an instrument with optical aperture A1, and it measures irradiance E1 = dΦ/dA1. The pixel irradiance consists of four components: the object irradiance, path irradiance in front of the object, background irradiance, and path irradiance in front of the background:
$$dE_1 = \frac{L_T A'_T \tau_T}{R_{T1}^2} + \frac{L_B A'_B \tau_B}{R_{B1}^2} + \frac{L_{Pt} A'_T}{R_{T1}^2} + \frac{L_{Pb} A'_B}{R_{B1}^2} \qquad (7.18)$$
$$= L_T \Omega_T \tau_T + L_B \Omega_B \tau_B + L_{Pt} \Omega_T + L_{Pb} \Omega_B, \qquad (7.19)$$
where ΩT = A′T/R²T1 is the object projected solid angle (limited to the pixel FOV), ΩB = A′B/R²B1 is the background projected solid angle (limited to the pixel FOV), LPb is the path radiance for the fraction of the pixel filled by the background, and LPt is the path radiance for the fraction of the pixel filled by the target object. In the analysis shown here, LA is the atmospheric radiance (as applicable to path radiance, see Section 4.2.2). The projected pixel footprint area AP cos θP = ΩP R² is defined as the projected area that the pixel subtends at a given range R. This scenario is further investigated in Sections 9.2 and D.5.3.
The pixel magnitude as a function of object-to-observer range is shown in Figure 7.5. Depending on the object size and range, Equation (7.18) can be recast into several different forms. At ranges corresponding to region I the object is resolved (object larger than the pixel footprint), whereas at ranges corresponding to regions II to IV the object is unresolved (object smaller than the pixel footprint).

Region I
The target completely fills the pixel FOV, and no background is visible. For a resolved object AP cos θP ≤ AT cos θT ⇒ A′B = 0, then AP cos θP/R² = ΩP and
$$E_1 = \frac{L_T A_P \cos\theta_P\, \tau_T}{R_{T1}^2} + L_{Pt} \Omega_P \qquad (7.20)$$
$$= (L_T \tau_T + L_{Pt}) \Omega_P \approx [L_T e^{-\gamma R_{T1}} + L_A (1 - e^{-\gamma R_{T1}})] \Omega_P. \qquad (7.21)$$
For a resolved object the pixel magnitude is given by the object radiance multiplied by the atmospheric transmittance term. Because the atmospheric transmittance is a function of range, the object magnitude will decrease with e^(−γR), whereas the path radiance increases with (1 − e^(−γR)) [see Equation (4.10)].
[Figure: sketches of the pixel footprint AP and target area AT in the source plane for the four regions (AT > AP; AT < AP; AT ≪ AP; AT = 0), above a log-log plot of pixel irradiance versus distance showing the target, background, path, and total contributions.]
Figure 7.5 Pixel magnitude as a function of object-to-observer distance, as affected by the atmosphere.

In this region, the object signal is only affected by atmospheric effects because the object is large and observed at (relatively) close range. When moving away from the object (i.e., for increasing R), the FOV is constantly filled by new target area; hence the 1/R² free-space loss is offset by observing more of the target area. As the object range increases, the atmosphere attenuates the object radiance.

Region II
The target partially fills the pixel FOV. Then, AP cos θP > AT cos θT ⇒ A′B ≠ 0. Now, ΩT = AT cos θT/R²T1, so that with increasing range, ΩT decreases, and the background starts filling around the target in the pixel FOV. Define the solid angle subtended by the background as ΩB = ΩP − ΩT = ΩP − A′T/R²T1; then
$$E_1 = L_T \Omega_T \tau_T + L_B \Omega_B \tau_B + L_{Pt} \Omega_T + L_{Pb} \Omega_B \qquad (7.22)$$
$$= [L_T e^{-\gamma R_{T1}} + L_A(1 - e^{-\gamma R_{T1}})]\, A'_T/R_{T1}^2 + [L_B e^{-\gamma R_{B1}} + L_A(1 - e^{-\gamma R_{B1}})](\Omega_P - A'_T/R_{T1}^2). \qquad (7.23)$$
In region II, the variation of the four flux components is a complex function of RT1, as is evident in Equation (7.23). In summary, the object and the path irradiance in front of the object decrease with
object and the path irradiance in front of the object decrease with
the reciprocal of range, whereas the background irradiance increases concomitantly.

Region III
The target fills only a very small portion of the pixel FOV, and the background is the dominant source. For LB ΩB ≫ LT ΩT, or ΩT very small:
$$E_1 = L_B \Omega_P \tau_B + L_{Pb} \Omega_P = [L_B e^{-\gamma R_{B1}} + L_A(1 - e^{-\gamma R_{B1}})] \Omega_P. \qquad (7.24)$$
At long range, the object flux decrease attributable to 1/R² loss is so severe that the object signal is less than the background signal, so that the pixel contains mainly background and path radiance flux. In region III the irradiance equation has the same form as in region I, except that the target object is now replaced with the background.

Region IV
At longer range, even the background radiance is severely attenuated, and the only remaining flux is due to path radiance. Hence, for LB τB ≪ LP, the image pixel irradiance is given by
$$E_1 = L_P \Omega_P. \qquad (7.25)$$
At very long ranges the path radiance dominates all other sources, and the object and clutter are lost in the ‘fog-like’ haze caused by atmospheric radiance.
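The four regions are easily reproduced numerically. The sketch below evaluates Equation (7.23) over range for an assumed scene (all radiances, areas, and the extinction coefficient are illustrative values), with the background taken at the same range as the target for simplicity.

```python
# Pixel irradiance versus range, Equation (7.23), for an assumed scene.
import numpy as np

gamma = 0.3e-3                    # extinction coefficient [1/m]
L_T, L_B, L_A = 50.0, 10.0, 8.0   # target/background/atmosphere [W/(m2.sr)]
A_T = 10.0                        # target projected area [m2]
omega_P = 1e-6                    # pixel solid angle [sr]

R = np.logspace(2, 5, 400)        # range [m]
tau = np.exp(-gamma * R)          # background assumed at the target range
omega_T = np.minimum(A_T / R**2, omega_P)   # capped at the pixel FOV
omega_B = omega_P - omega_T

E_target = L_T * tau * omega_T
E_back = L_B * tau * omega_B
E_path = L_A * (1.0 - tau) * omega_P
E_total = E_target + E_back + E_path
# Short range: E_target dominates (regions I/II); with increasing range
# E_back and finally E_path dominate (regions III and IV), as in Figure 7.5.
```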

7.7 Difference Contrast

The radiance of an object in its surroundings consists of a global constant radiance level and a local radiance variation. Some electro-optical systems are only sensitive to the local variation in the radiance (called the difference contrast) between the object and its surroundings. The constant radiance level is removed by filtering or some other means.
Consider a scene at uniform background temperature TB containing one object at a temperature TO. The spectral irradiance at the sensor due to a pixel filled only by the background is
$$E_{B\lambda} = L_{B\lambda} \Omega_P \tau_{B\lambda} + L_{Pb\lambda} \Omega_P, \qquad (7.26)$$
where LBλ is the background radiance, ΩP is the sensor pixel FOV, τBλ is the atmospheric transmittance between the sensor and the background, and LPbλ is the path radiance.
If a pixel is only partially filled by the target object, the spectral irradiance at the sensor is
$$E_{T\lambda} = (L_{T\lambda} \tau_{T\lambda} + L_{Pt\lambda}) \Omega_T + (L_{B\lambda} \tau_{B\lambda} + L_{Pb\lambda})(\Omega_P - \Omega_T), \qquad (7.27)$$
where LTλ is the object radiance, ΩT is the solid angle subtended by the object, LPtλ is the path radiance in front of the object, and τTλ is the atmospheric transmittance between the object and the sensor.
The local radiance variation, or radiometric contrast, is now the difference between the pixel filled with the object and the pixel filled by background only:
$$\Delta E_\lambda = (L_{T\lambda} \tau_{T\lambda} + L_{Pt\lambda}) \Omega_T + (L_{B\lambda} \tau_{B\lambda} + L_{Pb\lambda})(\Omega_P - \Omega_T) - L_{B\lambda} \Omega_P \tau_{B\lambda} - L_{Pb\lambda} \Omega_P$$
$$= L_{T\lambda} \tau_{T\lambda} \Omega_T + L_{Pt\lambda} \Omega_T - L_{B\lambda} \tau_{B\lambda} \Omega_T - L_{Pb\lambda} \Omega_T$$
$$= (L_{T\lambda} \tau_{T\lambda} - L_{B\lambda} \tau_{B\lambda}) \Omega_T + (L_{Pt\lambda} - L_{Pb\lambda}) \Omega_T. \qquad (7.28)$$
If the target object and background are at the same range, τTλ = τBλ and LPtλ = LPbλ:
$$\Delta E_\lambda = (L_{T\lambda} - L_{B\lambda}) \tau_{T\lambda} \Omega_T, \qquad (7.29)$$
hence the signal is proportional to the difference in the radiance values between the object and the background.
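As a worked example of Equation (7.29), the sketch below computes the 8–12-µm band difference contrast for a 320-K object against a 300-K background; the transmittance, solid angle, and unity emissivities are illustrative assumptions.

```python
# Band difference contrast, Equation (7.29), for assumed scene values.
import numpy as np

wl = np.linspace(8.0, 12.0, 400)    # wavelength [um]

def planck(wl_um, T):
    c1, c2 = 1.19104e8, 14387.8     # Planck-law constants in um-based units
    return c1 / (wl_um**5 * (np.exp(c2 / (wl_um * T)) - 1.0))

L_T = np.trapz(planck(wl, 320.0), wl)    # target band radiance [W/(m2.sr)]
L_B = np.trapz(planck(wl, 300.0), wl)    # background band radiance
tau, omega_T = 0.6, 1e-8                 # transmittance and solid angle [sr]
dE = (L_T - L_B) * tau * omega_T
print(f"Difference contrast = {dE:.2e} W/m2")
```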

7.8 Pulse Detection and False Alarm Rate

The detection of pulse signals corrupted by noise forms the basis of many electro-optical systems. The calculation of the probability of detection and false alarm rate requires information about the peak signal, rms noise, and threshold setting. The general solutions 13–15 for arbitrary pulse shapes, noise spectra, and nonlinear detection processes are very complex and are not considered here. If the signal is a square pulse, filtered by a matched filter, and the input noise is white with a Gaussian distribution, as is the case for most natural noises, the detection performance can be readily 13,16 calculated. This is a special case but is useful to obtain at least an order-of-magnitude indication.
The detection of a square pulse of width tp, immersed in white noise after passing through a matched filter with bandwidth Δf = 1/(2tp), is shown graphically in Figure 7.6. Detection is the event where the signal corrupted by noise exceeds the detection threshold. A false alarm is the event where the noise (with no signal present) exceeds the detection threshold. The average false alarm rate is given by
$$\mathrm{FAR} = \frac{1}{2 t_p \sqrt{3}} \exp\left(-i_t^2/(2 i_n^2)\right), \qquad (7.30)$$
[Figure: block diagram of a rectangular signal pulse of width tp and white noise, each passing through a matched filter with Δf = 1/(2tp), summed and compared with a threshold it; sketched amplitude distributions indicate the probability of false alarm and the probability of an undetected signal.]
Figure 7.6 Diagram of the noise, signal, and threshold.

where it is the threshold value, and in is the rms noise value at the input to the threshold detector.
When there is a signal present, the probability of detection (signal plus noise exceeds the threshold) is given by
$$P_d \approx \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{i_s - i_t}{\sqrt{2}\, i_n}\right)\right], \qquad (7.31)$$
where erf is the error function:
$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\, dt. \qquad (7.32)$$
For a given combination of threshold value it, noise value in, and signal value is, the average false alarm rate and probability of detection can be determined. Alternatively, for a given required false alarm rate or probability of detection, the threshold-to-noise ratio and SNR can be determined. These equations can be solved numerically or graphically. 16
As an example of the application of these formulae, consider the following problem: 16 The SNR and threshold-to-noise ratio (TNR) for a laser rangefinder must be found. The laser pulses are 100 ns wide. A range gate of 67 µs is used. One in 1000 pulses may be lost, and a false alarm detection of 1 in 1000 pulses is required. The solution is as follows:

1. The false alarm performance must be 1 per 1000 pulses, each pulse arriving in a 67-µs window. The false alarm rate must therefore be less than 1/(1000 × 67 × 10⁻⁶) = 15 false alarms per second.

2. tp FAR = 0.1 × 10⁻⁶ × 15 = 1.5 × 10⁻⁶.

3. The TNR is calculated as
$$\frac{i_t}{i_n} = \sqrt{-2 \log_e\left(2 t_p \sqrt{3}\, \mathrm{FAR}\right)} = \sqrt{-2 \log_e\left(2\sqrt{3} \times 1.5 \times 10^{-6}\right)} = 4.93. \qquad (7.33)$$

4. The SNR is given by
$$\frac{i_s}{i_n} = \sqrt{2}\, \operatorname{erf}^{-1}(2 P_d - 1) + \frac{i_t}{i_n}, \qquad (7.34)$$
requiring an SNR of 8.023. The equations can be solved by the code given in Section D.5.7.
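A minimal Python equivalent of this calculation (a sketch, not the book's Section D.5.7 listing) uses the inverse error function from scipy:

```python
# TNR and SNR for the laser rangefinder example, Equations (7.33)-(7.34).
import numpy as np
from scipy.special import erfinv

t_p = 100e-9                        # pulse width [s]
FAR = 1.0 / (1000 * 67e-6)          # at most 15 false alarms per second
Pd = 0.999                          # at most 1 pulse in 1000 lost

TNR = np.sqrt(-2.0 * np.log(2.0 * t_p * np.sqrt(3.0) * FAR))
SNR = np.sqrt(2.0) * erfinv(2.0 * Pd - 1.0) + TNR
print(f"TNR = {TNR:.2f}, SNR = {SNR:.3f}")    # 4.93 and about 8.02
```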

When considering the probability of detection in an image, each pixel is regarded as a separate channel, with a pulse width equal to its integration period. To convert from the false alarm rate per image to the false alarm rate per pixel, divide the required system false alarm rate by the number of pixels:
$$\mathrm{FAR}_p = \frac{\mathrm{FAR}_s}{N_p}. \qquad (7.35)$$
The key design parameters are the SNR (determines the probability of detection) and the signal-to-threshold ratio (determines the false alarm rate). In order to achieve a high probability of detection, the designer strives toward a high SNR.
Consider an infrared point target detection system locating hot spots in a 4 × 10⁵-pixel image. The system is designed to operate at an SNR of 12. Using Equation (7.31) it is found that for log(tp FAR) = −22, the probability of detection is 98%. This operating point, for a sensor integration time of 1 ms, corresponds roughly with a FAR of 1 × 10⁻¹⁹ per pixel, or 4 × 10⁻¹⁴ per frame. This corresponds to roughly 4 × 10⁻⁹ false alarms per hour. Again, suppose the system design is done for an SNR of 12. Using Equation (7.31) it is found that for log(tp FAR) = −15, the probability of detection is 99.99%. This operating point, for a sensor integration time of 1 ms, corresponds roughly with a FAR of 1 × 10⁻¹² per pixel, or 4 × 10⁻⁷ per frame. This corresponds to roughly 4 × 10⁻² false alarms per hour, or one false alarm per 25 hours.
The false alarm rate will often be limited by clutter rather than by sensor noise. Clutter will result from clouds and terrain objects appearing to have features similar to the target.

7.9 Validation Techniques

A short anecdotal detour is in order under this section heading. In engineering school, my professor followed a policy whereby a correct answer in the exam deserved a pass, 50%, but no more. His argument was that you need to be correct to pass; anything less is flunking. To obtain a distinction, your work must be distinctive, deserving the extra marks. In particular, you must demonstrate that you know the answer is correct. Validation of your answer gives you the right to rise above the rest, to present yourself in confidence in an elite group. If you validate, your work and your viewpoint are accepted.
Validation has little in common with radiometry, and yet it is an important step in radiometric analysis and modeling. See Section B.3 for a very brief introduction to this very important topic.

Bibliography

[1] Lloyd, J. M., Thermal Imaging Systems, Plenum Press, New York (1975).
[2] Hovanessian, S. A., Introduction to Sensor Systems, Artech House, Norwood, MA (1988).
[3] Wittenstein, W., “Thermal range model TRM3,” Proc. SPIE 3436, 413–424 (1998) [doi: 10.1117/12.328038].
[4] Vollmerhausen, R. H. and Jacobs, E., “The Targeting Task Performance (TTP) Metric: A New Model for Predicting Target Acquisition Performance,” Tech. Rep. AMSEL-NV-TR-230, NVESD, U.S. Army CERDEC, Fort Belvoir, VA 22060 (2004).
[5] Helstrom, C. W., “Performance of Receivers with Linear Detectors,” IEEE Trans. Aerospace and Electronic Systems 26, 210–217 (1990).
[6] Hecht, E., Optics, 4th Ed., Addison Wesley, Boston, MA (2002).
[7] Nicodemus, F. E., “Normalization in Radiometry,” Applied Optics 12, 2960–2973 (1973).
[8] Palmer, J. M. and Tomasko, M. G., “Broadband radiometry with spectrally selective detectors,” Optics Letters 5(5), 208 (1980).
[9] DeWitt, D., “Inferring temperature from optical radiation measurements,” Optical Engineering 25(4), 596–601 (1986) [doi: 10.1117/12.7973867].
[10] Carmichael, G. W., “Issues in calibrating infrared seekers at low energy levels,” Proc. SPIE 344, 34–42 (1982) [doi: 10.1117/12.933748].
[11] Kaminski, W. R., “Range calculations for IR rangefinder and designators,” Proc. SPIE 227, 65–79 (1980) [doi: 10.1117/12.958748].
[12] Tomiyama, K., Pierluissi, J., and Hall, J. T., “Detection range computation by convolution for infrared detectors,” Proc. SPIE 366, 157–164 (1982) [doi: 10.1117/12.934243].
[13] Minkoff, J., Signal Processing Fundamentals and Applications for Communications and Sensing Systems, Artech House, Norwood, MA (2002).
[14] Trishenkov, M. A., Detection of Low-Level Optical Signals, Kluwer Academic Publishers, Norwell, MA (1997).
[15] Hippenstiel, R. D., Detection Theory: Applications and Digital Signal Processing, CRC Press, Boca Raton, FL (2002).
[16] RCA Corporation, RCA Electro-Optics Handbook, no. 11 in EOH, Burle (1974).

Problems

7.1 Explain what the range equation is and why it is important. Derive a mathematical formulation for the range equation and elaborate on how it can be solved. [2]

7.2 Explain what happens when a small object is viewed in an image at different ranges. Draw a diagram that shows the pixel signal for different ranges from very close to very far. Divide the diagram into different regions, where different signal sources contribute to the pixel signal. Explain the dominant source in each region. [5]

7.3 Explain the term ‘effective transmittance’ and show how it is calculated. [2]

7.4 Provide a description (in words or equations, as applicable) of each of the following terms and explain where it is used: [10]
1. Signal-to-noise ratio (unitless).
2. D∗ (Dee-star), units [cm·√Hz/W].
3. Noise equivalent power, units [W].
4. Probability of detection (unitless).
5. Point spread function (unitless).
Chapter 8
Optical Signatures

It is the mark of an instructed mind
to rest satisfied with the degree of precision
which the nature of the subject permits
and not to seek an exactness
where only an approximation of the truth is possible.
Aristotle, Nicomachean Ethics

8.1 Model for Optical Signatures

An optical signature is the manifestation of the radiometric characteristics of an object. The signature is formed by self-emitted flux, transmitted flux, and flux reflected from the object’s surface. The magnitude of the different signature components depends on the state of the object itself (e.g., internal temperature) as well as the state of its environment (e.g., incident sunlight). The environment can also affect the long-term signature properties, such as an increase of the object’s temperature resulting from solar irradiance. Some objects’ signatures may also depend on the internal state of the object (e.g., aircraft engine setting).
Optical signatures also have three-dimensional spatial properties. The object’s intensity varies with view angle around the object. The calculation or measurement of the optical signature from one view is not always indicative of its signature from another view. Figure 8.1 shows three-dimensional spherical plots of calculated 1 contrast intensity signatures. The models used in these calculations are physically accurate models validated by measurement at selected view angles (Appendix B).
A conceptual description 2 of the main contributors to the apparent radiance from a small, nominally uniform, semi-transparent Lambertian surface with uniform surface temperature in open sunshine is shown in Equation (8.1) and Figure 8.2. Signatures for more-complex objects can be constructed as collections of signatures from such small areas.

Figure 8.1 Spherical plots of aircraft contrast intensity signatures: (a) fighter aircraft in 3–5 µm, (b) helicopter in 8–12 µm, (c) fighter aircraft around 1 µm, and (d) transport aircraft in 8–12 µm. 1

[Figure: schematic of a small surface element with spectral properties (τoλ, ρoλ, ϵoλ), temporal properties, and spatial texture properties, exchanging flux with the sun, sky, atmosphere, background, and environment; the labeled flux paths correspond to the terms of Equation (8.1).]
Figure 8.2 Main contributors to the radiometric signature.


In the real world it is impossible to fully know and describe an optical signature. But with a little effort, remarkably good approximations can be achieved. More information on signatures can be found in several sources. 3–15
The optical signature equation is not meant to be mathematically rigorous, but it serves to define the various signature contributions in a concise manner. The formulation in Equation (8.1) makes provision for surfaces with uniform radiance or for surfaces with small variations in radiance. Radiance variations are described by scaling ‘textures’ that modulate the integrated wideband radiance value. These textures are essentially small spatial variations in emissivity or reflectance. Virtually all of the elements in the equation are spectrally variant; spectral integrals provide the wideband radiance:
$$
\begin{aligned}
L_S ={}& \underbrace{\Delta\epsilon \int_0^\infty \epsilon_{o\lambda}(\theta_v)\, L_\lambda(T_o)\, \tau_{a\lambda}\, S_\lambda\, d\lambda}_{\text{thermally emitted } L_{\rm self}} \\
&+ \underbrace{\int_0^\infty L_{{\rm path}\lambda}\, S_\lambda\, d\lambda}_{\text{atmospheric path radiance } L_{\rm path}} \\
&+ \underbrace{\int_0^\infty \tau_{o\lambda}\, \epsilon_{b\lambda}\, L_\lambda(T_b)\, \tau_{abo\lambda}\, \tau_{a\lambda}\, S_\lambda\, d\lambda}_{\text{transmitted background } L_{\rm trn\,back}} \\
&+ \underbrace{\Delta\rho \int_0^\infty \int_{\rm env} \rho_{o\lambda}\, \epsilon_{a\lambda}\, L_\lambda(T_a)\, \tau_{ao\lambda}\, \tau_{a\lambda}\, S_\lambda\, d\Omega\, d\lambda}_{\text{diffuse reflected ambient background } L_{\rm ref\,amb}} \\
&+ \underbrace{\Delta\rho \int_0^\infty \int_{\rm sky} \cos\theta_a\, \rho_{o\lambda}\, L_{{\rm sky}\lambda}\, \tau_{a\lambda}\, S_\lambda\, d\Omega\, d\lambda}_{\text{diffuse reflected sky } L_{\rm ref\,sky}} \\
&+ \underbrace{\Delta\rho\, \psi \int_0^\infty \cos\theta_s\, f_r(\theta_s, \theta_v)\, \epsilon_{s\lambda}\, L_\lambda(T_s)\, \tau_{so\lambda}\, \tau_{a\lambda}\, S_\lambda\, d\lambda}_{\text{reflected sun } L_{\rm ref\,sun}},
\end{aligned} \qquad (8.1)
$$

where the symbols are defined in Table 8.1.


Table 8.1 Terminology definition for Equation (8.1).

LS : total radiance in the wavelength band S
Lλ(Ts) : spectral blackbody radiance, sun temperature Ts
Lλ(Ta) : spectral blackbody radiance, environment temperature Ta
Lλ(Tb) : spectral blackbody radiance, background temperature Tb
Lλ(To) : spectral blackbody radiance, uniform object temperature To
Lpathλ : spectral atmospheric path radiance: emitted & scattered
Lskyλ : spectral sky radiance: emitted & scattered
ϵsλ : solar surface’s spectral emissivity
ϵaλ : ambient environment’s spectral emissivity
ϵbλ : background spectral emissivity
ϵoλ(θv) : object surface directional spectral emissivity
Δϵ : spatial texture variation in emissivity (unity if no texture)
ρoλ : object surface diffuse spectral reflection
Δρ : spatial texture variation in reflectivity (unity if no texture)
fr(θs, θv) : object surface bidirectional reflection distribution function
τoλ : object spectral transmittance
τaλ : object-to-sensor spectral atmospheric transmittance
τaboλ : background-to-object spectral atmospheric transmittance
τaoλ : ambient-to-object spectral atmospheric transmittance
τsoλ : sun-to-object spectral atmospheric transmittance
ψ : Asun/(d²sun π) = 2.17 × 10⁻⁵
Asun : area of the sun
dsun : distance to the sun
θa : angle between the surface normal and the vertical
θs : angle between the surface normal and solar incidence
θv : angle between the surface normal and the view direction
Sλ : measurement instrument spectral response

All of the spectral integrals are weighted with the sensor’s spectral system response Sλ. Consider the individual terms in Equation (8.1) as components in the signature:

1. Self-emitted radiance (Lself): The object emits flux according to its directional emissivity and Planck’s law (Section 3.5). For Lambertian surfaces the directional emissivity is simply the diffuse emissivity. The object’s radiance is weighted by the atmospheric transmittance between the object and the sensor.

2. Atmospheric path radiance (Lpath): The atmospheric path between the surface and the sensor adds radiance as described in Section 4.6.5.

3. Transmitted background radiance (Ltrn back): Semi-transparent surfaces transmit flux from behind the surface. The background radiance, surface transmittance, and the atmospheric transmittance between the background and the surface determine the flux exitant from the surface. For opaque surfaces this contribution is zero.

4. Reflected ambient radiance (Lref amb): It is assumed that the flux from the environment is incident from all directions; the object is fully enclosed by the environment. This assumption is approximately valid when the object is indoors, but less so if the object is outdoors. However, even outdoors, the object may be immersed in the atmosphere, which also provides at least some environmental flux (Section 4.6.7 and Figure 4.12).

5. Reflected sky radiance (Lref sky): Skylight is a diffuse source, caused by scattering and emission in the atmosphere. It is difficult to model accurately and is best measured or calculated with an atmospheric code (Section 4.6.5 and Figure 4.12). Rayleigh scatter (in the visual spectral range) can be spectrally approximated by a 1 × 10⁴-K source with low emissivity.

6. Reflected solar radiance (Lref sun): A simple model for reflected sunlight is presented in Section 3.7. The reflected solar radiance depends on the orientation of the surface, the BRDF (fr), and the transmittance of the atmosphere from the sun to the surface and then from the surface to the sensor.

8.2 General Notes on Signatures

The object’s surface emissivity ϵ scales the thermal exitance as well as the reflection from opaque surfaces through ρ = 1 − ϵ. Surfaces with high emissivity are poor reflectors, and vice versa. Some objects’ signatures are as much affected by emissivity variations as by temperature variations. There is no way to tell the difference between reflected flux and self-emitted flux in a measured signature.
Natural ground-object temperatures are mostly in the range −10 to +100 °C. The peak thermal exitance of these targets lies in the 8–12-µm spectral range. These same objects radiate a substantial signature in the 3–5-µm spectral band as well, but very little in the visual spectral band. The practical signature of concern is often the target contrast (the difference between the target and its surroundings) rather than the absolute signature.

[Figure: two panels of spectral radiance versus wavelength (2–14 µm and 3–6 µm), comparing reflected sunlight with 300-K thermal radiance, each shown with no atmosphere and through a 5-km Modtran path (Rural aerosol, 27 °C, 75% RH, 23-km visibility).]
Figure 8.3 Reflected and emitted signature from a target with 80% emissivity.
The relative contributions between reflected sunlight and self-emitted infrared flux are shown in Figure 8.3. The object is viewed through a tropical atmosphere (27 °C, 75% relative humidity) with 23-km visibility. The solid line represents radiation from a 300-K earth-bound object, whereas the dotted line shows the sunlight reflected from the same surface. Note that in the 8–12-µm band, the self-emitted flux dominates the signature, whereas in the shorter-wavelength bands, the reflected sunlight dominates the signature. The 3–5-µm spectral band shows equal contributions from self-emitted and reflected sunlight for objects at ambient terrain temperature. Figure 8.3 indicates that in the 3–5-µm spectral band there is a significant infrared signal in the absence of sunlight. There is also a significant contribution from reflected sunlight.
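The crossover in Figure 8.3 can be approximated with two Planck curves. The sketch below compares self-emitted radiance from a 300-K grey surface with diffusely reflected sunlight, ignoring the atmosphere; the 5900-K solar temperature and the 80%/20% emissivity/reflectance split are assumptions consistent with the figure, so the sketch reproduces the trend rather than the exact curves.

```python
# Reflected sunlight versus 300-K thermal emission for a grey surface.
import numpy as np

wl = np.linspace(0.4, 14.0, 2000)    # wavelength [um]

def planck(wl_um, T):
    c1, c2 = 1.19104e8, 14387.8
    return c1 / (wl_um**5 * (np.exp(c2 / (wl_um * T)) - 1.0))

eps, rho, psi = 0.8, 0.2, 2.17e-5    # rho = 1 - eps; psi as in Table 8.1
L_thermal = eps * planck(wl, 300.0)            # self-emitted radiance
L_sun = rho * psi * planck(wl, 5900.0)         # diffuse reflected sunlight
# The two curves cross in the 3-5 um band: sunlight dominates at shorter
# wavelengths; thermal emission dominates in the 8-12 um band.
```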

8.3 Reflection Signatures

The Phong Equation (3.39) provides a simple (but not very accurate) model for BRDF:
$$f_{r,\rm Phong} = \frac{\rho_o}{\pi} + \frac{\rho_s (n + 1) \cos^n\alpha}{2\pi \cos\theta_i}.$$
[Figure: polar plots of the Phong BRDF lobes for an incident beam on several surfaces: mirror, aircraft white paint, white chalk, matte blue, matte white, green leaf, semi-matte light grey, and matte dark earth paints.]
Figure 8.4 Phong BRDF for paints and natural surfaces in the NIR spectral band.

Table 8.2 Phong BRDF parameters for paints and natural surfaces in the NIR (0.75–1.4-µm) spectral band.

Surface finish : ρo / ρs / n
Matte dark earth paint : 0.11 / 0.033 / 6
Matte blue paint : 0.52 / 0.022 / 8
Matte white paint : 0.58 / 0.03 / 6
Semi-matte light grey paint : 0.25 / 0.052 / 44
Aircraft white paint : 0.6 / 0.05 / 80
Natural soil : 0.33 / – / –
Sea sand : 0.49 / – / –
Green leaf : 0.43 / 0.015 / 15
Gypsum / white chalk : 0.77 / 0.044 / 3
Steel plate, freshly coated with Zn : 0.05 / 0.39 / 160
Concrete : 0.55 / – / –

It is, however, adequate for the purpose of this book: mathematically simple, yet sufficiently illustrative. Measurements on several paints and natural surfaces 16 indicated BRDF values as shown in Table 8.2 and Figure 8.4. The application of this data toward a laser rangefinder performance calculation is presented in Section 9.4.8.
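The Phong model and the Table 8.2 data are directly usable in code. The sketch below evaluates the BRDF for the aircraft white paint entry at a few angles off the specular direction; the 30-deg incidence angle is an arbitrary choice.

```python
# Phong BRDF, Equation (3.39), with Table 8.2 NIR parameters.
import math

def phong_brdf(rho_o, rho_s, n, alpha, theta_i):
    """BRDF [1/sr]; alpha is the angle between the view direction and
    the specular direction, theta_i the incidence angle (radians)."""
    diffuse = rho_o / math.pi
    specular = rho_s * (n + 1.0) * math.cos(alpha)**n \
               / (2.0 * math.pi * math.cos(theta_i))
    return diffuse + specular

# Aircraft white paint: rho_o = 0.6, rho_s = 0.05, n = 80.
for alpha_deg in (0, 5, 15):
    f_r = phong_brdf(0.6, 0.05, 80, math.radians(alpha_deg),
                     math.radians(30.0))
    print(f"alpha = {alpha_deg:2d} deg: f_r = {f_r:.3f} 1/sr")
```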

8.4 Modeling Thermal Radiators

Planck’s law is not always a good approximation for real-world objects. Such objects are better modeled as grey bodies or spectrally selective thermal radiators. The method described here models the source as a blackbody thermal radiator (at a specific temperature) multiplied with a spectral emissivity and an effective area. This approach applies only to thermal sources complying with Planck’s law. The procedure assumes a single, uniform, radiating element. Complex target objects must be modeled as a collection of uniform radiating elements.
Equation (6.13) describes the flux flowing from the radiator to the receiver by an equation of the form (simplified here to retain only the essential information):
$$\Phi = \frac{A_0 A_1}{R_{01}^2} \int_0^\infty \epsilon_{0\lambda}\, L_{0\lambda}(T_{bb})\, \tau_{m\lambda}\, d\lambda, \qquad (8.2)$$
where L0λ(Tbb) is a Planck-law radiator at temperature Tbb, and the transmittance factor τmλ = τaλ τsλ αsλ collects the various transmittance factors as a single variable. The modeling objective is to find A0, Tbb, and ϵ0λ, given all of the other (measured and given) information. These three parameters are sufficient to characterize and model any thermal radiator.
Signature modeling requires information and knowledge beyond just
the measured data. There must be some indication of the temperature or
area (however vague). Emissivity is bounded by zero and one, but even
within this range, some information should be available. The additional
information is required to (a) guide the model analysis and (b) provide
partial validation for the model results.
Equation (8.2) is a mathematical statement and many different com-
binations of A0 , Tbb , and 0λ provide equally valid mathematical solutions
to the equation. For example, for different values of source temperature,
corresponding values of spectral emissivity can be calculated, all of which
are valid in a mathematical sense. However, modeling requires physically
viable solutions, not just mathematically valid solutions. It follows that
only a few mathematical solutions are physically viable solutions, and of
these, only one is a valid physical solution. Section 9.8 investigates this
approach in a practical example.
Extrapolation is generally not safe, both in a mathematical and phys-
ical sense. There is, however, an implied requirement to perhaps slightly
extrapolate around the calculated model parameters. If the model parame-
ters are a valid physical solution, limited extrapolation should be in order.
If, however, the model parameters do not represent a valid physical solu-
tion, any extrapolation is in error.
Some radiators (gas plumes and flames) have a nebulous nature; the
area depends on the radiance levels considered. If only the most intense
part of the flame is measured, the area is very small, whereas the area
can be large if all low-radiance regions are also included. This problem is
investigated in Section 8.4.2.
Practical instruments measure and represent the optical flux as a voltage or digital count, effectively implementing Equation (6.16). A process known as calibration 5 is used to determine the relationship between the measured voltage and the flux on the detector ΦS = f(vS) in the band S. This relationship is known as the instrument function.
With the preliminaries in place, the modeling effort should proceed along these steps:

1. Estimate a mathematically valid spectral emissivity.

2. Using the estimated spectral emissivity, calculate [using Equation (8.2)] any appropriate combination of temperature and area that would yield the measured results.

3. Select/confirm the area and temperature by measuring at least one of the two parameters in some other way.

4. Review the combination of spectral emissivity, temperature, and area for physical validity (or at least physical viability). Repeat the process if one of the parameters is not valid.

This process entails an element of uncertainty and guesswork. The uncertainty in itself does not invalidate the process; it is still a useful and valid process. However, accurate results require that assumptions be verified and model parameters validated.

8.4.1 Emissivity estimation

A spectroradiometer can be used to measure the spectral radiance of the


target source. After conversion by the instrument function, the radiometer
provides radiance measurement according to Equation (6.16), but for a
very narrow spectral width λ2 − λ1 = δλ → 0,
$$L_{m\lambda} = \frac{k\,Z_t\,\mathcal{R}\; dA_0 \cos\theta_0\; A_1\; \epsilon_{0\lambda}\, L_{0\lambda}(T_{bb})\, \tau_{a\lambda}\, S_{\lambda}}{R_{01}^{2}}\;\delta\lambda, \tag{8.3}$$
where Lmλ is the measured radiance, and k is the instrument calibration
function (assumed here to be a simple gain factor). The instrument func-
tion calibration assumes that the radiator fills the complete radiometer
FOV. This assumption is not always met, in which case the calculation
must be adapted to account for the instrument’s response to a partially
filled FOV. If the target source has much higher radiance than the back-
ground, the background can be ignored. In this case, the spectral emissiv-
ity shape will be correct, but its absolute scaling will be unknown.

In order to determine the spectral emissivity, Equation (8.3) must be inverted to solve for ε0λ. This cannot be done analytically but is easily achieved by numerical methods. Because the spectroradiometer measures the spectral radiance Lmλ directly, the spectral emissivity can be determined by solving
$$L_{m\lambda} = \epsilon_{0\lambda}\, L_{0\lambda}(T_{bb}) \tag{8.4}$$
by dividing both sides by the blackbody radiance:
$$\epsilon_{0\lambda} = \frac{L_{m\lambda}}{L_{0\lambda}(T_{bb})}. \tag{8.5}$$
The key step is to select a source temperature Tbb that yields reasonable
emissivity values. From the definition of emissivity, emissivity may not
exceed unity or be negative. Note that, within physically viable limits,
any choice of peak emissivity combined with temperature can be made.
There is usually some a priori knowledge of the spectral emissivity or
temperature that would forward a preferred solution. See Section 9.8 for a
practical application of this principle.
Solids normally have an expected range of emissivity values. Also,
over a relatively narrow spectral range, the infrared emissivity for a solid
object is expected to be more or less constant — this observation may help
in estimating the temperature. If the source target is an opaque material
known to have near-constant spectral emissivity, the physically valid tem-
perature is the value that yields a near-constant spectral emissivity.
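As an illustration of this temperature scan (a minimal sketch, not the pyradi listings of Appendix D), the fragment below implements Equation (8.5) directly with numpy; the measured-radiance file and the candidate temperatures are hypothetical placeholders:

import numpy as np

def planck_radiance(wl, T):
    # Spectral radiance in W/(m2.sr.um) for wavelength wl in um.
    c1 = 1.1910429e8    # 2*h*c^2 in W.um^4/(m2.sr)
    c2 = 1.4387752e4    # h*c/k in um.K
    return c1 / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))

wl = np.linspace(3.0, 5.0, 201)     # wavelength grid [um]
Lm = np.loadtxt('Lm.txt')           # hypothetical measured radiance on wl

# Scan candidate temperatures; Equation (8.5): eps = Lm / Lbb(T).
for Tbb in [900.0, 1100.0, 1300.0]:           # hypothetical candidates [K]
    eps = Lm / planck_radiance(wl, Tbb)
    viable = np.all((eps >= 0.0) & (eps <= 1.0))
    print(Tbb, float(eps.max()), viable)      # reject any T needing eps > 1

A near-constant emissivity curve (for an opaque solid) or a unity peak (for an optically thick gas) then selects the physically preferred temperature from the mathematically valid candidates.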
If there are large quantities of radiating species in a gas, it may be
assumed that the gas is optically thick at some wavelength. An optically
thick gas radiator would result in an emissivity near unity at some wave-
length. The objective is then to determine the temperature that yields the
appropriate maximum emissivity value. Open yellow flames may have
emissivity values between 0.0 and 0.2 (or even higher), depending on the
concentration of solids (e.g., carbon soot) in the flame.
The calculated emissivity and its associated temperature are taken as
the first two model parameters. The closer either of these parameters is to the real physical situation, the more physically realistic the model will be.

8.4.2 Area estimation

If the radiating object is a uniformly radiating solid, it is relatively easy to


determine the area. However, if the solid body does not radiate uniformly
over its surface, or if the radiator is a gas, the radiating area [in terms of
Equation (8.2)], is not well defined.

The magnitude of ‘radiator area’ depends on the exact definition of


what is meant by ‘area.’ If a thermal image of the source is available, three
radiator area definitions are possible:

1. Set a radiance threshold at some level and ignore all pixels in the image
with values below this threshold. The problem with this approach is
that the selected pixels may have widely varying radiance values, from
very cool to very hot. A more accurate approach is to segment the object
image into several regions, each with a limited range of pixel radiance
levels (somewhat similar to a terrain contour map).

2. Set a spatial limit to the size of the radiator and ignore all pixels out-
side the selected size. This approach may reject a significant amount of
radiation. However, if the spatial solid angle is selected carefully, this
approach may yield a good estimate of the useful radiator area.

3. Integrate the flux over a large spatial extent and at all radiance levels
— use as large an area as feasible. Peak normalize (Section 7.2.3) the
measured flux by assuming the highest radiance level for the whole
radiator area, and then calculate an ‘area’ to provide the same flux as
the integrated measured flux.

Using the spectral emissivity calculated in the previous section, sets


of effective areas and temperatures can be calculated that would yield the
required measured flux. In other words, given ε0λ, find A0 and Tbb that
would solve Equation (8.2) for the measured data. By investigating the
balance between temperature and area, one may develop a better under-
standing of the radiating nature of the target object.
Figure 8.5 shows 3–5-µm radiance images for a flame measurement
obtained with a Cedip MWIR thermal camera. The left figure shows the
original image. The six images on the right show the flame at increasing
threshold levels, from a low value (just above the background) to a value
near the peak of the image. These images were obtained by setting thresh-
old levels to exclude lower pixel values from the image (called greylevel
segmentation of the image). The threshold radiance value and size of the
segmented part of the image is displayed with each picture. The code for
this analysis is shown in Section D.5.5.
The flame size was calculated using the simple threshold segmenta-
tion method and also the peak normalized method. The results are shown
in Figure 8.6. Note that the peak-normalized area is almost five times smaller than the simple threshold-segmented area at the lowest threshold value.
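The segmentation itself reduces to a few lines of array code. The sketch below is a simplified stand-in for the Section D.5.5 analysis; the image file and pixel footprint are hypothetical:

import numpy as np

img = np.load('flame.npy')     # hypothetical radiance image [W/(m2.sr)]
pix_area = 0.005**2            # hypothetical footprint of one pixel [m2]

for thr in [296, 602, 1228, 2501, 5091, 10360]:
    mask = img > thr                          # greylevel segmentation
    area_thr = mask.sum() * pix_area          # simple threshold area
    # Peak-normalized: the 'area' at peak radiance giving the same flux.
    area_pk = img[mask].sum() * pix_area / img.max()
    print(thr, area_thr, area_pk)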

[Figure: measured flame radiance image, displayed in log(radiance) over roughly 31.6 to 10 000 W/(m²·sr), with six greylevel-segmented versions at increasing thresholds. Threshold radiance and segmented size: 296 W/(m²·sr), 0.685 m²; 602, 0.375 m²; 1228, 0.260 m²; 2501, 0.190 m²; 5091, 0.120 m²; 10360, 0.015 m².]
Figure 8.5 Measured flame radiance in the 3–5-µm spectral band for six different threshold levels.

[Figure: flame size (0 to 0.7 m²) versus segmentation threshold (0 to 12 000 W/(m²·sr)); the upper curve is threshold segmentation only, the lower curve is threshold segmentation and peak normalized.]
Figure 8.6 Flame-size predictions as a function of segmentation threshold.

Inspection of the segmented pictures in Figure 8.5 might lead the an-
alyst to decide that the flame area is approximately 0.2 m2 , corresponding
to a threshold of 2500 W/(m2 ·sr). In this case the area is selected on the
basis of the analyst’s evaluation of the threshold-segmented pictures.
The analysis indicates that the flame has large areas of relatively low radiance, which contribute relatively little to the total signature. Depending on the application, different flame areas can be chosen. If shape is not
important, the flame could be modeled as a simple uniform radiator with
area of 0.14 m2 . If shape is important, the radiance gradient across the
flame must be accounted for, and the model becomes more complex.

8.4.3 Temperature estimation

Temperature measurement is a nontrivial measurement activity. 17,18 An


object may even possess different temperatures, depending on the process

or instrument used to measure the temperature. When building signa-


ture models, temperature estimation can be assisted by setting constraints
on the temperature range. Systematic narrowing of these constraints will
result in a band of conceivable temperatures. Prior experience, intensive
literature searches, or expert guidance is essential in this process. This
section describes two techniques that require access to a spectral radiance
measurement of the test sample.
When estimating the temperature, emissivity presents a constraint through its unity maximum value. If Lmλ is a measured spectral radiance, and Lbbλ(Tm) is the blackbody radiance at temperature Tm, the following equation sets the lower temperature constraint:
$$\epsilon_{\lambda m} = \frac{L_{m\lambda}}{L_{bb\lambda}(T_m)} \le 1. \tag{8.6}$$
Figure 9.19 shows a case where a too-low temperature estimate requires
an emissivity exceeding unity, which is impossible.
If an object is known to have a near-constant spectral emissivity, the
object’s temperature can be roughly estimated by matching the object’s
spectral radiance to a scaled version of the Planck-law equation. In this
case, the scale factor will be the emissivity of the object. Figure 8.7 shows
the spectral radiance of two solid surfaces, both measured and calculated.
The measured radiance values were not corrected for atmospheric trans-
mittance. Superimposed on the measured data are shown several calcu-
lated Planck-law curves. The curve that fits best is an indication of the tem-
perature of the surface. This approach is sensitive to spectral atmospheric
transmittance effects and any variation in emissivity of the sample. In
Figure 8.7(a), the calculated radiance values were scaled to the measured
radiance value at 3.95 µm. In Figure 8.7(b), all of the calculated values were
scaled by 0.85, the estimated emissivity of the plate. This method was used
to determine the temperature of a carbon-rich flame [Magnesium-Teflon® -
Viton® (MTV) flare] with reasonably good success.
Two-color or multi-color temperature measurement 17 is a variation of
the technique described above. The object’s radiance is measured in two
(or more) spectral bands, and the ratio between the bands is an indication
of the temperature. The two spectral bands must be selected to provide
sufficient variation in color ratio over the desired temperature range. Two-
color ratio measurement is used successfully to discriminate between air-
craft and MTV flares in two-color missile seekers. In this application the
objective is not accurate temperature measurement but rather to provide a
threshold for decision making.
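A two-color estimator is easily prototyped: precompute the band-radiance ratio as a function of temperature, then invert it by interpolation. In this sketch the band edges and the measured ratio are illustrative values only:

import numpy as np

def planck_radiance(wl, T):
    c1, c2 = 1.1910429e8, 1.4387752e4   # W.um^4/(m2.sr), um.K
    return c1 / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))

wl = np.linspace(2.0, 5.5, 701)
b1 = (wl > 3.5) & (wl < 4.1)            # illustrative band 1
b2 = (wl > 4.5) & (wl < 5.1)            # illustrative band 2

T = np.linspace(500.0, 2500.0, 201)
ratio = np.array([np.trapz(planck_radiance(wl[b1], t), wl[b1]) /
                  np.trapz(planck_radiance(wl[b2], t), wl[b2]) for t in T])

# The ratio rises monotonically with T, so invert by interpolation.
print(np.interp(1.2, ratio, T))   # 1.2 is a hypothetical measured ratio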
Temperature estimation depends heavily on the use of reference or

[Figure: (a) blackbody source (instrument setting 150 °C, thermocouple 150.5 °C): measured spectral radiance over 2–5.5 µm with calculated Planck-law curves for 100, 145, 150, 155, and 200 °C, scaled to the measured value at 3.95 µm; (b) painted metal plate (thermocouple 82.7 °C): measured spectral radiance with calculated curves for 60, 70, 75, 80, and 90 °C, all scaled by the estimated emissivity of 0.85.]
Figure 8.7 Comparison of measured and calculated radiance for solid objects: (a) blackbody and (b) painted metal plate.

external information. 19 Chemists can often calculate or predict the tem-


perature of burning chemical compounds. 20 Internet search results should
be used with caution, especially with regard to information on the temperature of flames.
The effect of surface emissivity on apparent temperature is discussed
in Section 3.2.4.

8.5 Measurement Data Analysis

‘Same as’ measurements are commonly employed in infrared measure-


ments. The object radiance is compared with the radiance measured from
a known calibration source. The process can be best described as follows:
The object was observed at long range, and the measured irradiance was compara-
ble to the irradiance observed when viewing a known calibration source at a short
range. The following information is typically available: the radiometer

FOV, spectral response, and the meteorological data during the measure-
ment. This procedure also requires that some estimate of the object spectral
emissivity be known (either by measurement or research investigation).
The purpose of this data analysis calculation is to determine the object
temperature, given the object spectral emissivity, area, and atmospheric
conditions at the time of the measurement. The intention is to calculate the
temperature directly from the measured data with no intermediate steps.
This approach requires a mathematical formulation that can be solved nu-
merically.
Note that the temperature so determined is not necessarily a phys-
ically valid temperature. It is the value mathematically required by the
chosen area and emissivity to provide the measured signal. If the area
and emissivity values are physically valid, then the temperature is also a
physically valid temperature.
Equation (6.16) describes the signal voltage at the output of the ra-
diometer and is simplified to
$$v_S = \frac{k\,Z_t\,\mathcal{R}\,A_0 A_1}{R_{01}^{2}} \int_0^{\infty} \epsilon_{0\lambda}\, L_{0\lambda}\, \tau_{a\lambda}\, S_{\lambda}\, d\lambda. \tag{8.7}$$

The test results are reported in terms of two measurements giving the
same voltage in the instrument. The first voltage is the calibration voltage
when observing a known source. The second voltage is the measurement
of the target source.
Assuming that the calibration is done with a blackbody source with
emissivity close to unity, over a very short distance, the voltage for the
calibration measurement is
$$v_c = k\,Z_t\,\mathcal{R}\,\Omega_p A_1 \int_0^{\infty} L_{0\lambda}(T_c)\, S_{\lambda}\, d\lambda, \tag{8.8}$$
where the instrument FOV Ω p is filled by the calibration source, Tc is the
blackbody temperature, and vc is the instrument output voltage during
this calibration.
From Equations (8.1) and (8.7), the voltage for the object measurement
is
$$\begin{aligned}
v_S ={}& \overbrace{\frac{k Z_t \mathcal{R} A_0 A_1}{R_{01}^{2}} \int_0^{\infty} \Delta_\epsilon\, \epsilon_{o\lambda}(\theta_v)\, L_{\lambda}(T_o)\, \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{thermally emitted } L_{\text{self}}} \\
&+ \overbrace{k Z_t \mathcal{R}\, \Omega_p A_1 \int_0^{\infty} L_{\text{path}\lambda}\, S_{\lambda}\, d\lambda}^{\text{atmospheric path radiance } L_{\text{path}}} \\
&+ \overbrace{\frac{k Z_t \mathcal{R} A_0 A_1}{R_{01}^{2}} \int_0^{\infty} \tau_{o\lambda}\, \epsilon_{b\lambda}\, L_{\lambda}(T_b)\, \tau_{abo\lambda} \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{transmitted background } L_{\text{trn back}}} \\
&+ \overbrace{\frac{k Z_t \mathcal{R} A_0 A_1}{R_{01}^{2}} \int_0^{\infty} \Delta_\rho \int_{\text{env}} \rho_{o\lambda}\, \epsilon_{a\lambda}\, L_{\lambda}(T_a)\, \tau_{ao\lambda} \tau_{a\lambda} S_{\lambda}\, d\Omega\, d\lambda}^{\text{diffuse reflected ambient background } L_{\text{ref amb}}} \\
&+ \overbrace{\frac{k Z_t \mathcal{R} A_0 A_1}{R_{01}^{2}} \int_0^{\infty} \Delta_\rho \int_{\text{sky}} \cos\theta_a\, \rho_{o\lambda}\, L_{\text{sky}\lambda}\, \tau_{a\lambda} S_{\lambda}\, d\Omega\, d\lambda}^{\text{diffuse reflected sky } L_{\text{ref sky}}} \\
&+ \overbrace{\frac{k Z_t \mathcal{R} A_0 A_1}{R_{01}^{2}} \int_0^{\infty} \Delta_\rho\, \psi \cos\theta_s\, f_r(\theta_s,\theta_v)\, \epsilon_{s\lambda}\, L_{\lambda}(T_s)\, \tau_{so\lambda} \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{reflected sun } L_{\text{ref sun}}}.
\end{aligned} \tag{8.9}$$

Because the radiance during the measurement is the same as the radiance during calibration, the two values can now be equated. Furthermore, if the target area exceeds the size of the pixel footprint (i.e., an extended source), A0/R01² becomes the instrument FOV Ωp, and then the equation can be simplified considerably because the geometric factors are the same for all of the terms. Assuming that all of the other parameters are known, the target source temperature can now be obtained by solving for To in
$$\begin{aligned}
0 ={}& -\overbrace{\int_0^{\infty} L_{0\lambda}(T_c)\, S_{\lambda}\, d\lambda}^{\text{calibration radiance}} \\
&+ \overbrace{\int_0^{\infty} \Delta_\epsilon\, \epsilon_{o\lambda}(\theta_v)\, L_{\lambda}(T_o)\, \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{thermally emitted } L_{\text{self}}}
+ \overbrace{\int_0^{\infty} L_{\text{path}\lambda}\, S_{\lambda}\, d\lambda}^{\text{atmospheric path radiance } L_{\text{path}}} \\
&+ \overbrace{\int_0^{\infty} \tau_{o\lambda}\, \epsilon_{b\lambda}\, L_{\lambda}(T_b)\, \tau_{abo\lambda} \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{transmitted background } L_{\text{trn back}}} \\
&+ \overbrace{\int_0^{\infty} \Delta_\rho \int_{\text{env}} \rho_{o\lambda}\, \epsilon_{a\lambda}\, L_{\lambda}(T_a)\, \tau_{ao\lambda} \tau_{a\lambda} S_{\lambda}\, d\Omega\, d\lambda}^{\text{diffuse reflected ambient background } L_{\text{ref amb}}} \\
&+ \overbrace{\int_0^{\infty} \Delta_\rho \int_{\text{sky}} \cos\theta_a\, \rho_{o\lambda}\, L_{\text{sky}\lambda}\, \tau_{a\lambda} S_{\lambda}\, d\Omega\, d\lambda}^{\text{diffuse reflected sky } L_{\text{ref sky}}} \\
&+ \overbrace{\int_0^{\infty} \Delta_\rho\, \psi \cos\theta_s\, f_r(\theta_s,\theta_v)\, \epsilon_{s\lambda}\, L_{\lambda}(T_s)\, \tau_{so\lambda} \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{reflected sun } L_{\text{ref sun}}}.
\end{aligned} \tag{8.10}$$
Solving this form of the equation is a challenge because of the large number

of parameters that must be known. In practice, several simplifications are


commonly made, as shown in the next few sections.

8.6 Case Study: High-Temperature Flame Measurement

Consider the case where the radiometer is observing a flame. The flame
temperature is much higher than the surrounding background and the
atmosphere. It is known that flames have high transmittance and low
emissivity. Suppose further that path radiance can be ignored. Several
terms in Equation (8.10) are therefore discarded. The problem can then be
stated as follows:
$$\overbrace{\Omega_p \int_0^{\infty} L_{0\lambda}(T_c)\, S_{\lambda}\, d\lambda}^{\text{calibration radiance}} = \overbrace{\frac{A_0}{R_{01}^{2}} \int_0^{\infty} \Delta_\epsilon\, \epsilon_{o\lambda}(\theta_v)\, L_{\lambda}(T_o)\, \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{thermally emitted } L_{\text{self}}}, \tag{8.11}$$

where the requirement for an extended target size has been removed. A
smaller target size results in a target solid angle smaller than the instru-
ment solid angle. This may occur during practical measurements where
flames are measured over long range. The solution to Equation (8.11) can
be readily obtained by numerical analysis, but it does require a known
spectral emissivity (see Section 8.4.1).
Two important observations can be made on the methodology given
here: (1) the atmospheric transmittance is compensated for by multiplication (no divide-by-zero errors); and (2) the absolute value of the radiometer system response does not appear in the solution; only the spectral system response Sλ is required.
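Once the spectral quantities are tabulated on a common wavelength grid, Equation (8.11) is a single nonlinear equation in To, and a bracketing root-finder solves it directly. A minimal sketch (the spectral file is a hypothetical placeholder, and the Δε weighting is folded into the emissivity):

import numpy as np
from scipy.optimize import brentq

wl, eps, tau_a, S = np.load('spectral.npy')   # hypothetical tabulated spectra

def planck_radiance(wl, T):
    c1, c2 = 1.1910429e8, 1.4387752e4
    return c1 / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))

Omega_p, A0, R01 = 1.0e-4, 1.0, 1000.0   # FOV [sr], flame area [m2], range [m]
Tc = 423.15                              # calibration blackbody [K]
lhs = Omega_p * np.trapz(planck_radiance(wl, Tc) * S, wl)

def residual(To):
    return (A0 / R01**2) * np.trapz(eps * planck_radiance(wl, To)
                                    * tau_a * S, wl) - lhs

To = brentq(residual, 300.0, 3000.0)     # bracket must straddle the root
print(To)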

8.7 Case Study: Low-Emissivity Surface Measurement

The emissivity of shiny metallic surfaces is lower than the emissivity of
rough Lambertian surfaces. The effect of this low emissivity on tempera-
ture measurements should be carefully considered. Suppose the irradiance
from a metal surface is measured over a short distance, as an extended
source, with the intention to determine the temperature of the surface. For
this analysis start with Equation (8.10), assume an extended target, and
ignore the terms for path radiance, transmitted background flux, reflected
sky, and reflected sunlight. Keep the terms for self-exitance and terrain
background. The terrain background in this case is meant to model an
enclosed volume, such as in the laboratory. Note that for an opaque sur-
face, ρoλ = (1 − εoλ). This means that as the self-radiance decreases due to
lower emissivity, the reflected ambient radiance increases due to reflection.

[Figure: true temperature (0–200 °C) versus apparent temperature (0–100 °C) in the 3–5-µm band (top) and the 8–12-µm band (bottom), with one curve per source emissivity and the ambient temperature marked.]
Figure 8.8 Apparent versus real temperature for different source emissivity values, ranging from 1 (straight line) down to 0.1 (most curved line) in steps of −0.1, in the 3–5-µm and 8–12-µm spectral bands.

The source temperature can now be determined by solving


$$\overbrace{\int_0^{\infty} L_{0\lambda}(T_c)\, S_{\lambda}\, d\lambda}^{\text{measured radiance}} = \overbrace{\int_0^{\infty} \epsilon_{o\lambda}\, L_{\lambda}(T_o)\, \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{thermally emitted } L_{\text{self}}} + \overbrace{\int_0^{\infty} (1-\epsilon_{o\lambda})\, \epsilon_{a\lambda}\, L_{\lambda}(T_a)\, \tau_{ao\lambda} \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{diffuse reflected ambient background } L_{\text{ref amb}}}. \tag{8.12}$$

Equation (8.12) is now applied in a somewhat different manner. The


right side is an accurate calculation of the signature as a function of tar-
get emissivity. Suppose the left side represents a noncontact temperature
measurement instrument, but calibrated to an assumed target emissivity
of one. Then Tc will represent the measured temperature of the test target.
This apparent temperature Tc will be incorrect because of the incorrectly
assumed unity emissivity. The physical temperature versus the erroneous
observed temperature, as a function of object emissivity, is shown in Fig-
ure 8.8.
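The curves in Figure 8.8 can be reproduced with a short script: evaluate the right side of Equation (8.12) for a true temperature To, then invert the left side for the apparent temperature Tc. A minimal sketch, assuming a spectrally constant emissivity and unity transmittances:

import numpy as np
from scipy.optimize import brentq

def planck_radiance(wl, T):
    c1, c2 = 1.1910429e8, 1.4387752e4
    return c1 / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))

wl = np.linspace(3.0, 5.0, 201)   # 3-5-um band, unity system response
Ta = 300.0                        # ambient temperature [K]
band = lambda T: np.trapz(planck_radiance(wl, T), wl)

for eps in [1.0, 0.5, 0.1]:
    for To in [300.0, 350.0, 400.0]:
        L = eps * band(To) + (1.0 - eps) * band(Ta)        # Eq. (8.12) rhs
        Tc = brentq(lambda T: band(T) - L, 100.0, 1000.0)  # invert lhs
        print(eps, To - 273.15, Tc - 273.15)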

8.8 Case Study: Cloud Modeling

The objective with this case study is to derive a first-order empirical model
for the radiance in the ‘silver lining’ edge of a back-lit cloud. The problem
is solved by analyzing measured data and compiling a simple model. In
this instance the only available measured information was a ‘temperature’
measurement in imaging-camera images. No other infrared or calibration
information was available. The modus operandi was to first calculate the
radiance from the measured temperature data and second to build a model
around the calculated radiance values.

8.8.1 Measurements

Measurements were made at varying angles from the sun, ranging from 10 deg to 90 deg. The distance was measured with a laser
rangefinder and varied from one to three kilometers. The measurement
was performed from ground-level at 1500 m above sea level. The ground
level ambient temperature during the measurements was 20 to 25 ◦ C. In
all of the subsequent calculations the Lowtran (precursor to Modtran™)
Mid-latitude Summer model was used. The atmospheric transmittance
and path radiance values are shown in Figures 8.9(b) and 8.9(c).
Measurements were performed with an InSb MWIR camera fitted
with an antisolar filter. This filter suppresses all radiation at wavelengths
shorter than 3 µm. The camera was set to assume a target emissivity of
one. The vendor-supplied software was used to read temperature values
directly from the image, so this analysis starts with a measured tempera-
ture value as input. The camera instrument function was calibrated against
a blackbody simulator at close range under laboratory conditions.
The measurements indicated that a typical side-lit cloud has an appar-
ent temperature of between 10 and 20 ◦ C (Table 8.3). Back-lit clouds had
silver linings and small hot spots with apparent temperatures between
30 ◦ C and 50 ◦ C. This increase in apparent temperature is due to forward
scattering by the silver lining — not a high cloud temperature. The first
step is to calculate the cloud radiance values from the recorded tempera-
tures. The cloud radiance is calculated using the calibration equation
$$L_{\text{cloud}} = \int_0^{\infty} \epsilon_{0\lambda}\, L_{0\lambda}(T_m)\, S_{\lambda}\, \tau_{m\lambda}\, d\lambda, \tag{8.13}$$
where ε0λ = 1 is the calibration source emissivity, Tm is the cloud temper-
ature as indicated by the camera, Sλ is the normalized camera response
shown in Figure 8.9(a), and τmλ is the atmospheric transmittance during
calibration.

Table 8.3 Measured and modeled cloud data.

Cloud      Apparent           Measured               Silver-lining factor ζ      Model
sample     temperature [°C]   radiance [W/(m²·sr)]   Tc = 10 °C    Tc = 0 °C     radiance [W/(m²·sr)]
Side-lit   10                 1.34                   1             1             1.16
Back-lit   30                 2.7                    20            23            2.7
Back-lit   50                 5                      50            51            5
Path       -                  -                      -             -             0.53

8.8.2 Model

In order to describe the forward scattering in the cloud’s silver lining,


define a scaling factor ζ called the silver-lining factor. The silver-lining
factor is the ratio by which a back-lit cloud transmits (scatters) more than
a side-lit cloud would reflect. It is calculated on the assumption that the
spectral signature is the same for reflected and forward-scattered light. The
silver-lining factor value is never less than unity. A cloud with no forward
scattering, i.e., a side-lit cloud, has a silver-lining value of unity. As the
forward scattering increases, the silver-lining factor also increases. Note that
the silver-lining factor is not directly based on any physical theory; it is an
empirical model.
Start from Equation (8.1), and remove all of the terms except those for self-radiance, path radiance, sky radiance, and reflected sunlight. The solar incidence angle cos θs = 1 and the object normal angle cos θa = 1 because the cloud is an extended target. Furthermore, the cloud is a diffuse spectral reflector, hence fr(θs, θv) = ρoλ, and εoλ(θv) is a diffuse spectral emissivity. The cloud is opaque, hence ρoλ = (1 − εoλ). The cloud radiance model is then
$$L_S = \overbrace{\int_0^{\infty} (1-\rho_{o\lambda})\, L_{\lambda}(T_o)\, \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{thermally emitted } L_{\text{self}}} + \overbrace{\zeta \psi \int_0^{\infty} \rho_{o\lambda}\, \epsilon_{s\lambda}\, L_{\lambda}(T_s)\, \tau_{so\lambda} \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{reflected sun } L_{\text{ref sun}}} + \overbrace{\int_0^{\infty} L_{\text{path}\lambda}\, S_{\lambda}\, d\lambda}^{\text{path radiance } L_{\text{path}}} + \overbrace{\int_0^{\infty} \int_{\text{sky}} \rho_{o\lambda}\, L_{\text{sky}\lambda}\, \tau_{a\lambda} S_{\lambda}\, d\Omega\, d\lambda}^{\text{diffuse reflected sky } L_{\text{ref sky}}}, \tag{8.14}$$

where To is the cloud temperature, and the remaining parameters are as


previously defined. The spectral cloud reflectance ρoλ is shown in Fig-
ure 8.9(a).
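Because Equation (8.14) is linear in ζ, the silver-lining factor follows from a simple rearrangement once the four band integrals are known. The sketch below uses hypothetical values for the ζ-independent terms; only the 0.53 W/(m²·sr) path radiance and the 2.7 W/(m²·sr) measurement come from Table 8.3:

# Band-integrated terms of Equation (8.14) [W/(m2.sr)]:
L_thermal = 1.05   # hypothetical thermally emitted term
L_sun_1 = 0.05     # hypothetical reflected-sun term evaluated with zeta = 1
L_path = 0.53      # path radiance (Table 8.3)
L_sky = 0.08       # hypothetical diffuse reflected-sky term
L_meas = 2.7       # measured back-lit cloud radiance (Table 8.3)

# Equation (8.14) is linear in zeta, so solve directly:
zeta = (L_meas - L_thermal - L_path - L_sky) / L_sun_1
print(zeta)        # ~21 for these inputs, cf. zeta ~ 20 in Table 8.3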

[Figure, panels (a) to (e), all over 3–6 µm: (a) sensor response, cloud reflectance, and cloud emissivity; (b) atmospheric transmittance for the cloud-to-observer path (2970 m), the laboratory calibration path (2 m), and the sun-to-cloud slant path to space (45-deg zenith angle); (c) path and sky radiance [W/(m²·sr·cm⁻¹)] for the Modtran Mid-latitude Summer atmosphere, 23-km visibility Rural aerosol, single-scatter aerosol, observer at 1500 m ASL; (d) side-lit-cloud apparent radiance components (total, thermal, reflected sunlight, path + sky); (e) severe back-lit-cloud apparent radiance components (total, reflected sunlight, thermal, path + sky).]
Figure 8.9 Cloud model analysis spectral input data and results.

Equation (8.14) was used to determine the silver-lining factor, using


the measured cloud radiance values and cloud temperatures of 0 ◦ C and
10 ◦ C. The results are shown in Table 8.3. The results indicate that the
model predicts side-lit cloud radiance 13% lower than the measurement.
It is evident that the silver-lining factors are fairly robust against cloud
temperature variations. These silver-lining factors compare favorably with
silver-lining factors in the visual spectral band, which can be up to 70 for
view angles close to the sun.

8.8.3 Relative contributions to the cloud signature

It is interesting to observe the relative contributions of the self emission


and path radiance to the reflected sunlight. Figures 8.9(d) and 8.9(e) in-
dicate each component individually. Path radiance plays a large role in
the magnitude of the signature. For side-lit clouds the thermal compo-
nent dominates, and for back-lit clouds the scattered or reflected sunlight
component dominates.

8.9 Case Study: Contrast Inversion/Temperature Cross-Over

The temperature of objects in a natural scene depends on the diurnal heat


flow, mainly following solar and seasonal patterns. Consider the thermal
history of objects in an open terrain:

1. Early in the morning, all objects have cooled down by radiation, con-
duction, and convection. Temperature contrast is low.

2. At sunrise, the absorbed sunlight raises the surface temperatures of


some objects very quickly. During the few minutes after sunrise, the
infrared signatures in the scene change very quickly. Some materials
have low thermal conductance between the surface layers and deeper
thermal capacitance. A thin layer on the surface of these objects heats
up very quickly because the heat flow into the surface is higher than the
heat flow out of the surface (by radiation and conduction). The effect is
quite dramatic during low-contrast conditions just before sunrise. This
is a surface-temperature rise effect, not a bulk temperature effect.

3. By mid-morning, the objects have received a strong injection of energy


from the sun. Some objects (e.g., rocks, metal, roads) respond quicker
to the solar influx and heat up more rapidly than other objects (vege-
tation, wet soil). The result is that the average bulk temperature is still
relatively low, but the thermal contrast is high.

[Figure: temperature (0–50 °C) versus diurnal cycle (0–24 h) for a sheet-metal roof, a grass field, and a glass panel; the curves cross near sunrise and again in the evening.]
Figure 8.10 Temperature inversion caused by time lag in object temperatures.

4. In mid-afternoon, all objects are hot, having absorbed heat all morning.
The contrast between the objects is, however, decreasing because most
objects have been heated by now.

5. In early evening, the objects with low heat capacity have lost much
of their energy by radiation, whereas objects with high heat capacity
are still relatively warm. As the temperature decreases, the contrast
increases slightly until all objects have lost most of their excess energy.

The temporal behavior of each object is unique because objects have


different heat capacities, thermal conductances, and heat exchange with
other objects. Each object therefore has a unique thermal history.
At least twice during this diurnal cycle, two objects may have the
same radiance and hence have zero contrast with respect to each other —
the edges between the objects disappear in a thermal image. This is known
as thermal or temperature crossover. In practice, in complex or rich scenes,
emissivity differences and small temperature differences between objects
remain, rendering at least some part of the object visible. However, if the
scene does not contain much detail (such as in a sand desert), even the
distinction between the sky and ground may fade.
The morning crossover effect is much stronger than the evening crossover
effect because by the early evening there are still significant amounts of
heat stored in the bulk of the objects’ bodies, leading to a wider tempera-
ture distribution in the scene.

8.10 Case Study: Thermally Transparent Paints

A paint with the required visual properties, but which is transparent in the
thermal bands, could be useful in a multi-spectral signature management

[Figure: spectral emissivity (2–12 µm) of a thin and a thick layer of the paint on the metal plate; the thin layer shows markedly lower emissivity in the 3–5-µm region.]
Figure 8.11 Emissivity of a metal plate painted with a thermally transparent paint.

(camouflage) design. The object would have the appropriate visual colors,
but it would reflect the ambient surroundings’ infrared signature from a
shiny metal subsurface.
One such thermally transparent paint was characterized after having
been applied to a clean, polished aluminium plate. The resulting emissiv-
ity value is shown in Figure 8.11. Note the low emissivity in the 3–5-µm
spectral band arising from the fact that the paint is transparent in the 3–
5-µm spectral band, and the polished subsurface is observed. Note also
that the 8–12-µm spectral band emissivity is still too high to be of practical
value.
The dependency of emissivity on paint thickness indicates that the
paint base is not fully transparent. A thick coating (> 150 µm) will ef-
fectively cover any metal surface underneath but with emissivity equal to
other paints.

8.11 Case Study: Sun-Glint

The 3–5-µm spectral band contains sunlight, as well as thermally emitted


energy. Figure 8.12 shows a measurement of sun-glint in a harbor, in the
direction of the sun, on a calm day. The probability of observing this severe
sun-glint is relatively low because the sun elevation angle must be low, the
sensor must be pointed in the direction of the sun, and the sea must be
calm. Most observations will result in less sun-glint than is demonstrated
here. It is evident in Figure 8.12 that sunlight dominates at the shorter
wavelengths. Sun-glint is also sometimes visible in the 8–12-µm spectral
band but at reduced levels relative to the 3–5-µm spectral and visual bands.
Sun-glint is less common in nonmaritime scenarios but does occur
from specularly reflecting objects, such as galvanized steel roof plates, in-
land bodies of water, ice caps, or similar shiny surfaces.

[Figure: spectral radiance (0–10 W/(m²·sr·µm)) over 3–5 µm, showing the sunlight reflection (glint) component tracking the scaled solar irradiance at the shorter wavelengths and the thermal self-exitance rising toward the longer wavelengths.]
Figure 8.12 Sunlight reflection off the sea surface (glint), sun at low elevation, looking in the direction of the sun.

Bibliography
[1] Pyradi team, “Pyradi Radiometry Python Toolkit,” http://code.google.com/p/pyradi.

[2] Willers, C. J., Willers, M. S., and Lapierre, F. D., “Signature modelling
and radiometric rendering equations in infrared scene simulation sys-
tems,” Proc. SPIE 8187, 81870R (2011) [doi: 10.1117/12.903352].

[3] Wolfe, W. L. and Zissis, G., The Infrared Handbook, Office of Naval
Research, US Navy, Infrared Information and Analysis Center, Envi-
ronmental Research Institute of Michigan (1978).

[4] Accetta, J. S. and Shumaker, D. L., Eds., The Infrared and Electro-Optical
Systems Handbook (8 Volumes), ERIM and SPIE Press, Bellingham, WA
(1993).

[5] Palmer, J. M. and Grant, B. G., The Art of Radiometry, SPIE Press,
Bellingham, WA (2009) [doi: 10.1117/3.798237].

[6] Jacobs, P. A., Thermal Infrared Characterization of Ground Tar-


gets and Backgrounds, SPIE Press, Bellingham, WA (1996) [doi:
10.1117/3.651915].

[7] Mahulikar, S. P., Sonawane, H. R., and Rao, G. A., “Infrared signature
studies of aerospace vehicles,” Progress in Aerospace Sciences 43(7-8),
218–245 (October-November 2007).

[8] Ferwerda, J. G., Jones, S. D., and Reston, M., “A free online refer-
ence library for hyperspectral reflectance signatures,” SPIE News-
Room (Dec 2006).

[9] Johansson, M. and Dalenbring, M., “Calculation of IR signa-


tures from airborne vehicles,” Proc. SPIE 6228, 622813 (2006) [doi:
10.1117/12.660108].

[10] Hudson, R. D., Infrared System Engineering, Wiley-Interscience, New


York (1969).

[11] Roblin, A., Baudoux, P. E., and Chervet, P., “UV missile
plume signatures model,” Proc. SPIE 4718, 344–355 (2002) [doi:
10.1117/12.478822].

[12] Neele, F. and Schleijpen, R., “UV missile plume signatures,” Proc.
SPIE 4718 (2002).

[13] Mahmoodi, A., Nabavi, A., and Fesharaki, M. N., “Infrared


image synthesis of desert backgrounds based on semiempirical
thermal models,” Optical Engineering 40(2), 227–236 (2001) [doi:
10.1117/1.1337037].

[14] Rapanotti, J., Gilbert, B., Richer, G., and Stowe, R., “IR sensor design
insight from missile plume prediction models,” Proc. SPIE 4718 (2002)
[doi: 10.1117/12.478816].

[15] Magalhães, L. B. and Alves, F. D. P., “Estimation of radiant intensity


and average emissivity of Magnesium/Teflon/Viton (MTV) flares,”
Proc. SPIE 7662, 766218 (2010) [doi: 10.1117/12.850617].

[16] van den Bergh, J. H. S., “Specular reflection,” private communication


(2004).

[17] Michalski, L., Eckersdorf, K., Kucharski, J., and McGhee, J., Tempera-
ture Measurement, 2nd Ed., John Wiley and Sons, New York (2001).

[18] Strojnik, M., Paez, G., and Granados, J. C., “Flame thermometry,”
Proc. SPIE 6307, 63070L (2006) [doi: 10.1117/12.674938].

[19] Liberman, M., Introduction to Physics and Chemistry of Combustion, Springer, Berlin (2008).

[20] Glassman, I., Combustion, Academic Press, San Diego, CA (1987).

Problems

8.1 Consider a simple model for an aircraft consisting of three opaque


components: a rectangular solid (the fuselage), a disc (the tailpipe),
and a solid cylinder (the plume), as shown in the following figure:

[Figure: aircraft model geometry. The rectangular fuselage (12 m long, 1.4 m × 1.4 m cross-section, nose facing to front) has side surfaces A1 and A3, bottom surface A2, top surface A4, head-on view surface A5, and rear view surface A6; the tailpipe disc B1 sits on the rear face, and the plume is the solid cylinder C1 (6 m long, 0.7 m diameter) extending behind it.]

with the following properties:

Surface               Temperature [°C]   Emissivity
A1 fuselage side      60                 0.8
A2 fuselage bottom    60                 0.8
A3 fuselage side      60                 0.8
A4 fuselage top       60                 0.8
A5 front view         75                 0.8
A6 rear view          75                 0.8
B1 tailpipe           600                0.8
C1 plume              500                spectral

Use Equation (D.4) to model the plume spectral emissivity with


(τs = 0, τp = 0.5, λc = 4.33 µm, Δλ = 0.45 µm, and s = 6).
The atmospheric transmittance at a range of 1000 m is given by
τ1000 = τ1λ (1 − τ2λ ), where the values τ1λ and τ2λ are calculated
using Equation (D.4). The τ1λ parameters are: (τs = 0, τp = 1,
λc = 4.33 µm, Δλ = 2 µm, and s = 6). The τ2λ parameters are:
(τs = 0, τp = 1, λc = 4.33 µm, Δλ = 0.35 µm, and s = 6).
Aspect angle is measured in the horizontal plane from the nose; a
zero aspect angle is looking at the aircraft nose; an aspect angle of
π is looking at the aircraft tail.
Do all spectral calculations in the spectral range from 3 µm to 6 µm
with an increment of 0.05 µm.
You can ignore reflected sunlight and the blue sky.
All solid surfaces on the plume have the same radiometric proper-
ties.

[Figure: atmospheric transmittance over a 1-km path length and plume emissivity versus wavelength, 3–6 µm.]

8.1.1 Calculate the transmittance from the above equation and confirm
that it agrees with the graph. Calculate and plot the attenuation
coefficient. Then calculate and plot the transmittance at the fol-
lowing ranges: 500 m, 1000 m, 2000 m, and 5000 m. [3]
8.1.2 Draw plots for each component (surface or plume), showing the
irradiance with the atmosphere present for all of the ranges on the
same plot. [3]
Calculate the total (integrated) irradiance received by a sensor
(from all of the surfaces facing the sensor) at all above ranges from
the aircraft, for aspect angles θ ∈ {0, π/2, π} rad, with no atmo-
sphere present. If more than one surface is visible, calculate and
show the contributions separately, and also the sum of all contri-
butions. [3]
8.1.3 Draw plots for each component (surface or plume), showing the
irradiance with the atmosphere not present, for all of the ranges
on the same plot. [3]
Calculate the total (integrated) irradiance received by a sensor
(from all of the surfaces facing the sensor) at all above ranges from
the aircraft, for aspect angles θ ∈ {0, π/2, π} rad, with the atmo-
sphere present. If more than one surface is visible, calculate and
show the contributions separately, and also the sum of all contri-
butions. [3]

8.2 An InSb-based sensor is pointed toward a spherical cloud in the


sky at a range of 1000 m. The cloud is illuminated by the sun,
from behind the sensor. Ignore any obscuration of the sunlight by
the sensor or the earth.
The infinitely large cloud has a temperature of 10 °C and an emis-
sivity of 0.8.
The sensor has a FOV of 10−5 sr. The sensor FOV is completely
filled by the cloud. The sensor optical aperture diameter is 50 mm.

The sensor detector responsivity can be modeled by Equation (D.5)


with (λc = 6 µm, k = 20, a = 3.5, and n = 4.3).
Use Equation (D.4) to model the sensor filter transmittance with
(τs = 0, τp = 1, λc = 4 µm, Δλ = 1.5 µm, and s = 20).
[Figure: detector responsivity [A/W] and filter transmittance versus wavelength, 0–6 µm.]

You may assume unity atmospheric transmittance and zero path


radiance.
The task is to calculate the total signature of the cloud as observed
by the sensor in the MWIR spectral band. The signature comprises
the reflected sunlight as well as the thermal self-exitance.

8.2.1 Write a mathematical formulation describing the detector current;


include flux transfer, detector response, etc. Describe all elements
in the model and provide the relevant numerical values for all
parameters. [3]
8.2.2 Apply the Golden Rules to the mathematical formulation. [3]
8.2.3 Build a computer model of the problem. Describe the structure
of the model and provide all numeric values (spectral and scalar).
Use a wavelength increment of 0.01 µm in the spectral range 0.3 to
6 µm. [4]
8.2.4 Compile graphs of the spectral irradiance from the sources on the
entrance aperture of the sensor. The graphs must show the two
sources separately, as well as the sum of the two, with no detec-
tor or filter weighting (i.e., in front of the sensor). Compile three
sets of graphs: (1) unfiltered irradiance (no detector or filter), (2)
detector-weighted irradiance (no filter), and (3) detector and fil-
ter weighted. Use the normalized detector response for spectral
weighting. [6]
8.2.5 Calculate the current through the detector when it is viewing the
cloud if no optical filter is present. [2]

8.2.6 Calculate the current through the detector when it is viewing the
cloud with the optical filter present. [2]
8.2.7 Comment on the use of the 3–6-µm spectral band for observations
during day- and nighttime. What consideration should be made
when performing temperature measurement in this spectral band?
[2]

8.3 A thin circular disk has a temperature of 300 K on the one side
and a temperature of 1000 K on the other side. The emissivity on
both sides is the same with a value of 1. The disk diameter is 1 m.
Assume an atmospheric transmittance of unity. The disk is viewed
against an infinitely large background with a uniform spatial tem-
perature distribution. Consider two scenarios: a background tem-
perature of 270 K and a background temperature of 330 K.
A sensor is located at a range of 1000 m from the disk. The sensor
spectral response is unity in the 3–5-µm spectral band and zero
elsewhere. The sensor FOV is 2 mrad full-apex angle.
The objective with this investigation is to plot the target contrast
intensity from all view directions (similar to the graphs shown in
Figure 8.1) for both background temperatures.
Prior to calculation, try to visualize the spherical intensity, and
draw a spherical contrast intensity diagram in freehand.
Calculate and plot the polar contrast intensity of the disk against
the two backgrounds, in the xy, yz, and zx planes. [6]
Use the tools in pyradi 1 (or any other tool of your choice) to cal-
culate and plot the three-dimensional spherical contrast intensity
of the disk against the two backgrounds. [10]
8.4 Extend the one-dimensional wedge light trap described in Prob-
lem 3.16.2 to two dimensions, with the wedge lines running along
the x axis. The light trap has a temperature of 1000 K. Calculate
and plot the hemispherical radiance distribution of the light trap
(use pyradi 1 or any other tool). [10]
Chapter 9
Electro-Optical System Analysis

It doesn’t matter how beautiful your theory is,


it doesn’t matter how smart you are.
If it doesn’t agree with experiment, it’s wrong.
Richard P. Feynman

Introduction

In this chapter different electro-optical systems are briefly defined and an-
alyzed to demonstrate the application of the radiometric modeling tech-
niques developed in the early chapters of the book. In the work presented
here, the emphasis is on the radiometry and methodology, rather than on
the detail parameters. Some of the case studies may appear somewhat
contrived, but these are still useful hands-on training material. In any real
design, considerably more effort will be expended: the level of detail will
be deeper, and the models much more comprehensive. One example of
a more comprehensive application is the simulation system described in
Appendix B.

9.1 Case Study: Flame Sensor

The flame sensor must detect the presence or absence of a flame in its FOV.
The sensor is pointed to an area just outside a furnace vent, against a clear-
sky background. The sensor must detect a change in signal indicating the
presence of a flame at the vent. This problem was defined to illustrate the
calculation of spectral integrals. This case study is also a worked example
in Section D.5.1 and on the pyradi toolkit website. 1
The sensor has an aperture area of 7.8 × 10−3 m2 and a FOV of 1 × 10−4
sr. The sensor filter spectral transmittance is shown in Figure 9.1(a). The
spectral transmittance can be calculated with Equation (D.4) and parame-
ters (τs = 0.0001, τp = 0.9, s = 12, Δλ = 0.8 µm, λc = 4.3 µm). The InSb


[Figure, panels (a) to (c), all over 3–5 µm: (a) relative magnitudes of the filter transmittance, atmospheric transmittance, detector response, and flame emissivity; (b) path radiance out to space [W/(m²·sr·cm⁻¹)]; (c) irradiance from flame and path at the sensor [W/(m²·cm⁻¹)], with the flame irradiance well above the path irradiance near 4.3 µm.]
Figure 9.1 Flame sensor spectral variables and results.

detector has a peak responsivity of 2.5 A/W and spectral response shown
in Figure 9.1(a). The preamplifier transimpedance is 1 × 104 V/A.
The flame area is 1 m2 . The flame temperature is 1000 ◦ C. The emissiv-
ity is 0.1 over most of the spectral band due to carbon particles in the flame.
At 4.3 µm there is a strong emissivity rise due to the hot CO2 in the flame;
see Figure 9.1(a). The emissivity can be calculated with Equation (D.4) and
parameters (τs = 0.1, τp = 0.7, s = 6, Δλ = 0.45 µm, λc = 4.33 µm).
The distance between the flame and the sensor is 1000 m. The at-
mosphere is similar to the Modtran™ Tropical climatic model. The path
is oriented such that the sensor stares out to space at a zenith angle of
88.8 deg. The components’ spectral transmittance is shown in Figure 9.1(a)
and the path radiance in Figure 9.1(b).

Table 9.1 Summary results for flame sensor analysis.

Characteristic Value Unit


Path irradiance 0.054 mW/m2
Path voltage 10.4 mV
Flame irradiance 0.329 mW/m2
Flame voltage 64 mV
Flame + path irradiance 0.382 mW/m2
Flame + path voltage 75 mV

The spectral peak in the flame emissivity and the dip in atmospheric
transmittance are both centered around the 4.3-µm CO2 band. The strong
spectral variations in both target signature and atmospheric transmittance
necessitates multi-spectral calculation (see Section 2.6.5).
From Equation (6.16), the signal caused by the atmospheric path radi-
ance is given by
$$v_{\text{path}} = k\,Z_t\,\mathcal{R}\,\Omega A_1 \int_0^{\infty} L_{\text{path}\lambda}\, S_{\lambda}\, d\lambda, \tag{9.1}$$

where Sλ = Rλτfλ. Note that the path radiance terms do not have an


atmospheric transmittance factor because the radiance is the net effect of
the atmosphere [see Equation (4.10)]. The signal caused by the flame is
given by
$$v_{\text{flame}} = \frac{k\,Z_t\,\mathcal{R}\,A_0 A_1}{R_{01}^{2}} \int_0^{\infty} \epsilon_{0\lambda}\, L_{0\lambda}\, \tau_{a\lambda}\, S_{\lambda}\, d\lambda, \tag{9.2}$$

where the variables are defined as for Equation (6.16). A0 is the flame area
because cos θ0 = 1, and A1 is the sensor aperture area. These equations
were evaluated as described in Section D.5.1, and yielded the results shown
in Table 9.1.
It is clear that the flame signal is several times larger than the path
radiance signal, even though the flame only fills 1% of the sensor FOV.
The severity by which the atmosphere attenuates the CO2 exitance from
the flame is shown in Figure 9.1(c).
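The flame term of Equation (9.2) reduces to one weighted spectral integral; the sketch below (a simplified stand-in for the Section D.5.1 listing) shows the structure, with the spectral tables loaded from a hypothetical file:

import numpy as np

def planck_radiance(wl, T):
    c1, c2 = 1.1910429e8, 1.4387752e4
    return c1 / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))

wl = np.linspace(3.0, 5.0, 401)            # [um]
eps, tau_a, S = np.load('flame_spec.npy')  # hypothetical tables on wl

k, Zt, R = 1.0, 1.0e4, 2.5      # gain, transimpedance [V/A], peak resp. [A/W]
A0, A1, R01 = 1.0, 7.8e-3, 1000.0   # areas [m2] and range [m] from the text

E_flame = (A0 / R01**2) * np.trapz(eps * planck_radiance(wl, 1273.15)
                                   * tau_a * S, wl)
v_flame = k * Zt * R * A1 * E_flame   # cf. the 64 mV in Table 9.1
print(E_flame, v_flame)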

9.2 Case Study: Object Appearance in an Image

Section 7.6 developed the theory for observing a simple target against a
background through a radiant medium. This section describes the model
used to calculate practical values for this scenario. The section closes with a

prediction of the meteorological range of the medium given the irradiance


contrast versus range of white and black targets.

An opaque target of fixed size is viewed against an infinitely large


opaque background at path lengths from short range to very long range.
The target and background are parallel flat surfaces located at the same
distance from the sensor and collectively fill the complete FOV. The target
and background surfaces are both facing the sensor, with near-horizontal
normal vector. The sun is located at zero zenith angle (i.e., vertically above
the target).
A variation of Equation (8.1) is used, ignoring transmitted background
flux and diffusely reflected sky and background radiance. The retained
terms are thermal self-exitance, reflected sunlight, and atmospheric path
radiance. The target and background BRDF, fr (θs , θv ) are assumed Lam-
bertian. For this analysis the pixel FOV observes a combination of target
radiance and background radiance, with the ratio between target and back-
ground varying with range:
$$\begin{aligned}
E_S ={}& \overbrace{\Upsilon \omega_p \int_0^{\infty} \epsilon_{t\lambda}(\theta_v)\, L_{\lambda}(T_t)\, \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{target thermally emitted } L_{\text{self T}}} \\
&+ \overbrace{(1-\Upsilon)\, \omega_p \int_0^{\infty} \epsilon_{b\lambda}(\theta_v)\, L_{\lambda}(T_b)\, \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{background thermally emitted } L_{\text{self B}}} \\
&+ \overbrace{\psi \cos\theta_s\, \Upsilon \omega_p \int_0^{\infty} (1-\epsilon_{t\lambda})\, \epsilon_{s\lambda}\, L_{\lambda}(T_s)\, \tau_{so\lambda} \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{target reflected sun } L_{\text{ref sun T}}} \\
&+ \overbrace{\psi \cos\theta_s\, (1-\Upsilon)\, \omega_p \int_0^{\infty} (1-\epsilon_{b\lambda})\, \epsilon_{s\lambda}\, L_{\lambda}(T_s)\, \tau_{so\lambda} \tau_{a\lambda} S_{\lambda}\, d\lambda}^{\text{background reflected sun } L_{\text{ref sun B}}} \\
&+ \overbrace{\omega_p \int_0^{\infty} L_{\text{path}\lambda}\, S_{\lambda}\, d\lambda}^{\text{atmospheric path radiance } L_{\text{path}}},
\end{aligned} \tag{9.3}$$
where ES is the irradiance at the sensor entrance aperture in the pixel FOV ωp, εtλ is the target emissivity, Tt is the target temperature, εbλ is the background emissivity, and Tb is the background temperature. If the target area is At and the distance between the sensor and the target is R, then Υ = At/(R²ωp), and 0 ≤ Υ ≤ 1 is the fraction of the FOV that is filled with the target. The remaining terms are defined in Table 8.1.
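The range behavior of Equation (9.3) is driven by the fill fraction Υ; the fragment below shows this bookkeeping in isolation, with hypothetical band-integrated radiances standing in for the full spectral terms:

omega_p = 1.0e-6        # pixel FOV [sr]
A_t = 0.2               # target area [m2]
L_t, L_b = 50.0, 10.0   # hypothetical band radiances [W/(m2.sr)]

for R in [100.0, 1.0e3, 1.0e4, 1.0e5]:    # range [m]
    # Fraction of the pixel FOV filled by the target, clipped to [0, 1].
    Y = min(A_t / (R**2 * omega_p), 1.0)
    E = omega_p * (Y * L_t + (1.0 - Y) * L_b)   # path terms omitted here
    print(R, Y, E)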
The first analysis using Equation (9.3) is to investigate the relative
contributions of the target, background radiance, and path irradiance to

[Figure, four panels of pixel irradiance [W/m²] versus range (0.1–100 km), each showing the target, background, path, and total contributions: hot black target in the 0.4–0.75-µm band (Tt = 1773 K, εt = 0.9, Tb = 300 K, εb = 0.05); cold black target in the 0.4–0.75-µm band (Tt = 300 K, εt = 0.9, Tb = 300 K, εb = 0); hot black target in the 3.5–4.5-µm band (Tt = 500 K, εt = 0.9, Tb = 300 K, εb = 0.8); hot black target in the 8–12-µm band (Tt = 370 K, εt = 0.9, Tb = 300 K, εb = 0.9).]
Figure 9.2 Pixel irradiance as function of object-to-observer distance for varying target temperatures and sensor spectral bands.

the total irradiance in the pixel. The analysis was done for the following
Modtran™ atmosphere: Tropical profiles, 23-km visibility Rural aerosol
(‘MIE Generated’ aerosol phase function), executed in the ‘Radiance with
Scattering’ mode, with multiple-scattering for flux at the observer. The
Isaac’s two-stream multiple-scattering algorithm is used. The observer is
located at sea level, viewing a slant path with 88-deg zenith angle (near
horizontal). The pixel FOV is 1 µsr, and the target area is 0.2 m2 . The target
and background temperature and emissivity properties are indicated on
the graphs in Figure 9.2. The code to calculate these graphs is included in
Section D.5.3.
From Figure 9.2 it is evident that the target determines the pixel irra-
diance for targets at close range, whereas the atmospheric path radiance
dominates the signature for targets at long range. The ‘cold-black’ target
in the visual band presents an interesting case in that at close target range
the pixel irradiance is low, because there is no sunlight reflection from the
target. At intermediate ranges the white reflective background dominates
the signature. At the risk of generalizing based on specifics, at least for
the test cases shown here, the atmospheric path radiance dominates from
10 km onwards. In the visual and LWIR spectral bands the path radiance
magnitude increase exceeds an order of magnitude over the ranges con-
sidered. In the MWIR spectral band, this increase is considerably less than

[Figure: contrast (0.001–10, log scale) versus range (0.1–100 km); the curve crosses the 0.02 threshold contrast at 23.3 km.]
Figure 9.3 Contrast between target pixel and adjacent background pixel in the visual spectral band.

an order of magnitude, indicating a more-favorable scenario.


In the second analysis, the meteorological (Koschmieder) range is cal-
culated from the pixel irradiance values. Now the background around the
target becomes the object of interest because the small area target has an in-
significant contribution beyond 10 km. The scenario shown in the top right
graph in Figure 9.2 is a black target against a white background. A similar
graph was also calculated for the target against a black background. Us-
ing Equation (4.14) and the two (black and white) background irradiance
graphs, the contrast shown in Figure 9.3 was calculated using
$$C_R = \frac{E_{\text{white background}} - E_{\text{black background}}}{E_{\text{black background}}}. \tag{9.4}$$

Equation (9.4) indicates that the predicted meteorological range de-


pends on the magnitude of the incident solar irradiance Ewhite background .
Using the Koschmieder threshold contrast definition in Section 4.6.10, the
meteorological range was calculated as 23.3 km but for a very specific
background orientation. If the background surface is horizontal, fully fac-
ing the sun, cos θs = 1, and Ewhite background is large, leading to a very long
meteorological range. For a background surface tilted such that the angle between the surface normal vector and the sun direction is 84 deg (near-vertical
surface), cos θs = 0.1, and the desired result is obtained. Is this model ma-
nipulation reasonable? Consider the aviation definition of meteorological
range: the path should be near the horizontal, observing objects on the
horizon. Under this condition, the visible object surfaces will be closer to
the vertical than to the horizontal. Hence, the model should make provi-
sion for near-vertical surfaces and then yield reasonable results.
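The 23.3-km result is simply the range at which the contrast of Equation (9.4) decays to the Koschmieder threshold of 0.02. A sketch, assuming the two background-irradiance curves have been tabulated on a common range grid:

import numpy as np

rng = np.logspace(-1, 2, 200)      # range grid [km]
E_white = np.load('E_white.npy')   # hypothetical white-background curve
E_black = np.load('E_black.npy')   # hypothetical black-background curve

CR = (E_white - E_black) / E_black   # Equation (9.4)
# Contrast decays with range; interpolate the 0.02 threshold crossing.
print(np.interp(0.02, CR[::-1], rng[::-1]))   # ~23.3 km for this scenario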

Table 9.2 Solar-cell measurements.

Sun Lamp
Open-circuit voltage [V] 1.59 1.2
Short-circuit current [A] 0.107 0.06
Load line for fan
Load voltage [V] 0.9 0.54
Load current [A] 0.09 0.054

9.3 Case Study: Solar Cell Analysis

9.3.1 Observations

In a simple experiment, an inexpensive, toy silicon solar-cell panel was


used to power a small fan when illuminated by a lamp and the sun. The
panel had eight elements connected in an unknown network. The solar
cell had a fill factor of about 50%, estimated by inspection. The estimated
solar panel spectral responsivity is shown in Figure 9.4. The silicon solar
cells did not have an antireflection coating.
The Modtran™ Tropical atmosphere, with 30-deg zenith angle slant
path to space and 20-km visibility Urban aerosol attenuation, was used in
the analysis. The spectral transmittance is shown in Figure 9.4.
The panel was illuminated by an incandescent lamp and the sun. In
each case three load conditions were recorded: (1) the open-circuit voltage
(i.e., no load), (2) the short-circuit current (i.e., no voltage), and (3) the
fan load. For the two measurements, the atmospheric transmittance was
assumed spectrally constant at τlamp = 1 and τsun = 0.7. The recorded
values are shown in Table 9.2.
The panel was illuminated by a 60-W incandescent lamp (color tem-
perature of approximately 2650 K) at a perpendicular distance of 60 mm
from the center of the panel (Figure 9.4). The lamp filament emissivity in
the visual spectrum can be assumed to be 0.5. The lamp was housed in a
black painted fitting with a reasonable (but not perfect) matte finish. There
was a slight obscuration of the panel corners by the lamp housing.

[Figure: geometry of the light source at 60 mm above the 104 mm × 90 mm solar panel (cell area Ac, subtended angle θc, source distance Rsc), together with the spectral atmospheric transmittance and the solar cell responsivity [A/W] over 0.2–1.2 µm.]
Figure 9.4 Solar panel and lamp geometry, spectral atmospheric transmittance, and spectral solar cell responsivity.

9.3.2 Analysis

9.3.2.1 Solid angles and source areas

The sun area is calculated by simple mathematics: Asun = πR²sun = 1.5 × 10¹⁸ m². The solid angle of the solar panel as seen from the sun is approximated by the area of the panel divided by the distance to the sun squared because the panel area is small compared to the distance to the sun:
$$\Omega_{\text{panel from sun}} = \frac{104 \times 90 \times 10^{-6}}{(149 \times 10^{9})^{2}} = 421 \times 10^{-27}\ \text{sr}. \tag{9.5}$$

The lamp is modeled as an isotropic point target with no cos θ weight-


ing. The solid angle of the solar panel, as seen from the lamp, is calculated
using the integral in Equation (2.13):
$$\Omega_{\text{panel from lamp}} = \int_W \int_D \frac{1}{H^{2}} \left( \frac{H}{\sqrt{w^{2} + d^{2} + H^{2}}} \right)^{3} dw\, dd = 1.63\ \text{sr}. \tag{9.6}$$
The integral is determined over the size of the solar panel: −0.045 m ≤ w ≤ 0.045 m and −0.052 m ≤ d ≤ 0.052 m. For more details on how to calculate the integral, see Section D.5.8.
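The double integral of Equation (9.6) evaluates in one scipy call; a quick numerical check of the quoted value:

from scipy.integrate import dblquad

H = 0.060   # lamp-to-panel distance [m]
# Integrand of Equation (9.6): H / (w^2 + d^2 + H^2)^(3/2)
omega, _ = dblquad(lambda w, d: H / (w**2 + d**2 + H**2)**1.5,
                   -0.052, 0.052,                       # d limits [m]
                   lambda d: -0.045, lambda d: 0.045)   # w limits [m]
print(omega)   # ~1.6 sr, consistent with Equation (9.6)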
The lamp area is calculated with a little trick: the Stefan–Boltzmann
law (Section 3.1.3) states that the total exitance of a blackbody is given by M(T) = 5.67 × 10⁻⁸ T⁴, but the total power is known to be 60 W (the lamp rating). For this calculation, assume that the only energy loss is through radiation, with no heat loss due to convection or conduction through the filament's stem wires. From Φ = M(T)Alamp, it follows that the total

radiating surface area must be


$$A_{\text{lamp}} = \frac{60}{5.67 \times 10^{-8}\,(2650)^{4}} = 21 \times 10^{-6}\ \text{m}^{2}, \tag{9.7}$$
corresponding to a square area of about 4.6 mm by 4.6 mm, which seems
reasonable considering the length and coiled shape of the filament. How
robust is the lamp temperature assumption and its effect on the solution?
The exitance varies with temperature to the fourth power, so a variation in
temperature could have a significant effect in the result. If the lamp tem-
perature is 2850 K, the filament area drops only a little to 4 mm by 4 mm.
Thus, the result is reasonably robust against error in lamp-temperature
estimation.

9.3.2.2 Radiometry calculations

Following the workflow in Section 6.6 the solar cell current can be written
as follows:
$$\begin{aligned}
i_{ph} &= f_{\text{fill}}\,\mathcal{R} \int_{\lambda=0}^{\infty}\!\int_{\text{source}}\!\int_{\text{cell}} \frac{\epsilon_{0\lambda} L_{0\lambda}\,\tau_{a\lambda} S_{\lambda}\; dA_1 \cos\theta_0 \cos\theta_1\, dA_0\, d\lambda}{R_{01}^{2}} \\
&= f_{\text{fill}}\,\mathcal{R} \int_{\text{source}} \left( \int_{\lambda=0}^{\infty} \epsilon_{0\lambda} L_{0\lambda}\,\tau_{a\lambda} S_{\lambda}\, d\lambda \right) \left( \int_{\text{cell}} \frac{dA_1 \cos\theta_0 \cos\theta_1}{R_{01}^{2}} \right) dA_0 \\
&= f_{\text{fill}}\, I_{\text{eff}}\, A_0\, \Omega_1,
\end{aligned} \tag{9.8}$$

where iph is the photon-induced current, ffill is the fill factor (how much of the panel is active detector area), R is the detector responsivity scaling factor with units [A/W], the spatial integrals and cosine terms are important in the case of the lamp, A0 is the area of the source (sun or lamp), A1 is the total physical area of the solar panel, and Sλ = Rλτfλ is the solar-cell spectral responsivity.
The fill factor is the fraction of the panel that is able to convert optical
power to electrical power. In this case, the panel consisted of a number of
irregular pieces of silicon that filled only 50% of the area. This implies that
only half of the physical area can respond to the flux and create electricity.
The lamp illumination did not cover the full area of the source because
it was somewhat shielded by the lamp cover. It is estimated that 80% of
the solar cell was illuminated.
Note that the solid angle calculated above is for the whole panel. How-
ever, the panel consists of eight cells, of which some are in series. It is
shown below that the panel was wired as two parallel circuits, each of
four cells in series. It follows that, for the purpose of current generation,
Table 9.3 Summary results for solar cell analysis.

Factor               Lamp           Sun              Unit
Area illuminated     80%            100%
ffill                0.5            0.5
Ieff                 1.84 × 10⁴     1.64 × 10⁶       A/(m²·sr)
A0                   2.15 × 10⁻⁵    1.52 × 10¹⁸      m²
Ωc                   1.62/4         4.22 × 10⁻²⁵/4   sr
Calculated current   0.064          0.131            A
Measured current     0.06           0.107            A
Difference           6.7%           22.4%

the effective area, and therefore the solid angle, is 0.25 of the totals calcu-
lated above. The exact factor depends on where each cell is located in the
panel, but for this calculation the value of 0.25 is used.
Using Equation (9.8), the calculated values shown in Table 9.3 are
somewhat higher than the measured values. The difference could be at-
tributed to errors in spectral response, fill factor, lamp temperature/area,
or atmospheric transmittance.
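The last line of Equation (9.8) can be verified directly from the Table 9.3 entries. The sketch below (values transcribed from the table; the function name is illustrative) reproduces the calculated currents:

```python
# Check of the last line of Equation (9.8), i_ph = f_fill I_eff A0 Omega,
# using the Table 9.3 values; a minimal sketch (cell_current is illustrative).
def cell_current(frac_illum, f_fill, I_eff, A0, omega):
    """Photocurrent for one of the two parallel four-cell circuits."""
    return frac_illum * f_fill * I_eff * A0 * omega

i_lamp = cell_current(0.80, 0.5, 1.84e4, 2.15e-5, 1.62 / 4)
i_sun  = cell_current(1.00, 0.5, 1.64e6, 1.52e18, 4.22e-25 / 4)
print(f'lamp: {i_lamp:.3f} A, sun: {i_sun:.3f} A')   # ~0.064 A and ~0.131 A
```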
The solar cell efficiency is the ratio of flux converted to electricity to
the total incident flux:
$$\eta = \frac{\int_{\lambda=0}^{\infty} \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda} S_\lambda\, d\lambda}{\int_{\lambda=0}^{\infty} \epsilon_{0\lambda} L_{0\lambda} \tau_{a\lambda}\, d\lambda}. \quad (9.9)$$

Using Equation (9.9), the efficiency was calculated. In this analysis the
effect of atmospheric transmittance was ignored by setting τaλ = 1. The
value was found to be 1.4% for the lamp and 3.8% for the sun. It is evident
that a very small portion of the incident flux is converted to electrical en-
ergy. The Shockley–Queisser limit for single-layer silicon p-n junction solar
cells sets the theoretical limit at 33.7%. Current commercial and research
solar cells achieve between 22% and 25% efficiency.

9.3.2.3 Configuration

Under ideal conditions, the highest output voltage is less than 0.58 V per
single cell. Because the highest observed output voltage for this panel was
1.59 V, it is inferred that there must be four cells in series. In order to
obtain the best load and generation distribution, the cells are connected
as shown in Figure 9.5. Each cell acts as a current generator on its own.
Small differences in photocurrents lead to small differences in cell voltages,
which are equalized by the voltage drop across the series resistors and cells.

Figure 9.5 Solar panel circuit configuration.

9.3.2.4 Cell model

It is shown in Section 5.9.2 that the current generated by the solar cell is
related to the cell voltage by the I-V equation of the form [Equation (5.128)]:

$$I_{\rm load} = I_{\rm sat}\left(e^{qV/(kT\beta)} - 1\right) - I_{ph}. \quad (9.10)$$

Under short-circuit conditions it follows that


$$I_{\rm sc} = I_{\rm sat}\left(e^{0} - 1\right) - I_{ph}, \quad (9.11)$$


and under open-circuit conditions

$$0 = I_{\rm sat}\left(e^{qV_{\rm oc}/(kT\beta)} - 1\right) - I_{ph}. \quad (9.12)$$

Combining Equations (9.11) and (9.12),



$$0 = I_{\rm sat}\left(e^{qV_{\rm oc}/(kT\beta)} - 1\right) + I_{\rm sc}, \quad (9.13)$$

and combining Equations (9.11) and (9.10),



$$I_{\rm load} = I_{\rm sat}\left(e^{qV_{\rm load}/(kT\beta)} - 1\right) + I_{\rm sc}. \quad (9.14)$$

Then, by combining Equations (9.13) and (9.14),



$$-I_{\rm load} = I_{\rm sat}\left(e^{qV_{\rm oc}/(kT\beta)} - e^{qV_{\rm load}/(kT\beta)}\right). \quad (9.15)$$

For β = 1 (ideal diffusion diode) it follows that q/(kTβ) = 38.65, so that (from the sun short-circuit measurement) from Equation (9.13),

$$I_{\rm sat} = \frac{0.054}{e^{38.6 \times 0.397} - 1} = 11 \times 10^{-9}\ \text{A}. \quad (9.16)$$
Figure 9.6 Solar cell I-V curve and load line for Isc = 0.03 and Isc = 0.054, lamp illumination, and sun illumination.

From the measurements and above calculations, the load current for the
sun-illuminated cell is related to the cell voltage as

$$I_{\rm load} = 11 \times 10^{-9}\left(e^{38.6\, V_{\rm load}} - 1\right) - 0.054. \quad (9.17)$$

Using Equation (9.17), the sun-induced load current predicted for a


cell voltage of 0.225 V is 0.045 A (per cell); this result is within 17% of the measured value of 0.9 V and 0.09 A (for the whole panel).
By similar analysis, the load current for the lamp illuminated cell is
related to the cell voltage as

$$I_{\rm load} = 276 \times 10^{-9}\left(e^{38.6\, V_{\rm load}} - 1\right) - 0.03. \quad (9.18)$$

Using Equation (9.18), the lamp-induced load current predicted for a cell voltage of 0.135 V is 0.03 A (per cell). This result is within 7% of the measured value of 1.2 V and 0.06 A (for the whole panel). The two calculated values for Isat were averaged by the geometric mean $\sqrt{I_{\rm sat\,sun}\, I_{\rm sat\,lamp}} = 55 \times 10^{-9}$ A. Then, for any illumination condition with a short-circuit current of Isc, the solar cell's load line is given by

$$I_{\rm load} = 55 \times 10^{-9}\left(e^{38.6\, V_{\rm load}} - 1\right) - I_{\rm sc}. \quad (9.19)$$

Figure 9.6 shows the I-V curves for the two measurements described
in the problem statement (for a single cell). The solar cell is illuminated by
a lamp and the sun, while the same load is applied in both cases.
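The load line of Equation (9.19) is easily explored numerically. The sketch below, assuming numpy, evaluates the single-cell model for the two short-circuit currents and recovers the open-circuit voltage in each case:

```python
# Single-cell I-V model of Equation (9.19) for the two illumination cases;
# a minimal sketch assuming numpy.
import numpy as np

def i_load(v_load, i_sc, i_sat=55e-9, q_kT=38.6):
    """Diode I-V model: I_load = I_sat (exp(q V/(kT)) - 1) - I_sc."""
    return i_sat * (np.exp(q_kT * v_load) - 1.0) - i_sc

v = np.linspace(0.0, 0.4, 401)
for i_sc in (0.030, 0.054):          # lamp and sun short-circuit currents [A]
    i = i_load(v, i_sc)
    v_oc = v[np.argmin(np.abs(i))]   # open-circuit voltage: zero crossing
    print(f'Isc = {i_sc:.3f} A  ->  Voc ~ {v_oc:.3f} V')
```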
The analysis in this section indicates how a little information can be
used to derive a model. Once the model is developed, the remaining chal-
lenge is validation that the model is indeed correct (Section 7.9). To some
extent this can be done with the information used to develop the model,
but better confidence requires additional information and analysis.

9.4 Case Study: Laser Rangefinder Range Equation

In this section the range equation for a laser rangefinder is derived. Be-
cause the radiometry techniques developed in this book do not cover co-
herent sources, an explanation is in order. In this rangefinder application
the laser is used as a source with very high radiance. Laser rangefind-
ers operate on the principle that light travels approximately 300 mm dis-
tance in one nanosecond. The time elapsed between the transmission of
the pulse and the reception of the pulse is used to determine the distance.
This specific analysis does not require the laser to be considered as a coher-
ent source, and hence the radiometry techniques in this book can be used
here. These techniques cannot be readily used for problems concerned
with spatial or temporal coherence.
In most laser rangefinders the transmitter and receiver are co-located
and co-axial on the same optical path. If a laser pulse is directed to an
object and reflected back from the object, the elapsed time between the
departure and arrival of the reflected light pulse is an indication of the
distance to the object. The objective of this analysis is to derive an expression for the SNR of the laser rangefinder. The SNR can then be used
to investigate the effect of several design parameters on system perfor-
mance. For another approach to the range equation for laser rangefinders,
see Kaminsky. 2

9.4.1 Noise equivalent irradiance

The noise equivalent irradiance (Section 6.7) in the receiver is given by



$$E_n = \frac{\sqrt{\Delta f\, A_d}}{D^{*} A_1 \tau_a}, \quad (9.20)$$
where Ad is the detector area, Δ f is the noise bandwidth in the receiver,
A1 is the receiver aperture area, and τa is the receiver filter transmittance.
The D ∗ values can include all of the relevant noise terms such as detector
noise, amplifier noise, background induced noise, and system noise. The
method whereby these noises are all combined into a single D ∗ is described
in Section 5.3.12. For the purposes of this chapter, we will only work with
a single D ∗ , assuming that all noise sources are incorporated in this value.
Figure 9.7 Laser rangefinder layout.

9.4.2 Signal irradiance

The geometrical relationship between the laser transmitter, the object, and
the laser receiver is shown in Figure 9.7. The laser power or flux is de-
noted by Φ L (in watts), the distance from the laser to the object is R LO ,
and the distance from the object to the receiver is ROR . The illuminated
object-surface normal vector makes an angle θ LO with the laser illumina-
tion direction, and an angle θOR with the receiver sightline direction.
Several assumptions are made in order to simplify the
problem and emphasize the methodology. The receiver and transmitter
fields are co-axial, and the receiver and transmitter are located at the same
position. The receiver and transmitter are coincident, hence the distance
from the object to the laser transmitter is equal to the distance from the
object to the laser receiver, and the same atmospheric transmittance applies
to both optical paths.
The laser beam radiance is calculated from the definition of radiance
in Equation (2.19), requiring the optical power, beam area at the source and
beam solid angle (divergence). In order to use this very simple equation,
the Gaussian shape properties of the laser beam is discarded for two very
simple uniform shapes. The laser beam angular radiance distribution is
assumed to be uniform within the top-hat-shaped beam divergence profile
(e.g., peak normalized divergence). The laser beam power distribution is
assumed to be uniform across the area of the beam (e.g., peak normalized
area). Using Equation (2.19) and these two simplifications, the radiance
Electro-Optical System Analysis 323

can be written as
$$L_L = \frac{\Phi_L}{\Omega_L A_L}, \quad (9.21)$$
where Ω L is the laser beam solid angle, and A L is the laser beam cross-
section area at the laser source. This simplification might not satisfy the
required mathematical rigor, but it does provide order-of-magnitude radi-
ance estimates.
From Equations (2.26) and (9.21), the irradiance on the object is then
$$E_O = \frac{L_L A_L \tau_{LO} \cos\theta_L \cos\theta_{LO}}{R_{LO}^2} = \frac{\Phi_L \tau_{LO} \cos\theta_{LO}}{\Omega_L R_{LO}^2}, \quad (9.22)$$
where it is assumed that cos θ L = 1 because the laser beam radiates per-
pendicularly from the laser mirror. The uncooperative target object can
have any orientation relative to the laser beam, denoted by θ LO .

9.4.3 Lambertian target reflectance

The laser pulse falling onto the object is reflected by the object. Most
natural surfaces have diffuse reflectance and scatter energy in all directions
(a Lambertian source). The reflected laser spot on the target object has a
radiance of
$$L_O = \frac{\rho E_O}{\pi} = \frac{\rho \Phi_L \tau_{LO} \cos\theta_{LO}}{\Omega_L \pi R_{LO}^2}. \quad (9.23)$$

The irradiance, caused by the reflected pulse, at the laser receiver is


then, from Equation (2.26),
$$E_R = \frac{\Phi_R}{dA_1} = \frac{L_O A_O \cos\theta_{OR}\, \tau_{OR}}{R_{OR}^2}, \quad (9.24)$$
where AO is the area illuminated by the laser that is visible to the sensor.
Further manipulation using Equation (9.23) leads to
$$E_R = \frac{\Phi_L \tau_{LO} \cos\theta_{LO}\, \rho A_O \cos\theta_{OR}\, \tau_{OR}}{\Omega_L \pi R_{OR}^2 R_{LO}^2} = \frac{\rho \Phi_L \tau_{LO} \cos\theta_{LO}\, A_O \cos\theta_{OR}\, \tau_{OR}}{\pi \Omega_L R_{LO}^2 R_{OR}^2}. \quad (9.25)$$
By co-locating the laser transmitter and the receiver ($R_{LO} = R_{OR} = R$, $\tau_{LO} = \tau_{OR} = \tau_a$, and $\cos\theta_{LO} = \cos\theta_{OR}$), the expression for irradiance simplifies to

$$E_R = \frac{(\rho \cos^2\theta_{LO}\, A_O)\, \Phi_L \tau_a^2}{\pi \Omega_L R^4}, \quad (9.26)$$
where R is the distance between the object and the rangefinder. Equa-
tion (9.26) is similar to the radar range equation. The product ρ cos2 θ LO AO
can be regarded as the target optical cross-section. In the radar case, the
optical cross section has a fixed magnitude irrespective of distance between
the laser and target object. This is also true for a laser rangefinder illumi-
nating an airborne object where there is no reflective background. If the
object is observed against a terrain background, the terrain background
also contributes to the reflected signal (depending on the geometry).

9.4.4 Lambertian targets against the sky

The rangefinder irradiance SNR is the ratio of signal strength [Equation (9.26)]
to noise [Equation (9.20)], and is given by
$$\frac{E_R}{E_n} = \frac{\rho \Phi_L \tau_a^2 \cos^2\theta_{LO}\, A_O / (\pi \Omega_L R^4)}{\sqrt{\Delta f A_d} / (D^* A_1 \tau_a)} \quad (9.27)$$

$$\phantom{\frac{E_R}{E_n}} = \frac{\rho \Phi_L \tau_a^2 \cos^2\theta_{LO}\, A_O\, D^* A_1 \tau_a}{\pi \Omega_L R^4 \sqrt{\Delta f A_d}}. \quad (9.28)$$

If Bouguer's law is accepted for atmospheric transmittance, the transmittance can be written in terms of distance as $\tau_a = e^{-\gamma R}$. The laser flux
is given by Φ L ≈ Q L /t p , where Q L is the pulse energy in [J], and t p is
the pulse width in [s]. The required receiver electronic noise bandwidth
can be written in terms of the pulse width as Δ f = kn k f /t p , where kn re-
lates the electrical system electronic bandwidth with the noise equivalent
bandwidth, and k f relates the laser pulse width with the system electronic
bandwidth (see Section 5.3.13 for both definitions).
The irradiance SNR can now be written as

$$\frac{E_R}{E_n} = \underbrace{\left(\frac{A_O\, \rho \cos^2\theta\, D^*}{\pi \sqrt{k_n k_f}}\right)}_{\text{no control}} \underbrace{\left(\frac{Q_L A_1 \tau_a}{\Omega_L \sqrt{t_p A_d}}\right)}_{\text{design}} \underbrace{\left(\frac{e^{-2\gamma R}}{R^4}\right)}_{\text{distance}}. \quad (9.29)$$

In Equation (9.29) there are three groups of variables:


1. Variables and constants that the designer has no control over, such as
the object orientation and reflectivity, D ∗ , and constants.
2. Variables that the designer controls in the design process, such as laser
energy, receiver aperture area, laser pulse width, and detector size.
3. Distance-related factors that the designer has little control over.

The designer can now easily determine that increased laser energy and
receiver aperture improves the SNR linearly, whereas increased detector
area and pulse width decrease the SNR. Contrary to intuition, a longer
pulse width (i.e., a lower electronic bandwidth) decreases the SNR. Why?

9.4.5 Lambertian targets against terrain

If the laser rangefinder is viewing targets against the terrain, the laser
light is reflected from the target object as well as its surrounding terrain.
This implies that the real target area is not of sole importance because the
terrain background also reflects the laser pulse. There are two possibilities
regarding the laser transmitter and receiver beam or FOV sizes.
If the receiver FOV is larger than the transmitter beam width, the
receiver sees the whole laser spot. This implies that the effective laser spot
area is defined by the laser beam width by
$$\Omega_L = \frac{A_O \cos\theta_{LO}}{R^2}, \quad (9.30)$$
and hence Equation (9.26) — the irradiance at the receiver — becomes
$$E_R = \frac{\rho \Phi_L \tau_a^2 \cos\theta_{LO}}{\pi R^2}. \quad (9.31)$$
If the receiver FOV is smaller than the transmitter beam width, the
receiver only views a portion of the whole laser spot. This implies that the
effective laser spot area is determined by the receiver FOV as
$$\Omega_R = \frac{A_O \cos\theta_{LO}}{R^2}, \quad (9.32)$$
and hence Equation (9.26) becomes
$$E_R = \frac{\rho \Phi_L \tau_a^2 \cos\theta_{LO}}{\pi R^2}\, \Upsilon, \quad (9.33)$$
where Υ = Ω R /Ω L is the fraction of the laser spot viewed by the laser
receiver FOV, and 0 ≤ Υ ≤ 1. When comparing Equations (9.31) and (9.33),
we see that they are the same, except for the fraction Υ. Equation (9.33)
is the more general case because Equation (9.31) is a special case when
Υ = 1.
Table 9.4 Parameters used in rangefinder example.

Parameter   Value         Units       Parameter   Value   Units
τa          0.5                       ρ           0.1
A1          2 × 10⁻³      m²          cos θ       0.5
kn          1                         Υ           1
kf          1                         λ           1.06    µm
Ω           1 × 10⁻³      sr          QL          0.06    J
f/#         1.4                       Φ           4       MW
f           0.07          m           tp          15      ns
D*          3 × 10¹¹      cm·√Hz/W    dλ          0.02    µm
Ad          4.6 × 10⁻⁶    m²

9.4.6 Detection range

The equations in the previous section provide the signal strength obtained
from a laser transmitter at a laser receiver. An estimate of the detection
range can be obtained by solving the range equation
$$SNR = \frac{E_R(R)}{E_N}, \quad (9.34)$$
where SNR is the signal-to-noise ratio required to achieve detection. Solv-
ing the detection range problem means finding a value for range R in
Equation (9.29) that would yield the required SNR.
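Because the irradiance SNR of Equation (9.29) decreases monotonically with range, Equation (9.34) can be solved with any one-dimensional root finder. The sketch below illustrates the approach; the lumped signal constant C, the noise level, and the attenuation coefficient are placeholders rather than the design point of the next section:

```python
# Solving Equation (9.34) for the detection range R: a minimal sketch.
# The signal follows the e^(-2 gamma R)/R^4 form of Equation (9.29) for a
# target against the sky; C lumps all range-independent factors together.
# C, gamma, E_n, and SNR_req below are placeholders, not the Table 9.4 design.
import math
from scipy.optimize import brentq

def snr(R, C=1.0e6, gamma=0.53e-3, E_n=5.8e-6):
    """Irradiance SNR at range R [m]; gamma in [1/m], E_n in [W/m^2]."""
    return C * math.exp(-2.0 * gamma * R) / (R**4 * E_n)

SNR_req = 5.0
# SNR decreases monotonically with R, so bracket the root and solve
R_det = brentq(lambda R: snr(R) - SNR_req, 1.0, 50e3)
print(f'detection range ~ {R_det:.0f} m')
```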

9.4.7 Example calculation

Equation (9.29) is known as the laser rangefinder range equation because


the range may be solved for a given set of design choices and SNR. One
such solution is shown here. The values shown in Table 9.4 were used in
the calculation.
Equation (9.29) was used to calculate the SNR versus range for sev-
eral atmospheric conditions. The atmospheric attenuation coefficients used
here 3 were γ = 0.17, 0.33, 0.53, and 0.88, corresponding to meteorological
ranges of 15 km, 8 km, 5 km, and 3 km at the laser wavelength of 1.06 µm.
A graph indicating SNR versus range is shown in Figure 9.8. Note how
strongly the atmospheric attenuation influences the range performance of
the rangefinder. Figure 9.8 also shows the expected operational range as a
function of detector D ∗ .
The background flux in the scene determines the detector current,
Table 9.5 Background radiance² at 1.06 µm, expressed as a detector D* and resultant operating range.

Terrain         Background radiance   Background D*   Effective D*    Range
                [W/(m²·sr·µm)]        [cm·√Hz/W]      [cm·√Hz/W]      [km]
Dark night      0                     ∞               3 × 10¹¹        6.64
Grass terrain   10                    6 × 10¹¹        2.7 × 10¹¹      6.32
Snow terrain    100                   2 × 10¹¹        1.67 × 10¹¹     5.79
Blue sky        10                    6 × 10¹¹        2.7 × 10¹¹      6.32
Dark clouds     8                     7 × 10¹¹        2.75 × 10¹¹     6.34
Sunlit clouds   100                   1.7 × 10¹¹      1.48 × 10¹¹     5.62

which in turn determines the noise in the detector. If the sensor is operat-
ing at night, the detector noise is at a minimum. If the sensor is pointed
at a bright, sunlit background, the current and hence the noise in the de-
tector increases relative to the dark night condition. By converting the
background flux noise to D ∗ , the range versus D ∗ graph can be used to
predict degradation in system performance under bright sunlight condi-
tions. Typical background radiance values are shown in Table 9.5. The
background radiance values were used to calculate the current in the de-
tector using a variation of Equation (6.16). Once the current in the detectors
were known, the shot noise at the respective currents were calculated, and
finally, new D ∗ values were calculated using Equation (5.32). These new
D ∗ values now represent the noise performance of the sensor under the
various background conditions. Once the new D ∗ values were known, the
range performance corresponding to the different conditions was determined from the bottom graph in Figure 9.8. The operating ranges are also
shown in Table 9.5.

9.4.8 Specular reflective surfaces

If the target has a specularly reflecting surface (see Figure 3.11), the sur-
face BRDF f r is used to calculate the reflection as a function of angle of
incidence and the view direction angle (see Figure 3.12). Then

$$E_R = \frac{f_r(\theta_{LO}, \theta_{OR})\, \Phi_L \tau_a^2 \cos\theta_{LO}\, \Upsilon}{R^2}. \quad (9.35)$$
Figure 9.8 Laser-rangefinder range equation analysis: (a) SNR versus range for several atmospheric visibility values and (b) expected range as a function of detector D*.

If the surface reflection can be modeled by the Phong equation [Equation (3.39), Figure 3.15], the irradiance at the receiver becomes

$$E_R = \left(\frac{\rho_d}{\pi} + \frac{\rho_s (n+1)\cos^n\alpha}{2\pi\cos\theta_i}\right) \frac{\Phi_L \tau_a^2 \cos\theta_{LO}\, \Upsilon}{R^2}, \quad (9.36)$$
where the angles α and θ follow from the geometry given in Figure 3.12.
Note that in the case of a laser rangefinder, the transmitter and receiver beams are co-axial, with the result that the I and S vectors align on the same axis, and hence θ′ = θ and α = 2θ.
Using Equation (9.20) and the values Υ = 1, Δf = 66 MHz, Ad = 4.6 × 10⁻⁶ m², D* = 6 × 10¹¹ cm·√Hz/W, A1 = 2 × 10⁻³ m², and τr = 0.5, a sensor noise-equivalent irradiance of 3 µW/m² is calculated. Assume further that an SNR of 5 is required for operation. The operating range is then determined by solving for R in

$$15 \times 10^{-6} = \left(\frac{\rho_{d\theta}\cos\theta_{LO}}{\pi} + \frac{(n+1)\,\rho_{s\theta}\cos^n(2\theta_{LO})}{2\pi}\right)\frac{\Phi_L\, e^{-2\gamma R}}{R^2}, \quad (9.37)$$
Figure 9.9 Detection range for painted and natural surfaces.

where the laser power ΦL = 4 MW, and γ = 1.5 × 10⁻⁴ [1/m].


In this configuration (co-axial receiver and transmitter) strong specu-
lar reflection toward the receiver can only occur when the surface normal
vector faces the rangefinder, i.e., θ LO = 0. By solving the equation for a
few of the materials shown in Figure 8.4, the detection ranges in Figure 9.9
were obtained.

1. In Figure 9.9, note the effect of surface properties on range, and in par-
ticular, range as a function of off-normal angle.
2. Except for the white paint anomaly, Figure 9.9 shows that specular sur-
faces support longer detection ranges along the mirror reflection vector
but lower detection ranges elsewhere. Lambertian surfaces, on the other
hand, support a near-constant detection range irrespective of view an-
gle. This is the infrared/optical equivalent of the radar geometric stealth
concept. The white paint anomaly requires that this statement be closely
investigated.
3. The sevenfold drop in diffuse reflectance between white chalk and matte
dark earth paint, 0.77 to 0.11, resulted in a detection-range ratio of
only 1.2. This indicates the relative robustness of the detection pro-
cess against paint variations. The ‘compression’ in detection range is
due to the 1/R² term as well as the severe atmospheric attenuation at
longer ranges.
This compression effect will be less severe in a moderate atmosphere.
4. The fivefold drop between the specular peak and diffuse reflectance for
the specular paints only results in a detection range improvement of 1.2
times. The argument is the same as above.
This compression effect will be less severe in a moderate atmosphere.


5. From these results, it would appear that extraordinary attempts to re-
duce the laser signature by utilizing specular properties or low reflectance
values are probably not worth the effort.

9.5 Case Study: Thermal Imaging Sensor Model

The sensor model described in Sections 6.5 and 6.7 is developed further to
predict the performance of a thermal imaging sensor. Much of the work
was already done in these two sections. Despite its simplicity, this model
can be used to predict thermal camera performance in trade-off studies.
Daniels 4 covers the same topic in more detail.
The sensor is regarded as an imaging system that builds up an image
by scanning the FOV with N detectors. The basic sensor configuration is
shown in Figure 6.13. The total number of detector elements are scanned
over the complete image by mechanical or other means. In some cases
the scanning may not be optimal and some time is lost due to the scan-
ning method. This ‘lost time’ is expressed in the scanning efficiency. The
scanning method is not considered here.

9.5.1 Electronic parameters

Nonstaring imaging sensors construct the image by scanning a number of


detectors to cover the complete FOV. This is normally done by means of
mechanical movement of a prism or a mirror. In practical scanners, me-
chanical or optical, limitations prevent a 100% effective scan; some portion
of the scan pattern can not be used for image formation. This effect is
incorporated in the scan-efficiency parameter.
Consider N detector elements scanning across the image plane, con-
tributing toward scanning the full image field. The number of electrical
signal samples per frame period required to form the image is given by
the total number of pixels in the image divided by the number of detector
elements. The image is formed at a frame rate of FF; hence, the electronic bandwidth required to pass the detector signal (for each detector element) is defined in terms of the dwell time (time on target, integration time, or pulse width), which is given by

$$\tau_e = \frac{\omega\, \eta_s N}{\Omega_r F_F\, \eta_a \eta_b}, \quad (9.38)$$
where Ωr is the field of regard (the FOV covered by the sensor in one
frame), ω is the pixel FOV, and ηs is the scanning efficiency. ηs is less than
unity because the scan velocity is not constant (ηs = v̄/vmax) or there
is a portion of the scan that is not available to form the image. The scan
efficiency is effectively the ratio of useful scan period to total frame time.
The scan efficiency for a staring array sensor is one.
The image fill efficiency in the a and b directions ηa and ηb allows for
the situation where the detectors do not cover the total field of regard. In
other words, pixel centerline spacing exceeds the pixel size. This is fairly
commonplace in staring detectors, having fill factors lower than 100% fill.
ηa = ηb = 1 implies exact filling, ηa < 1 and ηb < 1 implies under-filling,
and ηa > 1 and ηb > 1 indicates overfilling.
The electronic bandwidth required to pass the signal is given by
$$f_{-3\,\text{dB}} = \frac{k_f}{\tau_e} = \frac{k_f\, \Omega_r F_F\, \eta_a \eta_b}{\omega\, \eta_s N}, \quad (9.39)$$
where k f is the time-bandwidth product (see Section 5.3.14). The noise
equivalent bandwidth can be derived from Equation (9.39) as
$$\Delta f = \frac{k_n k_f\, \Omega_r F_F\, \eta_a \eta_b}{\omega\, \eta_s N}, \quad (9.40)$$
where kn is the ratio of noise equivalent bandwidth to −3 dB bandwidth
(refer to Section 5.3.13).

9.5.2 Noise expressed as D ∗

All of the noise sources in the sensor can be combined into one single
number. It is convenient to express this number in terms of the detector
D ∗ , which can be derived from Equations (5.30) and (5.26).

9.5.3 Noise in the entrance aperture

The NEE in the sensor’s entrance aperture is given by



$$NEE_S = \frac{\sqrt{\Delta f\, A_d}}{k_s D^*_{\rm eff} A_s \tau_s}, \quad (9.41)$$

where NEE S is the inband NEE, D ∗ is derived from Equations (5.30) and
(5.26), Δ f is the noise equivalent bandwidth, Ad is the area of the detector,
As is the area of the sensor’s entrance pupil, and τs is the effective trans-
mittance of the sensor. The optical PSF constant ks is the fraction of energy
from a point source falling onto a single detector element. It therefore rep-
resents the sensor’s capability to gather energy from a point source. It is
assumed that each detector element has uniform responsivity over its area.
By mathematical manipulation the NEE is developed as



$$\begin{aligned}
NEE &= \frac{\sqrt{\Delta f\, A_d}}{k_s D^*_{\rm eff} A_s \tau_s} \quad &(9.42)\\
&= \sqrt{\frac{\eta_a \eta_b\, k_n k_f\, \Omega_r F_F\, ab}{\omega\, \eta_s N}}\; \frac{4(f/\#)^2}{k_s D^*_{\rm eff}\, P \pi f^2 \tau_s} \quad &(9.43)\\
&= \sqrt{\frac{\eta_a \eta_b\, k_n k_f\, \Omega_r F_F\, \omega}{ab\, \eta_s N}}\; \frac{4(f/\#)^2}{k_s D^*_{\rm eff}\, P \pi \tau_s} \quad &(9.44)\\
&= \sqrt{\frac{\eta_a \eta_b\, k_n k_f\, \Omega_r F_F}{\eta_s N}}\; \frac{4(f/\#)^2}{k_s D^*_{\rm eff}\, P \pi f \tau_s}. \quad &(9.45)
\end{aligned}$$

If a single-pole Butterworth filter is employed, the constant kn = π/2.


Assume that k f = π/2. The (conservative) equation for NEE then becomes

$$NEE = \sqrt{\frac{\eta_a \eta_b\, \Omega_r F_F}{\eta_s N}}\; \frac{2(f/\#)^2}{k_s D^*_{\rm eff}\, P f \tau_s}. \quad (9.46)$$

9.5.4 Noise in the object plane

Section 6.7 describes a mechanism for transforming noise to different planes


in the system. The noise can be referred to the object plane for extended
sources by noting that ks = 1, and
NEL A0 cos θ + 0 NEM ω
NEE = = NEL ω = . (9.47)
R 2 π
where the projected solid angle should be used for ω because the ther-
mal camera senses Lambertian sources. However, for the small pixel FOV
generally used, the projected and geometrical solid angles are numerically
equal.
Using Equation (9.45), the noise equivalent exitance (NEM) is there-
fore given by

$$NEM = \frac{\pi\, NEE}{\omega} = \sqrt{\frac{\eta_a \eta_b\, k_n k_f\, \Omega_r F_F}{\eta_s N}}\; \frac{4(f/\#)^2}{\alpha\beta\, D^*_{\rm eff}\, P f \tau_s}. \quad (9.48)$$

The noise equivalent temperature difference (NETD) is the temperature difference in the source that causes the same signal as the noise in the sensor. It is given by

$$\frac{NEM}{NETD} = \frac{dM}{dT},$$

so that

$$NETD = \frac{\pi\, NEE\, f^2}{ab\, \dfrac{dM}{dT}} = \sqrt{\frac{\eta_a \eta_b\, k_n k_f\, \Omega_r F_F}{\eta_s N}}\; \frac{4(f/\#)^2}{\alpha\beta\, D^*_{\rm eff}\, P f \tau_s\, \dfrac{dM}{dT}}, \quad (9.49)$$

where

$$\frac{dM}{dT} = \int_0^\infty \frac{dM_\lambda(T_t)}{dT}\, \epsilon_\lambda\, \tau_{a\lambda}\, S_\lambda\, d\lambda \quad (9.50)$$
is the derivative of the source exitance with respect to the source tem-
perature; in other words, the rate at which the source exitance changes
for a given change in source temperature. This derivative is required to
transform a small change in exitance into a corresponding small change in
temperature. It is required that the background temperature be specified
with the NETD value because the NETD value depends on the background
temperature against which it is measured. NETD is generally used to de-
scribe the sensitivity of thermal imaging systems. NETD is only defined
for extended sources, that is, sources that are larger than the sensor pixel
FOV. If the source is smaller than the FOV, any attempt to describe the
NETD is incorrect.
By simplification, the equation for noise equivalent temperature dif-
ference, Equation (9.49), can be reduced to the form of Equation (5.35) in
Lloyd’s classic book: 5

$$NETD = \frac{2\sqrt{\Delta f}}{\sqrt{\alpha\beta\,\eta}\; D^{*}\, \tau_s\, D_s\, \dfrac{dM}{dT}}. \quad (9.51)$$

9.5.5 Example calculation

Consider now the application of Equation (9.49) for the performance pre-
diction of a thermal camera. Two spectral bands are investigated: 3–5.5 µm
and 8–12 µm. In addition, the relative value of using a large number of de-
tector elements must be investigated. In particular, we investigate cameras
with N = 1, 256, and 256 × 256 detector elements. For the single-element
detector case it is assumed that the image is formed by a two-dimensional
scanner sweeping the single detector to form the complete image. For
the 256-element detector case it is assumed that the image is formed by a
one-dimensional scanner sweeping a vector of detectors to form the com-
plete image. For the 256 × 256 element detector case it is assumed that the
image is formed by a staring array sensor with no scanning. The design
parameters considered for this analysis are shown in Table 9.6.
Table 9.6 Thermal camera analysis parameters.

Parameter   Value          Units    Parameter   Value       Units
kn          1.15                    f/#         1.2
kf          1                       P           1
ηs          0.6/ 0.9/ 1             f           0.1956      m
ηa          1                       ηb          1
pixels      256 × 256               a = b       40          µm
Ωr          3 × 3          deg      ω           0.2 × 0.2   mrad²
FF          25             Hz       τs          0.8

3–5 µm parameters:    dM/dT ≈ 0.37 W/(m²·K),   D*eff = 6 × 10¹⁰ cm·√Hz/W
8–12 µm parameters:   dM/dT ≈ 2 W/(m²·K),      D*eff = 2 × 10¹⁰ cm·√Hz/W

Table 9.7 Thermal camera performance for three detector configurations.

Detectors   τe        Δf         NETD 3–5 µm   NETD 8–12 µm
1           366 ns    3.14 MHz   1.44 K        0.80 K
256         141 µs    8.2 kHz    0.073 K       0.041 K
256 × 256   40 ms     28.7 Hz    0.0043 K      0.0024 K

For some parameters three values are listed. In these cases they corre-
spond to the detector-element choices of 1, 256, and 256 × 256. Note that
not all of the values assumed above are realistic in practice. It is assumed
that the detector arrays will have perfect responsivity uniformity. This is
not practical or possible in real life. However, from a design comparison
perspective, these values are accepted. The electronic noise bandwidth,
detector dwell time, and NETD values for the three detector choices are
shown in Table 9.7.
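The Table 9.7 entries follow directly from Equations (9.38), (9.40), and (9.49). The sketch below evaluates them for the Table 9.6 values, taking the pixel FOV as (a/f)² and converting D* to SI units; it reproduces the tabulated dwell times, bandwidths, and NETD values to within rounding:

```python
# Reproduce Table 9.7 from Equations (9.38), (9.40), and (9.49); a sketch
# using the Table 9.6 values. The pixel FOV is taken as (a/f)^2, and D* is
# converted from cm.sqrt(Hz)/W to m.sqrt(Hz)/W.
import math

k_n, k_f = 1.15, 1.0
F_F, f_no, P, tau_s = 25.0, 1.2, 1.0, 0.8
f = 0.1956                           # focal length [m]
a = 40e-6                            # detector size [m]
alpha = beta = a / f                 # pixel angular subtense [rad]
omega = alpha * beta                 # pixel FOV [sr]
Omega_r = (3 * math.pi / 180.0)**2   # 3 x 3 deg field of regard [sr]
eta_a = eta_b = 1.0

bands = {'3-5 um': (0.37, 6e10 * 1e-2), '8-12 um': (2.0, 2e10 * 1e-2)}

for N, eta_s in ((1, 0.6), (256, 0.9), (256 * 256, 1.0)):
    tau_e = omega * eta_s * N / (Omega_r * F_F * eta_a * eta_b)  # Eq. (9.38)
    delta_f = k_n * k_f / tau_e                                  # Eq. (9.40)
    line = f'N = {N:5d}: dwell {tau_e:.3g} s, bandwidth {delta_f:.3g} Hz'
    for band, (dMdT, D_star) in bands.items():
        netd = (math.sqrt(eta_a * eta_b * k_n * k_f * Omega_r * F_F
                          / (eta_s * N)) * 4 * f_no**2
                / (alpha * beta * D_star * P * f * tau_s * dMdT))  # Eq. (9.49)
        line += f', NETD({band}) {netd:.3g} K'
    print(line)
```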

9.6 Case Study: Atmosphere and Thermal Camera Sensitivity

One of the key performance parameters of a thermal imager is its sensi-


tivity expressed as the ‘noise equivalent temperature difference’ or NETD
[Equation (9.49)]. This parameter indicates the noise level of the imager ex-
pressed as a temperature contrast at the target. The NETD represents the
smallest temperature difference that can be measured by electronic instru-
ments. The minimum resolvable temperature (MRT) by a human observer


is less than the NETD because the human eye and brain temporally inte-
grates and interprets the image.
In the evaluation of thermal cameras, the target signature is commonly
stated as a temperature difference with respect to the scene or background
temperature, e.g., ΔT = 2 K. When so specified, the sensitivity (NETD) of
the thermal imager depends on the scene temperature Tt [refer to Equa-
tion (9.50)]. The colder the temperature, the less sensitive the camera
becomes (higher NETD means poorer performance). Equation (9.50) al-
lows for atmospheric transmittance correction; this can be significant over
longer ranges. This section extends the strict notion of NETD as a lab-
oratory measurement at zero target distance (no atmosphere) to account
for atmospheric attenuation by inclusion of the atmospheric attenuation
τa . The combined use of Equations (9.49) and (9.50) allows the calculation
of the target signal required to overcome the atmospheric attenuation and
then to provide a signal equal to the noise. For convenience, call this the
‘noise equivalent target contrast’ (NETC).
As a practical application of the imaging sensor model, evaluate the
relative performance of a 3–5-µm imager versus an 8–12-µm imager when
affected by water vapor in the atmosphere. The effect of atmospheric
water-vapor attenuation is discussed in Section 4.6.8. This section eval-
uates the NETC performance for the following cases:

1. Practically available technology in the year 2000: staring array for 3–5-
µm imagers and linear scanned arrays for 8–12-µm imagers. All other
parameters are the same. The number of detector elements in the linear
vector is equal to the square root of the number of elements in the
staring array.

2. Practically available technology in the year 2012: Two identical imager


staring array configurations using the same optics, detector configura-
tion, and other design parameters. In this case, the performance of the
spectral band is tested in absolute terms.

Figure 9.10 shows the comparison for three levels of humidity (50%,
75%, and 95%), for atmospheric conditions ranging from −20 ◦ C to +50 ◦ C.
The background temperature is assumed to be the same as the atmospheric
temperature. Four distances are considered: 0 km, 2.5 km, 5 km, and
10 km. The noise equivalent target contrast is calculated for all of these
conditions; see Figure 9.10. The curves show the NETC versus scene/back-
ground temperature. Observe that the 8–12-µm imager performance de-
grades rapidly at higher temperatures and longer ranges. The NETC for a
3–5-µm imager increases with decreasing temperature and shows a slight increase for higher relative humidities, but not as severely as the 8–12-µm imager.

Figure 9.10 Noise equivalent target contrast required for different 3–5-µm imagers and 8–12-µm imagers (year-2000 technology, top row; year-2012 technology, bottom row) at relative humidities of 50%, 75%, and 95%, plotted against atmospheric and background temperature for ranges of 0 km to 10 km. Higher NETC means poorer performance.
imager.
The graphs in Figure 9.10 show, in the thick line, the cross-over tem-
perature where the two systems perform equally. Below this cross-over
temperature, the 8–12-µm thermal imager performs better; above the cross-over, the 3–5-µm imager performs better.
The bottom graphs in Figure 9.10 indicate that, for the same detec-
tor/scanner configuration, the 8–12-µm imager outperforms the 3–5-µm
imager when observing objects in a cooler environment. The top graphs
compare the staring array 3–5-µm imager with the linear scanning 8–12-
µm imager. It is clear that the staring 3–5-µm imager performs on par with
the scanning 8–12-µm imager, at moderate and higher temperatures.
In the case of year-2000 technology, NETC compares equal for 8–12-
µm imagers and 3–5-µm imagers over a very broad band of moderate
climates, and benefits only occur at extreme climatic conditions.
In the case of year-2012 technology, the 8–12-µm imager outperforms
the 3–5-µm imager for low temperature and moderate climatic conditions,
whereas the 3–5-µm imager remains the imager of choice only for ex-
tremely high humidity conditions.

9.7 Case Study: Infrared Sensor Radiometry

9.7.1 Flux on the detector

The physical components normally found in an infrared sensor with a


cooled detector are shown in Figure 9.11. In sensors without cooling, the
cooler and associated components will not be present.
The detector element is mounted in a thermally insulated thermos
flask, called a dewar. The dewar serves to protect the detector, but also pro-
vides thermal insulation to maintain the detector temperature at some low
operating temperature. In order to prevent thermally generated charges in
the detector, the detector is cooled down by one of several different cooler
devices. 6,7 Different detector types operate at temperatures ranging from
several kelvin to 200 K, depending on the detector’s material type and
spectral range. The detector element is mounted on the front of a ‘cold
finger,’ the whole tip of which is at the cold temperature.
Shot noise in the detector depends on the photon-flux-induced current
in the detector. Lower noise can be achieved by reducing the background
flux on the detector. In order to reduce the unwanted flux on the detector, a cold shield 8–11 (cold screen or cold cone) is constructed around the detector, mounted on the cold finger (see Figure 5.21). Because this cold finger and shield are at the same low temperature as the detector, the thermal radiation from the cold shield is significantly less than the radiance from the sensor components at room temperature.

Figure 9.11 Infrared sensor layout.
The sensor may also employ an optically selective filter, mounted in
front of the detector. This filter only transmits flux in the spectral band
required by the sensor’s application. Flux outside the transmittance pass-
band is attenuated and never reaches the detector. Note, however, that the
filter also emits flux because the filter is a thermal radiator. Thus, in the
passband, the filter transmits flux from the scene, whereas in the stopbands
the filter radiates with emissivity ε = α = 1 − τ − ρ, from Equation (2.3)
and Kirchhoff’s law (Section 3.2.1). In Figure 9.11 the filter is shown as the
front dewar window, but it can be located anywhere in the optical path.
The filter can be cooled down to reduce its radiated flux. If the filter is
mounted on the cold finger, it is called a ‘cold filter.’
Figure 9.11 depicts the different radiance zones in the sensor. To the
first approximation, the zones are rotationally symmetric even though the
figure depicts these as linear angles. Also, the zones are shown to emanate
from the center of the detector, but any real detector has a finite size with
slightly different zone shapes from every small part on the detector. The
cold shield geometry can become quite complex in a detailed analysis.
In zone 1 the flux on the detector originates on the walls of the de-
war (sensor temperature) and/or cold shield walls (detector temperature).
Clearly, the design objective will be to increase the cold-shield solid angle
so as to decrease the radiation from the hot dewar walls and optics barrel
without reducing the signal flux.
In zone 2 the flux on the detector originates on the inside of the sen-
sor, i.e., the optical barrel and mounting rings. This flux could be ther-
mally emitted or external flux reflected from the barrel. The internal flux
is filtered by the hot/cold filter. If the filter is hot, the flux in the stop-
bands would be of the same magnitude as the internally self-emitted flux
(suppressing the reflected flux). If the filter is cold, the filter flux in the
stopbands can be small.
In zone 3 the flux on the detector is the sum of the scene flux, the
optics flux, and the filter flux. In the event of a sensor with hot optics and
filter, together with a cold scene, the scene flux can be considerably less
than the sensor fluxes.
A key strategy in improving a sensor’s sensitivity is therefore to cool
down the detector environment and filter to reduce the background flux
in zones 1 and 2. A cold filter will also reduce the filter radiated flux
L( T f )(1 − τ − ρ) in zone 3, which can be significant if the filter has a
narrow passband. Optics flux can be minimized by keeping the optics cool,
but more importantly, selecting materials with low emissivity in the sensor
spectral band. For a discussion of the effect of hot optics, see Section 9.8.2.
The radiometry in an imaging system is described in the camera equa-
tion, 12,13 which will be further developed in this section. The derivation
will cover the primary scene radiance as well as (some of) the radiance
sources in the sensor itself. In this analysis, the ideal thin-lens paraxial ap-
proximation is made. The linear angles in the sectional diagrams should
be viewed as rotationally symmetric solid angles, e.g., zone 3 in Figure 9.11
is a conical solid angle. It is also assumed that system throughput is the
same at all field angles, i.e., the on-axis marginal ray cone solid angle has
the same value as the off-axis marginal ray cone solid angle, as in Fig-
ure 9.12.

9.7.2 Focused optics

Figure 9.12 shows the primary flux sources in an imaging sensor. The
contributing source radiance values are the scene focused on the detector
(Lscene ), the optics and window (Loptics ), the filter (Lfilter ), the optics barrel
and inside of the sensor (Lbarrel ), and the detector cold shield (Lcold shield ).
Figure 9.12 Radiometry in an imaging system.

The flux falling on the detector is then given by

$$\Phi_{\mathrm{det}\lambda} = \frac{A_d A_o \cos\alpha \cos\alpha\, \tau_{o\lambda} \tau_{f\lambda} L_{\mathrm{scene}\lambda}}{(s'/\cos\alpha)^2} + A_d \Omega_3 \left( \tau_{f\lambda} L_{\mathrm{optics}\lambda} + L_{\mathrm{filter}\lambda} \right) + A_d \left( \Omega_{2b}\, \tau_{f\lambda} L_{\mathrm{barrel}\lambda} + \Omega_{2a} L_{\mathrm{barrel}\lambda} + \Omega_1 L_{\mathrm{cold\ shield}\lambda} \right), \quad (9.52)$$

where Ad is the detector area, Ao is the optics exit pupil area, α is the
off-axis angle to the object, τoλ is the optics transmittance, τ f λ is the fil-
ter transmittance, Lsceneλ is the scene radiance [Equation (8.1)], Lopticsλ is
the optics radiance, Lfilterλ is the filter radiance, Lbarrelλ is the optics barrel
radiance, and Lcold shieldλ is the detector cold shield radiance. The solid an-
gles Ω1 , Ω2a , Ω2b , and Ω3 are defined in Figure 9.12. From Equation (2.3),
oλ = 1 − ρoλ − τoλ , and  f λ = 1 − ρ f λ − τ f λ . Note that Ao /s = Ω3 =
π sin2 θ  . Equation (9.52) assumes that the optical system has no vignet-
ting or central obscuration. Analysis of any real system would require that
these factors be taken into account. Not shown in any of the derivations
here is the effect of stray light entering from outside of the ray cone. The
source can be outside the sensor or inside the sensor (e.g., hot rotating
parts). The stray light is normally associated by one or more reflections
from the optical barrel or even optical element surfaces. Stray light is of-
ten suppressed by appropriate baffle design, but some stray effects may
remain. Using Equation (2.12) for Ω3 , the detector flux is then

$$\begin{aligned}
\Phi_{\mathrm{det}\lambda} ={}& A_d\, \pi \sin^2\theta'\, \tau_{o\lambda}\tau_{f\lambda}\, L_{\mathrm{scene}\lambda}\cos^4\alpha \\
&+ A_d\, \pi \sin^2\theta'\, \tau_{f\lambda}\,(1-\rho_{o\lambda}-\tau_{o\lambda})\, L_\lambda(T_{\mathrm{optics}}) \\
&+ A_d\, \pi \sin^2\theta'\, (1-\rho_{f\lambda}-\tau_{f\lambda})\, L_\lambda(T_{\mathrm{filter}}) \\
&+ A_d\, \Omega_{2b}\, \tau_{f\lambda}\, \epsilon_{\mathrm{barrel}\lambda}\, L_\lambda(T_{\mathrm{barrel}}) \\
&+ A_d\, \Omega_{2a}\, \epsilon_{\mathrm{barrel}\lambda}\, L_\lambda(T_{\mathrm{barrel}}) \\
&+ A_d\, \Omega_1\, \epsilon_{\mathrm{cold\ shield}\lambda}\, L_\lambda(T_{\mathrm{cold\ shield}}), \quad (9.53)
\end{aligned}$$
where Lλ ( T ) is the spectral Planck-law radiation at temperature T, Toptics
is the optics (window and optical elements) temperature, Tfilter is the filter
temperature, Tbarrel is the optics barrel temperature, Tcold shield is the cold
shield temperature, ε_barrelλ is the barrel spectral emissivity, ε_cold shieldλ is the cold shield spectral emissivity, and θ′ is the field angle (maximum inclination of the marginal ray). The value sin θ′ is known as the numerical
aperture (N A) of the lens (see Section 6.3.3).
For the ideal lens (no aberrations, perfectly flat image and object
planes, and obedience to the Abbe sine condition), and at infinite con-
jugates sin θ′ = NA = D/(2f) = 1/(2F#). The first term then becomes

$$\Phi_{\mathrm{det\ scene}\,\lambda} = A_d\, \pi (NA)^2\, \tau_{o\lambda}\tau_{f\lambda}\, L_{\mathrm{scene}\lambda}\cos^4\alpha = \frac{A_d\, \pi}{4F_\#^2}\, \tau_{o\lambda}\tau_{f\lambda}\, L_{\mathrm{scene}\lambda}\cos^4\alpha. \quad (9.54)$$

If the object is not at infinity (finite conjugates), the optics image the
object onto the focal plane with a given magnification m. Combining the
simple lens equations, Equations (6.1), (6.2), and (6.3), with Equation (9.54),
it is found that s′ = f(1 + |m|). Applying that to the definition of θ′, it follows that (for paraxial optics)

$$\sin\theta' = \frac{D}{2(1+|m|)f} = \frac{1}{2F_\#(1+|m|)}, \quad (9.55)$$

leading to the new formulation for flux on the detector:

$$\Phi_{\mathrm{det\ scene}\,\lambda} = \frac{\pi K_C K_N(\alpha)\, A_d\, \tau_{o\lambda}\tau_{f\lambda}\, L_{\mathrm{scene}\lambda}\cos^4\alpha}{4F_\#^2 (1+|m|)^2}, \quad (9.56)$$
where two new factors are introduced: KC accounts for central obscuration
(if present), and K N (α) accounts for vignetting. 13 Note that the angle under
consideration in the cos4 term is the object field angle, and not the cold
shield angle, which is a function of the numerical aperture (f -number cone)
of the cold shield. Unless the cold shield obscures the optical ray cone, the
cold shield numerical aperture does not come into consideration at all in
the cos4 effect.
The cold shield efficiency is the ratio of scene flux to total flux onto the detector [Equations (9.52) and (9.56)],

$$\eta_{\mathrm{cold\ shield}} = \frac{\int \Phi_{\mathrm{det\ scene}\,\lambda}\, d\lambda}{\int \Phi_{\mathrm{det}\lambda}\, d\lambda}. \quad (9.57)$$
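To make Equations (9.53) and (9.57) concrete, the sketch below evaluates the flux terms for a notional f/2 MWIR sensor. Every temperature, emissivity, and dimension in it is an illustrative assumption (the barrel terms are ignored, and the cold shield is assumed to fill the rest of the hemisphere):

```python
# Order-of-magnitude evaluation of Equations (9.53) and (9.57) for a
# notional f/2 MWIR sensor; a sketch. All temperatures, emissivities, and
# dimensions are assumptions; barrel terms are ignored, and the cold
# shield is assumed to fill the rest of the hemisphere at 80 K.
import numpy as np

C1, C2 = 3.7418e8, 1.4388e4            # radiation constants

def planck(wl, T):
    """Spectral radiance [W/(m^2.sr.um)] at wavelength wl [um], temperature T [K]."""
    return C1 / (np.pi * wl**5 * (np.exp(C2 / (wl * T)) - 1.0))

def band_int(y, wl):
    """Trapezoid integration over the spectral grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl)))

wl = np.linspace(3.0, 5.0, 501)         # MWIR band [um]
Ad = (30e-6)**2                          # detector area [m^2]
omega3 = np.pi * (1.0 / (2 * 2.0))**2    # pi sin^2(theta'), f/2 optics [sr]
tau_o, tau_f = 0.85, 0.80                # optics, filter transmittance
eps_o, eps_f = 0.05, 0.10                # optics, filter emissivity

scene  = Ad * omega3 * tau_o * tau_f * planck(wl, 300.0)   # scene at 300 K
optics = Ad * omega3 * tau_f * eps_o * planck(wl, 300.0)   # warm optics
filt   = Ad * omega3 * eps_f * planck(wl, 300.0)           # warm filter
shield = Ad * (np.pi - omega3) * planck(wl, 80.0)          # cold shield

eta = band_int(scene, wl) / band_int(scene + optics + filt + shield, wl)
print(f'cold shield efficiency ~ {eta:.2f}')                # Equation (9.57)
```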
Figure 9.13 Reduced cold shield efficiency and vignetting in practical designs.

The objective of cold shield design is to achieve as high an efficiency as


possible. It is however very difficult to reach high efficiency values for fast
optics (low f -number) and large detectors. Ideally, the cold shield should
screen or shield all of the sensor internals behind the exit pupil; the cold
shield aperture should coincide with the exit pupil. If the cold shield
coincides with the exit pupil, it is called a cold stop. Practical cold shield
design requires the cold shield numerical aperture to be slightly larger
than the optics’ in order to allow for the off-axis imaging rays. Consider
the picture in Figure 9.13 where the cold shield numerical aperture exactly
equals the optics’ numerical aperture (i.e., the same marginal ray), but
the cold shield aperture is displaced from the exit pupil. The ray cone at
nonzero field angles will be partially vignetted (loss of scene flux), and in
addition, some internal sensor parts will be observed (reduced cold shield
efficiency). If the cold shield numerical aperture is increased to reduce
vignetting, the cold shield efficiency is also reduced.

9.7.3 Out-of-focus optics

In Figure 9.14, consider a small elemental area dAi in the object plane imaging onto a small elemental area dA′ in the focal (image) plane. The conical solid angle defined by the marginal rays (solid of revolution) contains all of the flux flowing from dAi to dA′. No flux outside this solid angle contributes to the flux flow. Further, consider a small portion of the solid angle, as shown in the dark shaded area in the top figure of Figure 9.14. All of the flux flowing from dAi to dA′, passing through dAo in the plane Oo,
has to flow along the shaded solid angle indicated in the figure. Any flux flowing through the area dAo but outside the dark shaded solid angle will not contribute to the flux flow between dAi and dA′. This is shown more explicitly in the bottom two pictures in Figure 9.14.

Figure 9.14 Ideal optics; out-of-focus object radiance.
Suppose that an opaque source with uniform radiance is located in
plane Oo . The optics located in N are not concerned with where the flux
emanates, from the plane Oi or Oo . The optics dutifully focus the rays
according to their image forming design, along the same ray paths in both
cases. The focal-plane irradiance of an out-of-focus source dAo of uni-
form radiance provides exactly the same flux on the detector element as
would the in-focus source dAi with the same radiance. This observation
is a re-statement of the principle of radiance conservation: the location
of the source is not important, the spatial properties of the radiance field
determines the flux in the focal plane. The properties (including location)
of the source are important to create the radiance field; the field cannot
exist without the source in its precise location. However, once created, the
radiance field ‘carries’ through space with no further dependence on the
source.
Careful study of Figure 9.14 also indicates that radiance is defined by
a matched set of areas and solid angle directions. The small area dAi in
the object plane, with a full conical solid angle uniformly filled with rays,
provides the same flux on the detector as a large number of small areas in
the plane Oo but with narrow conical sections associated with each small
area.
The practical implication of this observation is that large, uniform
sources do not have to be ‘in focus.’ The requirement for the object location
at a particular plane of focus is only a requirement if the source radiance
is not uniform, and this pattern must be imaged accurately onto the focal
plane, i.e., if a sharp image is required.

9.8 Case Study: Bunsen Burner Flame Characterization

This case study provides an overview of a simple approach to flame sig-


nature characterization. The process shown here can be used as the basis
for a more-advanced analysis procedure. Not all of the data and results
are shown here; these are available on the pyradi website. 1 See also an
alternative investigation. 14
A laboratory Bunsen burner was characterized with the objective of determining its area, temperature, and emissivity (Section 8.4). The burner
was adjusted for no premix of air with the gas to obtain a very long, yellow
flame. At the exit of the burner nozzle, the flame is rich in butane/propane
and less rich in oxygen. The gas mixes with the air at the ‘outside’ and top
of the flame, burning away the rich gas concentration at the center of the
flame.

9.8.1 Data analysis workflow

The data analysis workflow is shown in Figure 9.15. The process starts
with the two sets (flame and reference) of measurements. In each case the
instrument calibration data is used to calculate radiance values (a radiance
spectrum and a radiance image). Because (1) the flame only partially fills
the FTIR spectrometer FOV, but (2) the calibration data applies to a fully
filled FOV, the spectral radiance measurement is ‘scaled’ with an unknown
factor.

Figure 9.15 Flame data analysis workflow.
The temperature and spectral emissivity jointly result in the observed
radiance values. The magnitudes of neither temperature nor spectral emis-
sivity are known. This procedure investigates sets of temperature and
emissivity values (Section 8.4) that would solve the measurement equa-
tion. One set — the one that best matches physical reality — is finally
selected.
Starting from prior knowledge (open literature, past experience, or
physics principles), select a temperature and then calculate spectral emis-
sivity from the measured spectral radiance and Planck’s law:
$$\epsilon_{\lambda m} = \frac{L_{m\lambda}}{L_{bb\lambda}(T_m)}, \quad (9.58)$$
where Lmλ is the measured radiance, and Lbbλ ( Tm ) is the blackbody radi-
ance at the estimated temperature.
Use the radiance image obtained from the imaging instrument and the
spectral emissivity estimate ελm obtained above to calculate a temperature
image (for each pixel in the radiance image) by solving for temperature in

$$L_{\text{image}} = \int_0^\infty L_{bb\lambda}(T_m)\, \epsilon_{\lambda m}\, \tau_{a\lambda}\, S_\lambda\, d\lambda, \quad (9.59)$$
where Limage is the wideband radiance value measured by the imaging
camera, ελm is calculated above, τaλ is the spectral atmospheric transmit-
tance, and S λ is the imaging sensor spectral response. This process pro-
vides an image where each pixel in the image represents temperature. The
temperature map is critically evaluated to determine if the predicted tem-
peratures are acceptable (judgement call required). If the temperatures are
not acceptable, the FTIR sample temperature estimate is adjusted, and the
process is repeated.
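The per-pixel temperature retrieval of Equation (9.59) reduces to one-dimensional root finding, as in the sketch below; the band limits, flat sensor response, and constant emissivity value are placeholder assumptions, with τaλ = 1 as discussed in the next paragraph:

```python
# Per-pixel temperature retrieval by inverting Equation (9.59); a sketch.
# Placeholder assumptions: 3-5 um band, flat sensor response S, constant
# emissivity 0.1 from Equation (9.58), and tau_a = 1 (short path).
import numpy as np
from scipy.optimize import brentq

C1, C2 = 3.7418e8, 1.4388e4          # radiation constants [W.um^4/m^2], [um.K]

def planck(wl, T):
    """Spectral radiance [W/(m^2.sr.um)] for wavelength wl [um] at T [K]."""
    return C1 / (np.pi * wl**5 * (np.exp(C2 / (wl * T)) - 1.0))

wl = np.linspace(3.0, 5.0, 200)      # sensor band [um]
S = np.ones_like(wl)                 # sensor spectral response (assumed flat)
eps = 0.1 * np.ones_like(wl)         # emissivity estimate from Eq. (9.58)

def band_radiance(T):
    """Wideband radiance of Equation (9.59) at temperature T."""
    y = planck(wl, T) * eps * S
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl)))

for L_meas in (5.0, 20.0, 60.0):     # measured pixel radiances [W/(m^2.sr)]
    T = brentq(lambda T: band_radiance(T) - L_meas, 300.0, 3000.0)
    print(f'L = {L_meas:5.1f} W/(m^2.sr)  ->  T = {T:6.1f} K')
```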
This analysis assumes that the spectral emissivity will not vary sig-
nificantly for different flame temperatures — this assumption is only war-
ranted if the temperature spread is not too wide. This analysis also ignores
the effect of atmospheric transmission, i.e., τaλ = 1 in Equation (9.59), on
the assumption that the path length in the laboratory was short. However,
the atmospheric attenuation in the CO2 absorption band is severe, even
over these short distances.
The model developed here serves to convey the principles involved.
In practice, these principles will be used to develop a model that accounts
for the spatial variation across the area of the flame (texture). In some
cases the model might even account for temporal variations in the texture
and flame radiance.

9.8.2 Instrument calibration

The imaging radiometers and spectrometer were calibrated (or character-


ized) against laboratory blackbody sources. In most cases the calibration
is an elaborate process covering several instrument settings, filters, and en-
vironmental conditions, but it can be summarized as follows.

Figure 9.16 MWIR imaging radiometer calibration curves for (a) digital level vs. blackbody temperature, and (b) digital level vs. irradiance.

Calibration entails measuring the instrument output (voltage or digital levels) versus


source temperature for a series of source temperatures (ideally over the full
dynamic range of the instrument). The set of measurements form a tem-
perature calibration curve (Figure 9.16). When using the instrument, the
curve is read ‘backward’ such that a given instrument voltage returns the
appropriate source temperature. If the test object’s emissivity is the same
as the laboratory source emissivity, the object’s temperature will be equal
to the source temperature. Section 8.5 elaborates in more detail on the
effect of the object surface emissivity and other flux contributions during
such a measurement.
Accurate measurement work requires that the spectral sensor response
[Equation (6.15)] be known. In this case, the calibration source tem-
perature, together with the spectral sensor response, can be used to cal-
culate the inband source radiance [using Equation (6.17)]. A new curve is
now constructed to relate signal voltage with inband radiance for an ex-
tended source completely filling the pixels. This curve can now be used
to determine the radiance field incident on the instrument during a mea-
surement. This radiance value lends itself more readily to the analysis
described in Section 6.6 and Chapter 8. It is also the basis for the analysis
described in this section.
Figure 9.16 shows typical calibration data. The top curve relates source
temperature and instrument voltage (digital level). The bottom curve re-
lates source irradiance on the sensor entrance aperture to instrument volt-
age. For an extended source, the source apparent radiance is related to ir-
radiance by E = Lω, where ω is the pixel FOV. In this case the instrument
was calibrated for three different gain settings (integration times of 30, 120,
and 500 µs) and two different sensor internal temperature conditions (16
and 42.6 ◦ C). Observe the importance of calibration at different internal
temperatures: the curves (for the same instrument and settings) show sig-
nificantly different responses. A good strategy to allow for temperature
changes in the sensor is to measure the calibration curve at several differ-
ent internal temperatures and then interpolate between these, according to
the actual instrument temperature during the measurement.
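As a minimal sketch of this strategy (the table values below are illustrative, not the instrument's actual calibration data), the inverse calibration can be implemented as interpolation along each curve, followed by interpolation between the internal-temperature curves:

import numpy as np

# illustrative calibration tables: digital level vs. inband radiance,
# measured at internal temperatures of 16.0 and 42.6 degC
rad  = np.array([1.0, 5.0, 20.0, 100.0, 500.0])        # [W/(m2.sr)]
dl16 = np.array([200.0, 900.0, 3500.0, 9000.0, 15000.0])
dl42 = np.array([400.0, 1100.0, 3700.0, 9200.0, 15200.0])

def radiance_from_dl(dl, t_internal):
    # read each calibration curve 'backward' (digital level -> radiance),
    # then interpolate linearly between the two internal temperatures
    L16 = np.interp(dl, dl16, rad)
    L42 = np.interp(dl, dl42, rad)
    w = (t_internal - 16.0) / (42.6 - 16.0)
    return (1.0 - w) * L16 + w * L42

print(radiance_from_dl(5000.0, t_internal=30.0))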
The effect of hot optics is also shown in Figure 9.16. It is evident
that the hot optics flux sets an asymptotically lower measurable flux limit
(called the ‘floor’). When measuring low-temperature test samples, a small
variation in internal temperature shifts the floor up or down, playing havoc
with calibration. In this particular instrument setting, an ND2 neutral
density filter (0.01 transmittance) was used. Much of the hot optics flux
emanates from this filter. So in all fairness, there is little point in using an
ND2 filter when measuring a low-temperature target. The situation does
arise, however, if there is a requirement to measure both hot and cold test
samples without changing filters.

9.8.3 Measurements

Measurements were made with imaging cameras operating in the 3–5-µm
MWIR and 7–11-µm LWIR spectral ranges, a Fourier transform infrared
(FTIR) spectrometer operating in the 2.5–5.5-µm spectral range, and with
a thermocouple.
The temperature in various locations in the flame was determined
with a thermocouple measurement. This thermocouple reading proved
very difficult to make because the flame temperatures appeared to vary quite
significantly, and the readings were considerably lower than expected.
As part of the measurement, a set of ‘reference’ measurements were
also made of laboratory blackbody sources at known temperatures. These
measurements serve to confirm instrument settings and calibration status
during subsequent measurements.
Figure 9.17 Spectral radiance for (a) the Bunsen burner yellow flame and (b) the reference source. Panel (a) is plotted against wavelength [µm] and indicates the wideband radiance L(3.8–4.8 µm) = 552.5 W/(m²·sr); panel (b) is plotted against wavenumber [cm⁻¹] and compares the measured and calculated reference spectra.

The FTIR instrument does not have the same fine spatial resolution
as the imaging instruments, and the measurement represents some form
of ‘average’ of the flame. The background temperature was much lower
than the flame temperature and was ignored in further analysis. The FTIR
spectrometer’s small FOV was pointed at the bottom of the flame — a
relatively cold part of the flame. Furthermore, the flame radiance was not
uniform in the instrument’s FOV. Because of the uncertainty in percentage
fill and flame nonuniformity, there is some uncertainty in the absolute
magnitude of the measurement. The radiance spectral shape was later
shown to be accurate (see below). If the measured results are accepted as a
scaled version of the full flame radiance, it is still useful because the shape
of the spectral emissivity can be extracted. The top graph in Figure 9.17
shows the FTIR spectral radiance measurement as well as the wideband
integrated radiance in the MWIR band.


The bottom graph in Figure 9.17 shows the measured and calculated
spectral radiance of a 150 ◦ C blackbody reference source. Around 2350 cm−1
the atmospheric CO2 absorption is very high, even over very short path
lengths — as is visible in the graph. Around 3280 cm−1 an instrument
anomaly is visible. It is evident that, apart from two anomalous bands, the
two curves agree fairly well, thereby validating the instrument operation
and hence the measurement.

9.8.4 Imaging-camera radiance results

The images are initially recorded as voltages or digital levels, proportional
to the optical flux on the detector. The inverse of the calibration process
(Section 9.8.2) provides radiance images, where a pixel represents the scene
radiance (if an extended target fills the pixel). Using the spectral emissiv-
ity estimate, Equation (9.58), and the inverse form of Equation (9.59), the
temperature for the pixel can be calculated. Inverting Equation (9.59) can-
not be done analytically; a numerical solution must be found, such as a
lookup table between radiance and temperature.
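A minimal sketch of such a lookup table, reusing a forward-model function of the form sketched in Section 9.8.1 (all names illustrative), exploits the fact that the inband radiance increases monotonically with temperature:

import numpy as np

# forward model evaluated over a grid of temperatures (wideband_radiance,
# wl, emis, tauatm, and sresp as in the sketch in Section 9.8.1)
temps = np.linspace(400.0, 2200.0, 500)    # [K]
lut = np.array([wideband_radiance(wl, T, emis, tauatm, sresp) for T in temps])

def temperature_from_radiance(L):
    # numerical inverse of Equation (9.59): radiance -> temperature
    return np.interp(L, lut, temps)

# applied elementwise to a radiance image (a 2D array of pixel radiances):
# temperature_img = temperature_from_radiance(radiance_img)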
The results from this processing are shown in Figure 9.18. The top left
graph shows the digital levels recorded by the instrument. The top right
graph shows the object radiance, as calculated from the digital levels and
the inverse calibration data. The images in Figure 9.18 demonstrate the
gas flow and radiance/temperature in the burner flame. Near the bottom,
the flame has a cool core of fresh gas supply with a higher radiance to-
ward the edges, clearly indicating lower combustion activity in the center.
Higher up, the air diffuses into the gas and provides oxygen to support
combustion, depleting the amount of unburnt gas toward the top of the
conical section.
As a confirmation check on the radiance levels, note that the total inte-
grated radiance measured by the FTIR is 552.5 W/(m²·sr) (Figure 9.17); this
is the sum of all values measured by the FTIR in the MWIR band. The
image data in Figure 9.18 (top right image) shows that the radiance lev-
els immediately above the nozzle are of the order of 500–600 W/(m²·sr), in
good agreement with the FTIR radiance measurement. It appears that the
concern for magnitude uncertainty in the FTIR measurement was unwar-
ranted.
The two bottom graphs in Figure 9.18 should be read in conjunction:
the left graph shows the temperature associated with the emissivity in the
right graph.

Figure 9.18 Bunsen flame MWIR (3.8–4.8 µm) images in units of digital level, radiance [W/(m²·sr)], and temperature [K], together with the assumed spectral emissivity.

The critical assumption here is that the emissivity is the same
at all temperatures; a more sophisticated analysis will allow for emissivity
variation with temperature. It is also assumed that the emissivity is the
same for all pixels in the image; this, too, requires a more-accurate model
for advanced analysis.
Following the approach described in Section 8.4, various combina-
tions of emissivity and temperature were considered. The top graph in
Figure 9.19 shows the spectral emissivity and associated temperatures.
Clearly, the case for a temperature of 650 ◦ C is wrong because it requires
an emissivity greater than one. Any of the remaining combinations are
plausible but not necessarily physically feasible. If the flame temperature
is 850 ◦ C, the peak emissivity in the CO2 spectrum is around 0.6. If the
flame temperature is hotter, at 1250 ◦ C the peak emissivity in the CO2 spec-
trum is around 0.25. These combinations of temperature and emissivity all
give the same radiance values as measured. After consulting published
data, 15–18 it was decided to select the 825 ◦ C emissivity-temperature data
set, shown in the bottom graph in Figure 9.19.
Section 4.2.5 describes the effect of optical thickness in a gaseous flame
on the emissivity of the flame. Radiation measurements such as this can
only give an indication of the apparent temperature as derived from the
emissivity. The volumetric region with the highest emissivity will domi-
nate the apparent temperature of the flame. An optically thick flame (high
attenuation inside the flame) presents itself as a ‘surface’ radiator, and no
conclusion can be reached on the temperature deep inside the flame. In
this case, the flame is optically thin (emissivity less than unity), and the
measurement is an indication of the temperature along the full path but
dominated by the volumetric region with the highest emissivity.

Figure 9.19 (a) Bunsen burner estimated emissivity–temperature combinations, with candidate emissivity curves for 650, 850, 1050, 1250, and 1450 °C plotted against wavenumber [cm⁻¹]. (b) Apparent emissivity of the Bunsen burner yellow flame at 825 °C.
At the bottom of the plume near the Bunsen burner nozzle, the tem-
perature is around 1100 K (826 ◦ C), whereas in the hottest region of the
flame the temperatures are 1900 K (1626 ◦ C). These values agree well with
published information. 15–18 The LWIR images were processed similarly to
the MWIR images described above. The spectral emissivity was assumed
to be constant over the LWIR band and found to be very low, less than 1%.

9.8.5 Imaging-camera flame-area results

The flame area is calculated from the radiance image using the techniques
described in Section 8.4.2. The flame area was determined for nine radi-
ance threshold values, ranging from just above background to 95% of the
maximum radiance. An analysis similar to the one depicted in Figure 8.5
was performed. The flame areas thus obtained from the MWIR and LWIR
images are shown in Figure 9.20.

Figure 9.20 Bunsen burner yellow flame predicted area as a function of threshold for (a) MWIR and (b) LWIR. Each panel plots flame size [m²] against radiance threshold [W/(m²·sr)] for threshold segmentation only and for threshold segmentation with peak normalization.
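A minimal sketch of the threshold-based area estimate (names illustrative; each pixel is assumed to map to a known footprint area on the flame):

import numpy as np

def flame_area(radiance_img, threshold, pixel_area):
    # flame area [m2]: pixels above the radiance threshold times the
    # area that a single pixel subtends on the flame
    return np.count_nonzero(radiance_img > threshold) * pixel_area

# e.g., sweep nine thresholds from just above background to 95% of the peak:
# for thr in np.linspace(L_bg, 0.95 * radiance_img.max(), 9):
#     print(thr, flame_area(radiance_img, thr, pixel_area))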

9.8.6 Flame dynamics

The Bunsen yellow flame has significant turbulence as the air mixes with
the flame, consuming oxygen and expanding the hot gas plume. The forces
resulting from internal mass flow acting on this plume cause it to become
turbulent, resulting in the ‘dancing’ of the flame. The flame shape varied
considerably in time as the turbulence contorted the flame. Figure 9.21
shows subsequent images in a measurement series; the interval between
successive frames was 60 ms.

Figure 9.21 Bunsen flame sequence with 60-ms intervals between frames.

The flame shapes vary considerably between
subsequent frames. The turbulence bandwidth for this flame is higher
than 20 Hz. There is little correlation between the ‘hot spots’ in subse-
quent frames. The hot spots indicate the spatial locations of combustion,
and these locations change very quickly. The flame shapes show huge
differences, but the flame volume (size) appears to be similar between all
of the frames. The flame area is therefore expected to show only a small
variation in time.

9.8.7 Thermocouple flame temperature results

The measurement of flame temperatures with a thermocouple 19 proves to
be quite difficult. 20,21 Thermocouple readings are for the thermocouple it-
self, not necessarily for the flame. The indicated temperature varies greatly
from moment to moment. More importantly, the temperature reads lower
than the expected adiabatic temperature due to ‘thermal loading’ — the
thermocouple causes heat flow out of the (solid, liquid, or gas) test sam-
ple. Under thermal loading, the thermocouple never reaches equilibrium
with the heat source, and the indicated temperature seems too low.
Thermocouple temperature measurements can be improved by reduc-
ing the thermal loading by using smaller thermocouple devices, using thin-
ner wires, and by waiting for thermal equilibrium. In this experiment,
thermal equilibrium was not achieved, and hence no conclusive result was
obtained.

Bibliography

[1] Pyradi team, “Pyradi Radiometry Python Toolkit,” http://code.google.com/p/pyradi.

[2] Kaminski, W. R., “Range calculations for IR rangefinder and designators,” Proc. SPIE 227, 65–79 (1980) [doi: 10.1117/12.958748].

[3] RCA Corporation, RCA Electro-Optics Handbook, no. 11 in EOH, Burle (1974).

[4] Daniels, A., Infrared Systems, Detectors and FPAs, 2nd Ed., SPIE Press (2010).

[5] Lloyd, J. M., Thermal Imaging Systems, Plenum Press, New York (1975).

[6] Rogatto, W. D., Ed., The Infrared and Electro-Optical Systems Handbook: Electro-Optical Components, Vol. 3, ERIM and SPIE Press, Bellingham, WA (1993).

[7] Rogalski, A., Infrared Detectors, 2nd Ed., CRC Press, Boca Raton, FL (2011).

[8] Campana, S. B., Ed., The Infrared and Electro-Optical Systems Handbook: Passive Electro-Optical Systems, Vol. 5, ERIM and SPIE Press, Bellingham, WA (1993).

[9] Wolfe, W. L. and Zissis, G., The Infrared Handbook, Office of Naval Research, US Navy, Infrared Information and Analysis Center, Environmental Research Institute of Michigan (1978).

[10] Dereniak, E. L. and Boreman, G. D., Infrared Detectors and Systems, John Wiley & Sons, New York (1996).

[11] Holst, G., Electro-Optical Imaging System Performance, 5th Ed., JCD Publishing, Winter Park, FL (2008).

[12] Palmer, J. M. and Grant, B. G., The Art of Radiometry, SPIE Press, Bellingham, WA (2009) [doi: 10.1117/3.798237].

[13] Slater, P., Remote Sensing: Optics and Optical Systems, Addison-Wesley, Boston, MA (1980).

[14] Strojnik, M., Paez, G., and Granados, J. C., “Flame thermometry,” Proc. SPIE 6307, 63070L (2006) [doi: 10.1117/12.674938].

[15] Haber, L. C., An investigation into the origin, measurement and application of chemiluminescent light emissions from premixed flames, Master’s thesis, Virginia Polytechnic Institute and State University (2000).

[16] “Flame temperatures,” http://www.derose.net/steve/resources/engtables/flametemp.html.

[17] “How Hot is a Bunsen Burner Flame?,” http://www.avogadro-lab-supply.com/content.php?content_id=1003.

[18] Wikipedia, “Flames,” http://en.wikipedia.org/wiki/Flame.

[19] Wikipedia, “Thermocouple,” en.wikipedia.org/wiki/Thermocouple.

[20] Yildirim, Z., Self-defense of large aircraft, Master’s thesis, Naval Postgraduate School (2008).

[21] Okamoto, N., “Overview of Temperature Measurement,” http://www.engr.sjsu.edu/ndejong/ME_146.htm.

[22] Pyradi team, “Pyradi data,” https://code.google.com/p/pyradi/source/browse.

Problems

9.1 The purpose of this study is to verify the proposed FOV and to
optimize the design to achieve maximum SNR. The data for this
problem is given in the DP01.zip data file on the pyradi web-
site. 22
A flame sensor has an aperture area of 0.005 m2 and a proposed
FOV of 10−5 sr. The InSb detector has a peak responsivity of 2.5
A/W and normalized spectral response defined in the data file
(detectorNormalized), shown in Figure 9.22. The sensor filter
is defined in Equation (D.4) with τs = 0.0001, τp = 0.9, λc =
4.3 µm, Δλ = 0.8 µm, and s = 12.
The current flowing through the detector causes noise. The rms
detector noise is given by $i_n = \sqrt{2 q i_d \Delta f}$, where q = 1.6 × 10−19 C
is the charge on an electron, id is the DC current through the detec-
tor, and Δ f is the noise equivalent bandwidth of the sensor elec-
tronics. For this problem, when calculating noise, ignore dark cur-
rent through the detector; consider only the flux-induced current
through the detector. The sensor’s noise equivalent bandwidth Δ f
is 10⁷ Hz.
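For orientation (not part of the problem statement), the rms noise expression evaluates directly; with an illustrative flux-induced detector current of 1 µA:

import numpy as np

q = 1.6e-19       # electron charge [C]
deltaf = 1.0e7    # noise equivalent bandwidth [Hz]
i_d = 1.0e-6      # illustrative DC detector current [A]
i_n = np.sqrt(2.0 * q * i_d * deltaf)
print(i_n)        # approximately 1.8e-9 A rms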
The signal is defined as the difference between two samples: one
sample with the flame (and sky) in the FOV, and one sample with-
out the flame (only sky) in the FOV.
The system engineer requires a SNR of 8 to guarantee system per-
formance.
The flame area is 0.1 m2 , and the flame temperature is 1500 ◦ C. The
flame emissivity is 0.1 over most of the spectral band due to carbon
particles in the flame. At 4.3 µm there is a strong emissivity rise
due to the hot CO2 in the flame. The flame emissivity is defined in
Equation (D.4) with the following parameters: τs = 0.1, τp = 0.7,
λc = 4.33, Δλ = 0.45, and s = 6.
The distance between the flame and the sensor is 12 km. The
atmospheric transmittance and sky radiance data files supplied
with this problem are for the Modtran™ Tropical climatic model.
The path is oriented such that the sensor stares out to space at a
zenith angle of 88 deg. The spectral transmittance is portrayed in
Figure 9.22. The sky radiance file (skyRadiance) describes the
sky/path radiance in units of [W/(cm2 ·sr)], convert as appropri-
ate. The transmittance file (tau12km) provides transmittance at
the stated zenith angle for a range of 12 km.

9.1.1 Describe the possible means whereby the SNR can be optimized
for this system. [3]
9.1.2 Use the data in the DP01.zip data file, or use Modtran™ to cal-
culate the transmittance and path radiance. Use the path geometry
as defined above and confirm that the calculated values agree with
the graphs shown here. [4]
9.1.3 Compile a mathematical formulation for the signal, the noise, and
the SNR. This formulation will be used to evaluate your optimiz-
ing strategies, so it must be a complete description of the system
with all parameters affecting system performance. [5]
Figure 9.22 (a) Spectral data for the detector (normalized), filter, atmospheric transmittance (12 km), and flame emissivity, plotted as relative magnitude vs. wavelength [µm]. (b) Path (sky) radiance spectral data [W/(m²·sr·cm⁻¹)] vs. wavelength [µm].

Apply the Golden Rules to the mathematical formulation derived
here. [5]
9.1.4 Write a numerical implementation of the mathematical model. Plot
the spectral parameters and sky radiance, and verify that you get
the same answers as in the graphs. [10]
9.1.5 Calculate the detector currents for sensor FOV values Ω of 1 × 10−7 ,
1 × 10−6 , 1 × 10−5 , 1 × 10−4 , and 1 × 10−3 sr. [2]
Plot (1) the detector current with the flame in the FOV and (2) the
detector current with the flame not in the FOV, versus FOV. Plot
on log-log graphs; both the x and y axes must plot in log scale.
Verify that you get curves of the form shown below. Explain the
shape of the two curves. [4]
[Graph: sensor current [A] as a function of field of view [sr], on log-log axes, with one curve for the pixel with flame and one for the pixel without flame.]

9.1.6 Calculate the SNR for the following sensor fields of view: 1 × 10−7 ,
1 × 10−6 , 1 × 10−5 , 1 × 10−4 , and 1 × 10−3 sr. [2]
Plot the SNR versus FOV. Plot on log-log graphs and verify that
you get a curve of the form shown below. Explain the shape of the
curve. [2]
[Graph: SNR (contrast current / noise) as a function of field of view [sr], on log-log axes.]

From this curve, determine whether a FOV of 1 × 10−5 sr will pro-
vide the necessary SNR, as specified by the system engineer. [1]
9.1.7 Calculate the SNR for each combination point on a two-dimension-
al grid of various spectral widths, starting at 0.1 µm and going to
0.8 µm in increments of 0.1 µm and fields of view of 1 × 10−7 ,
1 × 10−6 , 1 × 10−5 , 1 × 10−4 , and 1 × 10−3 sr. Plot the SNR in a
graph similar to the graph below. [4]
Assuming the graph below to be correct, comment on the optimal
FOV and filter spectral width for the sensor. [2]
[Graph: SNR as a function of spectral filter width [µm], with curves for FOVs of 0.1, 1, and 10 msr.]

9.2 The data for this problem is given in the DP02.zip data file on
the pyradi website. 22
A multi-color sensor operates in three bands: the visual band,
the 3–5-µm band, and the 8–12-µm band. The sensor observes
three infinitely large targets at a range of 2 km, with tempera-
tures T ∈ {300, 1000, 6000} K, with emissivity  = 1, through three
different atmospheres. The purpose with this investigation is to
study the target signatures in the different bands, and to calculate
the detector signals in each case.
The atmospheric transmittance and path radiance are supplied as
Modtran™ files, in electronic format. The Modtran™ results
were calculated for a uniform path with length of 2 km and can be
used as supplied. The Modtran™ tape5, tape6, and tape7
files contain the input data, user readable output data, and tabular
data respectively. The files are as follows:
File     Description                              Notes
atmo1    23-km visibility (Rural), 5 g/m³ H₂O     Moderate conditions
atmo2    2-km visibility (Urban), 5 g/m³ H₂O      Poor visibility
atmo3    23-km visibility (Rural), 39 g/m³ H₂O    High humidity

In the tape7 files, wavenumber is given in the ‘FREQ’ column,
transmittance in the ‘TOT TRANS’ column, path radiance in the
‘TOTAL RAD’ column, and −γR is given in the ‘DEPTH’ column.
Note that the path radiance is given in units of [W/(cm2 ·sr·cm−1 )].
The sensor has a FOV of 10−6 sr. The sensor FOV is completely
filled by the target. The sensor optical aperture diameter is 100 mm.
In this analysis you only have to consider the target flux and path
radiance between the target and sensor — ignore background flux
and flux inside the sensor.
The detector spectral response is given by Equation (D.5) with the
following values:

Band λc n a k
Silicon 1.20 4.30 3.50 8
3–6 µm 6.00 4.30 3.50 30
8–12 µm 12.00 4.30 3.50 60
An optical filter is used to limit the spectral width. The filter re-
sponse is given by Equation (D.4) with the following values:
Band τs τp s Δλ λc
Silicon 0.001 0.90 20 0.20 0.55
3–6 µm 0.001 0.90 12 1.20 4.20
8–12 µm 0.001 0.90 6 2.50 10.00

9.2.1 Draw a picture of the system. Write a mathematical formulation
describing the detector current; include flux transfer, detector re-
sponse, etc. Describe all elements in the model and provide the
relevant numerical values for all parameters. [6]
9.2.2 Apply the Golden Rules to the mathematical formulation given
above. [4]
9.2.3 Write a numerical implementation of the problem in a computer
language. Describe the structure of the model and provide all
numeric values (spectral and scalar). Use at least 100 samples in
each spectral band at constant wavenumber increments. [6]
Confirm the accuracy of your filter and detector implementations
by plotting the spectral values. Confirm the correct read-in and
processing of your atmospheric results by plotting the spectral at-
mospheric transmittance and path radiance. [4]
9.2.4 Compile a series of graphs of the spectral irradiance showing (1)
the unfiltered path irradiance, (2) the targets’ unfiltered thermal
irradiance, and (3) the filtered sum of path plus target irradiance
on the entrance aperture of the sensor (i.e., filter times the sum of
the first two components). The graph must show the irradiance
data vs. wavelength, with no detector weighting, for all combi-
nations of sensors, atmospheres, and targets (27 graphs in total).
The graphs should resemble those shown below but for all three
spectral bands. ‘TotΔλ’ is the total irradiance passed by the filter.
‘Path’ is the path radiance. ‘τa Eθ ’ is the source thermal irradiance
observed through the atmosphere. [6]
[Graphs: example spectral irradiance [W/(m²·µm)] results in the 8–12-µm band for the nine combinations of target temperature (300, 1000, and 6000 K) and atmosphere (23-km Rural with 5 g/m³ H₂O, 2-km Urban with 5 g/m³ H₂O, and 23-km Rural with 39 g/m³ H₂O), each showing the ‘τa Eθ’, ‘TotΔλ’, and ‘Path’ curves vs. wavelength.]

9.2.5 Calculate the current flowing through each of the detectors when
viewing the different targets through the various atmospheres. Us-
ing the moderate atmosphere as a baseline, calculate the ratios of
currents for the two adverse atmospheric conditions. Compare
your results with the following table (current values are given in
scientific notation and the ratio of currents are given in brackets).
The results will not be exactly the same but should be of the same
magnitude. [10]
Temperature = 300 K
Atmosphere, H₂O content            Silicon                  3–6 µm                  8–12 µm
23-km visibility Rural, 5 g/m³     1.09 × 10⁻²⁵ (1.000)     1.87 × 10⁻⁸ (1.000)     6.47 × 10⁻⁷ (1.000)
2-km visibility Urban, 5 g/m³      1.09 × 10⁻²⁵ (1.000)     1.84 × 10⁻⁸ (0.982)     6.42 × 10⁻⁷ (0.992)
23-km visibility Rural, 39 g/m³    1.09 × 10⁻²⁵ (1.000)     2.20 × 10⁻⁸ (1.172)     7.28 × 10⁻⁷ (1.124)
                                   [A] (ratio)              [A] (ratio)             [A] (ratio)

Temperature = 1000 K
Atmosphere, H₂O content            Silicon                  3–6 µm                  8–12 µm
23-km visibility Rural, 5 g/m³     1.35 × 10⁻¹¹ (1.000)     4.27 × 10⁻⁵ (1.000)     2.30 × 10⁻⁵ (1.000)
2-km visibility Urban, 5 g/m³      5.97 × 10⁻¹³ (0.044)     2.63 × 10⁻⁵ (0.617)     1.60 × 10⁻⁵ (0.694)
23-km visibility Rural, 39 g/m³    1.32 × 10⁻¹¹ (0.972)     3.15 × 10⁻⁵ (0.737)     4.28 × 10⁻⁶ (0.186)
                                   [A] (ratio)              [A] (ratio)             [A] (ratio)

Temperature = 6000 K
Atmosphere, H₂O content            Silicon                  3–6 µm                  8–12 µm
23-km visibility Rural, 5 g/m³     7.18 × 10⁻³ (1.000)      1.73 × 10⁻³ (1.000)     2.79 × 10⁻⁴ (1.000)
2-km visibility Urban, 5 g/m³      2.33 × 10⁻⁴ (0.033)      1.06 × 10⁻³ (0.614)     1.91 × 10⁻⁴ (0.685)
23-km visibility Rural, 39 g/m³    7.09 × 10⁻³ (0.988)      1.29 × 10⁻³ (0.750)     4.57 × 10⁻⁵ (0.164)
                                   [A] (ratio)              [A] (ratio)             [A] (ratio)
9.2.6 Analyze the results obtained in the previous two questions. Com-
ment on the relevance of each respective sensor to observe each
target. Review the effect of the different atmospheric conditions
on the observed irradiance values. Make recommendations as to
which spectral bands to use for different sources and different
atmospheric conditions. For example, complete a table such as
shown below. [6]
Best spectral band for target and atmosphere
                            300 K    1000 K    6000 K
Moderate atmosphere           ?        ?         ?
Low-vis atmosphere            ?        ?         ?
High-humidity atmosphere      ?        ?         ?

Finally, conclude on the effect of path radiance contribution to the
total signature. [2]

9.3 Derive Equation (5.55) from Equation (5.58). [3]


9.4 Use Equation (5.55) to derive Equation (5.58). [3]
9.5 Use Equation (5.59) to derive Equation (5.60). [3]
9.6 Calculate the NETD of a thermal detector that is limited by temp-
erature-fluctuation noise and photon noise only (no other noise
sources); both noise sources are present at the same time in the
detector. Use Equation (5.55) and Equation (5.59) to derive an
equation for the NETD. Calculate and plot the NETD versus G
with the following detector parameters: a square pixel with di-
mensions 50 µm × 50 µm, absorption is 50%, pixel fill factor is 0.8,
optics f-number is f/1.5, frame rate is 25 Hz. The detector does
not have a cold shield. The background temperature is 300 K.
Plot the results for detector temperatures of 77 K, 195 K, and 300 K.
[10]
9.7 Repeat Problem 9.6 above, but add the effect of resistor Johnson
noise. Evaluate the NETD for different resistor values. [5]
Chapter 10
Golden Rules

The golden rule is that there are no golden rules.


George Bernard Shaw

10.1 Best Practices in Radiometric Calculation

Radiometric calculation can be rather tricky. The guidelines in this chapter
are offered as a ‘best practice’ to help readers avoid preventable mistakes.
Contrary to Mr. Shaw’s statement, there may be a need for golden rules in
radiometry!

10.2 Start from First Principles

Figure 10.1 summarizes almost everything one must remember when do-
ing radiometric calculations.
Always view the problem as an application of Figure 2.11 and Equa-
tion (2.31), repeated here as Figure 10.1. All problems can be rooted in this sim-
ple model: start here, and extend in wavelength, medium effects and/or
geometry. In particular: (1) consider the source area as some surface in

θ0 θ1
dA0 R01
dA1

L0λ dA0 cos θ0 dA1 cos θ1 τ01 dλ


d3 Φ λ =
R201

Figure 10.1 Flux transfer between two elemental surfaces.

365
366 Chapter 10

Use this... ... or this


Ω0A1 Ω 1A 0

A0 A0
Ω0 Ω1

A1 A1

Never, ever this... ... or this!


Ω0A0 Ω1A1

A0 Ω0 A0 Ω1
A1 A1

Figure 10.2 Legal combinations of solid angle and source area.

space and the receiver area as some surface in space, and integrate over
both surfaces, (2) consider the spectral properties and integrate over wave-
length or wavenumber, and (3) consider medium effects such as transmit-
tance and path radiance.
Note that on the right-hand side of the flux transfer equation there are only
two radiometric quantities, L and λ, and one medium property, τ01; the remain-
ing quantities are all geometric (nonradiometric) quantities. Radiometry
is therefore as much a study of geometry as it is of optical flux. Get the
geometry right, and the solution falls in place. If there is not a clear picture
of the geometry, the correct solution is out of reach.

10.3 Understand Radiance, Area, and Solid Angle

It is easy to get confused by source area and receiver area — when to use
which? The simple rule is to think of it, as if you are standing on one of
the two surfaces and you are viewing the other surface. You cannot view
the one you are standing on, you can only view the other. Your feet and
eyes cannot rest on the same surface. This is shown in Figure 10.2.

10.4 Build Mathematical Models

Derive a mathematical model from Equation (2.31). Start simple and add
components as required by the problem at hand. Using the drawing as
input, add factors for the atmosphere, lenses, optical filters, detectors,
choppers, and amplifiers. Do not add factors for components that are
not specified in the problem statement.
Table 10.1 SI base units.

Unit name    Unit symbol    Base quantity                 Dimension symbol
meter        m              length                        L
kilogram     kg             mass                          M
second       s              time                          T
ampere       A              electric current              I
kelvin       K              thermodynamic temperature     Θ
mole         mol            amount of substance           N
candela      cd             luminous intensity            J

10.5 Work in Base SI Units

As early as possible, convert problem units to base SI units, 1,2 shown in Ta-
ble 10.1, or directly-derived units. The value of a physical quantity can be
expressed as the product of a numerical value (e.g., 4.3) and a unit (e.g., µm),
both of which are algebraic factors that can be manipulated by the rules of
algebra. The symbol µm is related to the symbol m by µm = 1 × 10−6 m,
thus µm/(1 × 10−6 ) = m. One can therefore write λ = 4.3 µm, or divide
both sides to obtain λ/µm = 4.3. Download and study the free IUPAC
Green Book 2 for more information.
It happens frequently that distances and altitudes are given in [km]
(e.g., the sun’s diameter and distance in Section 3.7), temperature is in
Celsius, or that detector D∗ values or sizes contain units of [cm]. Work in base SI
units, rather than problem-domain units. Convert to SI units at the earliest
possible time.
Some of the few exceptions to this rule are that wavelength is normally
specified in [µm] or [nm], wavenumber is specified in [cm⁻¹], and D∗ is
specified in units of [cm·√Hz/W].

10.6 Perform Dimensional Analysis

Test derived equations by manipulating or calculating the units for each
of the variables. This is known as dimensional analysis 2,3 or homogeneous
equation checking. 4–7 This is a very effective method to ensure that you
are using the correct areas in the flux calculation equations. It will also
ensure that the equations and data are matched correctly.
Not all length dimensions [L] have the same meaning even though
they share the same SI unit: meter [m] (see Table 10.1 for dimensional
symbols). Strict adherence to standards requires the use of the dimension
symbols, such as [L] or [M], but using SI units [m] or [kg] works just as well.
Subscript-mark all length dimensions with the meaning and location of
such dimensions. For example, use [m0²] or [L0²] for source area, [m1²] or [L1²]
for receiver area, [md²] or [Ld²] for detector area, and [mR] or [LR] for range.
Use different subscripts for different surfaces, even if they all refer to area.
Note that solid angle has units of [m0²/mR²] or dimensions of [L0²/LR²] when
viewing the source (surface 0) from a sensor (surface 1). Once this is done,
do not ‘cancel’ different types of lengths in the dimensional analysis, i.e., an
ms cannot ‘cancel’ an md because they are different types of length. Ensure
that all appropriate units/dimensions are present and cancel correctly.
The dimensional analysis for Equation (2.31) is as follows:

$$ d^2\Phi\,[\mathrm{W}] = \frac{L\,dA_0\cos\theta_0\,dA_1\cos\theta_1}{R^2} \rightarrow \left[\frac{\mathrm{W{\cdot}m_R^2}}{\mathrm{m_0^2{\cdot}m_1^2}}\right]\left[\mathrm{m_0^2}\right]\left[\mathrm{m_1^2}\right]\left[\frac{1}{\mathrm{m_R^2}}\right] \rightarrow [\mathrm{W}], $$

or, when considering spectral variables,

$$ d^2\Phi_\lambda\,[\mathrm{W}] = \frac{L_\lambda\,dA_0\cos\theta_0\,dA_1\cos\theta_1\,d\lambda}{R^2} \rightarrow \left[\frac{\mathrm{W{\cdot}m_R^2}}{\mathrm{m_0^2{\cdot}m_1^2{\cdot}\mu m}}\right]\left[\mathrm{m_0^2}\right]\left[\mathrm{m_1^2}\right]\left[\frac{1}{\mathrm{m_R^2}}\right]\left[\frac{\mathrm{\mu m}}{1}\right] \rightarrow [\mathrm{W}]. $$

Operations such as squares or square roots are also applied to units:

$$ i_n = \sqrt{2qIB} \rightarrow \sqrt{\left[\frac{\mathrm{Q}}{1}\right]\left[\frac{\mathrm{A}}{1}\right]\left[\frac{1}{\mathrm{s}}\right]} \rightarrow [\mathrm{A}]. $$

Be especially aware of constants with units, particularly the nonsymbol
constants: they may not have symbols, but they certainly have units! Wave-
number is related to wavelength by λ = 10⁴/ν̃, where wavenumber has
units of [cm⁻¹], and wavelength has units of [µm]. The constant 10⁴
here has units of [cm⁻¹·µm]. The conversion of spectral densities between
wavenumber and wavelength is given by

$$ \frac{d\lambda}{d\tilde{\nu}} = -\frac{10^4}{\tilde{\nu}^2} \rightarrow \left[\mathrm{\mu m{\cdot}cm^{-1}}\right]\left[\frac{1}{(\mathrm{cm^{-1}})^2}\right] \rightarrow \left[\frac{\mathrm{\mu m}}{\mathrm{cm^{-1}}}\right]. $$
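As a quick numeric sketch of this conversion (illustrative values): a spectral density per wavenumber is converted to a density per wavelength by multiplying with |dν̃/dλ| = ν̃²/10⁴:

# convert spectral radiance density from per-wavenumber to per-wavelength
nu = 2500.0                    # wavenumber [cm-1], i.e., a wavelength of 4 um
L_nu = 0.2                     # illustrative density [W/(m2.sr.cm-1)]
L_lam = L_nu * nu**2 / 1.0e4   # [W/(m2.sr.um)]
print(L_lam)                   # 125.0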

10.7 Draw Pictures

Draw a picture of the system and the spatial geometry of the problem, such
as shown in Figures 10.3 and 10.4. Write down what effect that component
has on the flow or unit conversion of the flux or signal, and what effect the
medium has on the flux/signal.

Figure 10.3 Diagram of a source, medium, and sensor: source area dA0 (at angle θ0), emissivity, atmosphere over range R01, optical filter, optics with focal length f, detector, and amplifier, each annotated with its effect or loss on the flux; only flux from dA0, not the whole object, is transferred.

Figure 10.4 Spatial relationships in source and sensor. For large, fixed distances compared to the optics or source size there is no need for complex integrals over the optics or source areas, and cos θ = 1; for small, variable distances, complex integrals over the optics or source areas are required.
Table 10.2 Component scaling and conversion.

Component              Value         Units        Converts
source area            1 × 10⁻⁵      ms²          radiance→intensity
emissivity             ελ            -            scales radiance spectrally
atmosphere             τλ            -            scales radiance spectrally
range                  2500          mR           R²: intensity→irradiance
filter                 τf            -            scales irradiance
optics area            1 × 10⁻⁶      m0²          irradiance→flux
detector               1.6           A/W          flux→current
amplifier              2.5 × 10⁴     V/A          current→voltage
optical focal length   0.1           mf           forms sensor field of view
detector size          0.001         md           forms sensor field of view
sensor FOV             1 × 10⁻⁴      md²/mf²      ω is the field of view
                                     ms²/mR²      ω: radiance→irradiance
The drawing should specify which component provides the flux and
which component receives the flux. It is very easy to confuse the optical
aperture of a lens with the detector area as the receptor of flux.
Ensure that the drawing clearly shows not only the full extent or size
of the source object but also which part is visible to the sensor. The sensor
can only sense the flux radiating within its FOV — the object’s radiation
outside the sensor FOV does not contribute to the sensor signal.
For example, consider a system comprising a spectral source, an at-
mospheric medium, a lens, an optical filter, a detector, and an amplifier.
The diagram could look like Figure 10.3.
In this picture, pay detailed attention to the source and receiving ar-
eas; in particular, make sure which part receives the flux. How is the flux
transferred from one block to the next — are there losses or unit conver-
sions along the way? The picture should also indicate if the cosine factors
in Equation (2.31) can degenerate to unity or if these must be retained.
Figure 10.4 illustrates these concepts.
For each of the objects in the figure, write down the type of com-
ponent, value, the units, and the type of conversion taking place in the
component. Consider the units of the component, i.e., [A/W] means the
component receives watts and outputs amperes. An area with units [m²]
can convert irradiance in [W/m²] to flux [W]. An example is shown in
Table 10.2.

Figure 10.5 Spatial integration of source surface area: the solid angle of the optical barrel, with half-angle θb, is calculated as the integral of dA0 over the inside of the barrel, i.e., everywhere except over the lens.
It may also be beneficial to draw the shapes of the source and receiver
true to the real-world object (e.g., the barrel containing the optical elements
and the detector). Such a drawing will immediately indicate if it is neces-
sary to perform a spatial integral $A = \int_A dA$ over one or both of the areas
in Equation (2.31).

10.8 Understand the Role of π

Remember when to remember π. See Section 2.7 for the relationship be-
tween exitance and radiance for a Lambertian radiator. When working
from first principles, there is no need to remember when to use π because
it is taken care of in the mathematics.
The scripting Planck-law functions given in Appendix D provide exi-
tance in [W/m2 ], not radiance in [W/(m2 ·sr)]. When using these functions,
divide the result by π to get radiance.
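For example, assuming the pyradi ryplanck.planck interface (a sketch, not prescriptive):

import numpy as np
from pyradi import ryplanck

wl = np.linspace(3.0, 5.0, 201)               # wavelength [um]
exitance = ryplanck.planck(wl, 1000.0, 'el')  # spectral exitance [W/(m2.um)]
radiance = exitance / np.pi                   # Lambertian radiance [W/(m2.sr.um)]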

10.9 Simplify Spatial Integrals

Simplify spatial integrals where possible. In Figure 10.5 the solid angle of
the optical barrel is integrated over the inside of the box except over the
lens. On the assumption that the optical barrel has uniform radiance, the
barrel geometry can be ‘collapsed’ onto, and integrated over, the portion
of the sphere shown in the figure. The spherical portion is rotationally
symmetrical around the optical axis. In this case the projected solid angle
of the barrel is given by ω = π − π sin²θb.
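As a one-line numeric check of this result (a sketch): for a barrel half-angle of, say, θb = 30 deg,

import numpy as np

theta_b = np.deg2rad(30.0)
omega = np.pi - np.pi * np.sin(theta_b)**2   # projected solid angle of barrel
print(omega)                                 # 2.356 sr, i.e., pi*cos(theta_b)**2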
Figure 10.6 Graphical depiction of spectral variables: detector responsivity [A/W] and filter and atmospheric transmittance plotted against wavelength [µm].

10.10 Graphically Plot Intermediate Results

To confirm visually that the calculation is correct, plot the calculated values
graphically and inspect the graphs very carefully. For example, Figure 10.6
displays filter and detector spectral responses that were calculated and the
atmospheric transmittance loaded from file; all are plotted to confirm that
no error was made.
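A minimal matplotlib sketch of such a check plot (the arrays and file name are illustrative):

import numpy as np
import matplotlib.pyplot as plt

wl = np.linspace(0.2, 1.2, 500)                  # wavelength [um]
detector = np.clip((wl - 0.2) / 0.8, 0.0, 1.0)   # calculated response (illustrative)
filt = np.exp(-(2.0 * (wl - 0.7) / 0.4)**6)      # calculated filter (illustrative)
# atmo = np.loadtxt('tauatmo.txt')               # transmittance loaded from file

plt.plot(wl, detector, label='Detector')
plt.plot(wl, filt, label='Filter')
plt.xlabel('Wavelength [um]')
plt.ylabel('Responsivity [A/W] or Transmittance')
plt.legend()
plt.show()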

10.11 Follow Proper Coding Practices

When you code your problem in a computer language, ensure that you
copy and update the code accurately. It happens too often that code is
copied but the variables are not updated in the new context. The better
solution is to use functions for repeating calculations — the benefit is that
a change made once will apply to all use cases, and varying data can be
clearly defined as function parameters.
Keep track of the value of constants’ exponents and the e-notation in
the computer scientific format: 10e4 is 10 × 104 = 105 and not 104 =
1 × 104 .
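A small sketch of both practices, a single reusable function rather than copied code, and an explicit check of the e-notation pitfall:

import numpy as np

def inband_flux(wl, L, tau, area, omega):
    # one reusable flux-transfer integral: a change made here applies
    # to every use case; varying data enter as function parameters
    return area * omega * np.trapz(L * tau, wl)

# e-notation in computer scientific format: 10e4 is 10 x 10**4 = 10**5
assert 10e4 == 1e5
assert 10e4 != 1e4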

10.12 Verify and Validate

Even the most rudimentary radiometry calculation can go wrong. Find
ways to verify and validate the results. Section B.3 describes a modeling
verification and validation framework. In your calculation, identify the
conceptual model and computer (calculation) model and carefully qualify,
verify, and validate the processes and results along the calculation chain.
Several techniques are available 8,9 for this purpose.
Golden Rules 373

10.13 Do It Right — the First Time!

Refrain from taking shortcuts or out-of-the-expected actions when doing
an ‘initial’ investigation or calculation. These actions tend to be costly
in the long run. When executing a task, spend just a little more time to
safeguard your current action against future events. In many cases, you
would have to revisit the work later, or you might want to re-apply today’s
work products in another task. Do it right the first time!
In the context of this book, ‘doing it right — the first time’ means
developing a proper model from first principles instead of using a quickie
formula — quite possibly incorrectly! It means writing code instead of
repeatedly typing the same thing in a command window. It means creating
concepts, design packages, and computer code functions for re-use instead
of endless copy-and-paste repeats. It means developing a toolset today for
use tomorrow and the day after. Most importantly, it means documenting
and archiving your work, adding it to a living and growing repository. All
of this is done to obtain good return on your investment of time and effort.
Some reasons why a little more effort may be warranted: (1) There
may be a risk in fixing a task in future; better fix it while doing it the
first time. (2) There are additional costs in fixing a task later; you have
to repeat the start-up context-switching activities. (3) Do the ‘expected’ —
execute, document, and archive in a manner, place, or method that most
people would expect — because it will ease the work for others, and even
for yourself, later. (4) Keep in mind that you are most likely the future
client of your current work, so be kind to yourself!

Bibliography

[1] ISO, “Quantities and units – Part 1: General,” Standard ISO 80000-1:2009, International Organization for Standardization (2009).

[2] Cohen, E., Cvitas, T., Frey, J., Holmström, B., Kuchitsu, K., Marquardt, R., Mills, I., Pavese, F., Quack, M., Stohner, J., Strauss, H., Takami, M., and Thor, A., Quantities, Units and Symbols in Physical Chemistry, 3rd Ed., IUPAC Green Book, IUPAC & RSC Publishing, Cambridge, UK (2008).

[3] Wikipedia, “Dimensional analysis,” en.wikipedia.org/wiki/Dimensional_analysis.

[4] Wikipedia, “SI Units,” en.wikipedia.org/wiki/SI.

[5] NIST, “SI base units,” physics.nist.gov/cuu/Units/units.html.

[6] Wikibooks, “The SI System of Units,” en.wikibooks.org/wiki/A-level_Physics/The_SI_System_of_Units.

[7] Wright, D., “Units, dimensions and conversion factors,” www.mech.uwa.edu.au/DANotes/units/units.html.

[8] Schlesinger, S., “SCS Technical Committee on Model Credibility: Terminology for Model Credibility,” Simulation 32, 103–104 (1979).

[9] Sargent, R. G., “Verification and Validation of Simulation Models,” Proc. 1998 Winter Simulation Conference 1, 121–130 (1998).
Cornelius J. (Nelis) Willers completed a B.Eng
(Hons) Electronics Engineering degree at the Univer-
sity of Pretoria in 1976 and an MS (Optical Engineer-
ing) degree at the University of Arizona in 1983. He
is registered as a professional engineer. His 36 years
of work experience includes electro-optical system de-
velopment, system architecture and systems engineer-
ing, software development, and infrared scene sim-
ulation. His most notable achievements include be-
ing the chief architect and technical lead in establishing an imaging mis-
sile seeker technology base, and in the process, spearheading advanced
physics-based infrared image simulation. The simulation system is cur-
rently used for a number of different applications in laboratories across the
globe. His current interests include infrared signature measurement and
data analysis, infrared system modeling and simulation, and the devel-
opment of aircraft self-protection systems. He is leading the open-source,
Python-based pyradi radiometry toolkit project. He has published a large
number of technical and research reports. His conference paper topics
include infrared system modeling and simulation, and the modeling of
military conflict using agent-based techniques. He teaches radiometry and
infrared system design in short courses and at a masters-degree level at
the University of Pretoria.
Nomenclature

α Absorptance, absorptivity, absorption (fraction)
α Absorption attenuation coefficient with units [m−1 ]
αλ Spectral absorption with units [m−1 ]
αB Temperature coefficient of resistance with units [K−1 ]
β Diode p-n junction nonideal factor (unitless)
β Optical thickness (unitless)
γ Attenuation coefficient with units [m−1 ]
Γ Γ point: smallest energy difference in bandgap (condition)
δ() Dirac delta function (unitless)
Δε Spatial texture variation in emissivity (unitless)
Δρ Spatial texture variation in reflectivity (unitless)
ΔΦ Change in optical flux with units [W] or [q/s]
ΔΦe Change in radiant optical flux with units [W]
ΔΦ p Change in optical photon flux with units [q/s]
Δf Noise equivalent bandwidth with units [Hz]
Δne Change in number of electrons with units [quanta]
Δnh Change in number of holes with units [quanta]
ΔT Change in temperature with units [K]
ε Emissivity (unitless)
ε Electric field across a distance with units [V/m]
ελ Spectral emissivity (unitless)
η Detector quantum efficiency (unitless)
ηa ,ηb Image fill efficiency along the a and b directions (unitless)
ηs Scanning efficiency in an image-forming system (unitless)
θ Angle with units [rad]
θ Dimensional symbol for temperature, or thermal (unitless)
λ Wavelength with units [µm]
λc Cutoff wavelength with units [µm]
μ Carrier mobility with units [cm2 /(s·V)]
μe Electron carrier mobility with units [cm2 /(s·V)]
μh Hole carrier mobility with units [cm2 /(s·V)]
ν Frequency with units [Hz] or [s−1 ]
ν̃ Wavenumber with units [cm−1 ]
ρ Material density with units [g/m3 ]


ρ Reflectance, reflectivity, reflection (fraction)
ρλ Spectral reflection (unitless)
ρd Diffuse reflection (unitless)
ρs Specular reflection (unitless)
σ Material electrical conductivity with units [1/(Ω·m)]
σ Scattering attenuation coefficient with units [m−1 ]
σ Surface roughness (root-mean-square) with units [m]
σe Stefan–Boltzmann constant with units [W/(m2 ·K4 )]
σq Stefan–Boltzmann constant with units [q/(s·m2 ·K3 )]
τ Transmittance, transmissivity, transmission (fraction)
τλ Spectral transmittance (unitless)
τθ Thermal time constant with units [s]
τa Atmospheric transmittance (unitless)
τc Contrast transmittance (unitless)
τe Electron lifetime with units [s]
τh Hole lifetime with units [s]
τRC Electronic resistor–capacitor time constant with units [s]
Φ Optical flux with units [W] or [q/s]
Φλ Optical flux spectral density with units [W/µm]
Φe Radiant optical flux with units [W]
Φp Optical photon flux with units [q/s]
Φq Optical photon flux with units [q/s]
ψ Solar irradiance geometry factor with units [sr/sr]
ψ Wave function for a free electron (unitless)
ω Electrical frequency with units [rad/s]
ω Geometric solid angle with units [sr]
ω Pixel field of view solid angle with units [sr]
Ω Projected solid angle with units [sr]
Ωr Field of regard in an image-forming system with units [sr]
A Area with units [m2 ]
Ad Detector area with units [m2 ]
As Source area in units [m2 ]
Av Voltage gain of an amplifier or filter with units [V/V]
BRDF Bidirectional reflection distribution function with units [sr−1 ]
c Specific heat with units [J/(g·K)]
c Speed of light in vacuum with units [m/s]
C Contrast (unitless)
C, Cs Thermal detector element heat capacity with units [J/K]
Cv Contrast threshold (unitless)
CODATA Committee on Data for Science and Technology
D Diameter of an optical aperture or lens with units [m]
D Detectivity with units [W−1 ]
D Diffusion constant with units [m²/s]
D∗ Specific detectivity with units [cm·√Hz/W]
D∗λ Spectral specific detectivity with units [cm·√Hz/W]
D∗eff Wideband specific detectivity with units [cm·√Hz/W]
De Diffusion constant for electrons with units [cm²/s]
Dh Diffusion constant for holes with units [cm²/s]
e Electron with charge q with units [C]
E Energy (semiconductor energy level) with units [J] or [eV]
E Irradiance (Areance) with units [W/m2 ]
Eλ Irradiance (Areance) spectral density with units [W/(m2 ·µm)]
EC Lowest conduction band energy level with units [J] or [eV]
EF Fermi level with units [J] or [eV]
Eg Semiconductor energy bandgap with units [J] or [eV]
Eq Background photon flux with units [q/(s·m2 )]
EV Highest valence band energy level with units [J] or [eV]
f Electrical frequency with units of [Hz]
f Focal length with units [m]
F View factor or configuration factor with units [sr/sr]
f fill Fill factor, fraction of area filled (unitless)
FF Frame rate in an image-forming system with units [Hz]
fr Bidirectional reflection distribution function with units [sr−1 ]
FT Fourier transform
f −3 dB −3 dB electronic bandwidth with units [Hz]
f /# F-number, alternative notation (unitless)
F# F-number of a lens, with numerical value # (unitless)
FAR False alarm rate with units [s−1 ]
FOM Figure of merit
FOV Field of view with units [rad]
FTIR Fourier transform infrared
G Detector photon gain with units [electrons/photon]
G Heat conductance with units [W/K]
Gc Bias circuit gain (unitless)
G ph Photoconductive gain with units [electrons/photon]
gth Rate of thermal carrier generation with units [quanta/s]
h Planck constant with units [J·s]
h̄ h̄ = h/(2π) with units [J·s], where h is the Planck constant
i Current with units [A]
i Noise current density with units [A/√Hz]
I Intensity (Pointance) with units [W/sr]

I Incident ray unit vector (unitless)
I0 Reverse-bias-saturation current with units [A]
Iλ Intensity (Pointance) spectral density with units [W/(sr·µm)]
Ib Bias current with units [A]
igr Generation–recombination noise with units [A] or [A/√Hz]
in Noise current with units [A] or [A/√Hz]
I ph Photocurrent with units [A]
Isat Reverse-bias-saturation current with units [A]
J Diffusion current density with units [A/m2 ]
Jd Drift current density with units [A/m2 ]
k Boltzmann constant with units [J/K]
Kλ Spectral photopic luminous efficacy with units [lm/W]
K′λ Spectral scotopic luminous efficacy with units [lm/W]
Kμ Sky-ground radiance ratio in thermal spectral bands (unitless)
Kν Sky-ground radiance ratio in the visual spectral band (unitless)
kf Time-bandwidth product with units [s·Hz]
kF Reciprocal lattice sphere radius with units [m]
kn Ratio of noise equivalent bandwidth to −3 dB bandwidth
L Radiance (Sterance) with units [W/(m2 ·sr)]
Lλ Radiance (Sterance) spectral density with units [W/(m2 ·sr·µm)]
Lν Diffusion length for carriers with units [cm]
Le Diffusion length for electrons with units [cm]
Lh Diffusion length for holes with units [cm]
Lp Detector packaging inductance with units [H]
LWIR Long-wave infrared
m Mass with units [g] or [kg]
M Exitance (Areance) with units [W/m2 ]
Mλ Exitance (Areance) spectral density with units [W/(m2 ·µm)]
Me Radiant exitance with units [W/m2 ]
me Electron mass with units [g]
me∗ Effective electron mass in units of me
m∗h Effective hole mass in units of me
MDT Minimum detectable temperature with units [K]
MRT Minimum resolvable temperature with units [K]
MTF Modulation transfer function
MTV Magnesium-Teflon®-Viton®
MWIR Medium-wave infrared
n Electron concentration with units [cm−3 ]
n Index of refraction (unitless)
N Number of objects, pixels, or detector elements (unitless)
N Surface normal unit vector (unitless)
na Acceptor concentration with units [cm−3 ]
nd Donor concentration with units [cm−3 ]
ne Number of electrons (unitless)
nh Number of holes (unitless)
ni Intrinsic carrier concentration with units [cm−3 ]
nn Electron concentration in n-type material with units [cm −3 ]
np Electron concentration in p-type material with units [cm −3 ]
nr Real component of the complex index of refraction (unitless)
NA Numerical aperture (unitless)
NEΔρ Noise equivalent reflectance (unitless)
NEΔT Noise equivalent temperature difference with units [K]
NEE Noise equivalent irradiance with units [W/m2 ]
NEL Noise equivalent radiance with units [W/(m2 ·sr)]
NEM Noise equivalent exitance with units [W/m2 ]
NEP Noise equivalent power with units [W]
NER Noise equivalent reflectance (unitless)
NETC Noise equivalent target contrast with units [K]
NETD Noise equivalent temperature difference with units [K]
NIR Near infrared
OTF Optical transfer function
p Hole concentration with units [cm−3 ]
P(θ ) Scattering phase function (unitless)
Pd Probability of detection (unitless)
pn Hole concentration in n-type material with units [cm−3 ]
Pn Probability of false detection (unitless)
pp Hole concentration in p-type material with units [cm−3 ]
PSD Power spectral density with units [A2 /Hz] or [V2 /Hz]
PSF Point spread function (unitless)
q Absolute humidity with units [g/m3 ]
q Electron charge with units [C]
q Quanta, as in photon count (unitless)
Q Energy with units [W·s] or [J]
r Radius with units [m]
R Range or distance with units [m]
R Responsivity with units [A/W] or [V/W]

R Mirror reflection unit vector (unitless)
R Detector responsivity scaling factor with units [A/W] or [V/W]
R Equivalent path length with units [m]
R Normalized spectral shape of spectral responsivity (unitless)
Rλ Detector spectral responsivity with units [A/W] or [V/W]
R0 Dynamic resistance under zero-bias conditions with units [Ω]
Rd Detector resistance with units [Ω]
Reλ Spectral detector responsivity with units [A/W] or [V/W]
Rqλ Spectral detector responsivity with units [C] or [J/A]
Reff Effective (wideband) responsivity with units [A/W] or [V/W]
RL Load resistor or bias resistor with units [Ω]
RV Meteorological range (visibility) with units [km]
RH Relative humidity, unitless expressed as %
rms Root-mean-square (unitless)
S, S1 , S2 Seebeck coefficients for thermoelectricity with units [V/K]
S Sensor response

S Reflected ray unit vector (unitless)
Sλ Sensor spectral response (unitless)
S(ω ), S( f ) Power spectral density with units [A2 /Hz] or [V2 /Hz]
SCR Signal-to-clutter ratio (unitless)
SNR Signal-to-noise ratio (unitless)
SWIR Short-wave infrared
t Time with units [s]
T Temperature with units [K]
T Throughput or étendue with units [sr·m2 ]
Tb Background temperature [K]
Tfilter Temperature of an optical filter with units [K]
tp Signal pulse width with units [s]
Ts Source temperature with units [K]
TPM Technical performance measure
v Voltage (signal or noise) with units [V]
V Volume with units [m3 ]
Vλ Spectral photopic luminous efficiency (unitless)
V′λ Spectral scotopic luminous efficiency (unitless)
Vbias Bias voltage across a device with units [V]
Vd Internal potential in a p-n diode with units [V]
vn Noise expressed as voltage with units [V] or [V/√Hz]
w Energy density with units [J/m3 ]
Appendix A
Reference Information

Table A.1 Definition of radiometric quantities.

Quantity                Direction   Symbol   Definition                  Basic Unit
Energy                  -           Q        -                           [W·s]
Flux                    -           Φ        Φ = dQ/dt                   [W]
Density                 -           w        w = dQ/dV                   [W·s/m³]
Intensity (Pointance)   exitent     I        I = dΦ/dω = d²Q/(dt·dω)     [W/sr]
Exitance (Areance)      exitent     M        M = dΦ/dA                   [W/m²]
Irradiance (Areance)    incident    E        E = dΦ/dA                   [W/m²]
Radiance (Sterance)     spatial     L        L = d²Φ/(dω·cos θ·dA)       [W/(m²·sr)]

where:
exitent               Energy leaving the surface
incident              Energy falling in on a surface
spatial               Energy anywhere in space
time                  t, seconds [s]
volume                V, volume [m³]
solid angle           ω, steradian [sr]
area                  A, area [m²]
difference operator   d, d²
Basic Unit is defined in any of: radiant [W], photon rate [q/s], or photometric [lm]

Table A.2 Physical and mathematical constants. 1,2

Name Magnitude Units


Physical constants
h Planck’s constant 6.62606957 × 10−34 J·s
c Speed of light in vacuum 2.99792458 × 108 m/s
k Boltzmann’s constant 1.3806488 × 10−23 J/K
q Electron charge 1.602176565 × 10−19 C
Absolute zero (0 K) −273.15 ◦C

Melting point of ice ≈273.15 K


Triple point of water 273.16 K
Mathematical constants
π pi 3.14159265358979
e Base of the natural logarithm 2.71828182845905
ζ (3) Apéry’s constant 1.2020569031595942853
a2 Solution to 2(1 − e− x ) − x = 0 1.59362426004004
a3 Solution to 3(1 − e− x ) − x = 0 2.82143937212208
a4 Solution to 4(1 − e− x ) − x = 0 3.92069039487289
a5 Solution to 5(1 − e− x ) − x = 0 4.96511423174429
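The physical constants above can be retrieved programmatically from scipy.constants (Ref. 2), and the an entries are the nonzero roots of n(1 − e−x) − x = 0, which can be recomputed numerically. A minimal sketch, assuming SciPy is installed:

```python
# Sketch: CODATA constants from scipy.constants, and numerical solution
# of n*(1 - exp(-x)) - x = 0 for the a_n constants of Table A.2.
import math
import scipy.constants as const
from scipy.optimize import brentq

print(const.h, const.c, const.k, const.e)  # [J·s], [m/s], [J/K], [C]

for n in range(2, 6):
    # the nonzero root lies between 1 and n (x = 0 is the trivial root)
    root = brentq(lambda x, n=n: n * (1.0 - math.exp(-x)) - x, 1.0, float(n))
    print(f"a{n} = {root:.14f}")
```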

Table A.3 Planck-law constants.3

Name Magnitude Units


Radiation constants
c1e First radiation constant (2πhc2) 3.74177152466413 × 10−16 W·m2
c1q First radiation constant (2πc) 1.88365156730885 × 109 q·m/s
c2 Second radiation constant (hc/k) 1.43877695998382 × 10−2 m·K
Radiation constants, λ expressed in spectral units of [µm]
c1eλ First radiation constant (2πhc2) 3.74177152466413 × 108 W·µm4/m2
c1qλ First radiation constant (2πc) 1.88365156730885 × 1027 q·µm3/(s·m2)
c2λ Second radiation constant (hc/k) 1.43877695998382 × 104 µm·K
Radiation constants, ν̃ expressed in spectral units of [cm−1]
c1eν̃ First radiation constant (2πhc2) 3.74177152466413 × 10−8 W·cm4/m2
c1qν̃ First radiation constant (2πc) 1.88365156730885 × 1015 q·cm3/(s·m2)
c2ν̃ Second radiation constant (hc/k) 1.43877695998382 cm·K
Radiation constants, ν expressed in spectral units of [Hz]
c1eν First radiation constant (2πh/c2) 4.63227628074287 × 10−50 J·s3/m2
c1qν First radiation constant (2π/c2) 6.99098648422864 × 10−17 s2/m2
c2ν Second radiation constant (h/k) 4.79924334848949 × 10−11 s·K
Wien’s displacement law
weλ 106 hc/( a5 k) 2897.77212130396 µm·K
wqλ 106 hc/( a4 k) 3669.70307542088 µm·K
weν̃ a3 k/(100hc) 1.96099843866962 cm−1 /K
wqν̃ a2 k/(100hc) 1.10762425613068 cm−1 /K
weν a3 k/h 5.87892542062926 × 1010 Hz/K
wqν a2 k/h 3.320573982858398 × 1010 Hz/K
Stefan–Boltzmann law
σe Stefan–Boltzmann constant (2π5k4/(15c2h3)) 5.670373 × 10−8 W/(m2·K4)
σq Stefan–Boltzmann constant (4πζ(3)k3/(c2h3)) 1.5204 × 1015 q/(s·m2·K3)
Note: the q in the Units column is not an SI unit; it signifies quanta or photons.
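A minimal Python sketch (illustrative, not from the text) that evaluates the Planck law with the [µm] constants above and cross-checks the Wien-displacement and Stefan–Boltzmann entries by numerical integration:

```python
# Sketch: Planck spectral exitance in W/(m2·um), peak location, and an
# approximate Stefan-Boltzmann cross-check over a truncated spectrum.
import numpy as np

c1e = 3.74177152466413e8  # first radiation constant [W·um4/m2]
c2 = 1.43877695998382e4   # second radiation constant [um·K]
T = 300.0                 # source temperature [K]

wl = np.linspace(0.5, 100.0, 100000)               # wavelength [um]
M = c1e / (wl**5 * (np.exp(c2 / (wl * T)) - 1.0))  # exitance [W/(m2·um)]

print("peak:", wl[np.argmax(M)], "um; Wien:", 2897.77212130396 / T, "um")
print("integral:", np.trapz(M, wl), "W/m2; sigma*T^4:", 5.670373e-8 * T**4)
```

The integral falls slightly below σT4 because the spectral range is truncated at 0.5 and 100 µm.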

Table A.4 Relative spectral efficiency for photopic and scotopic vision.4,5

Wavelength Photopic Scotopic Wavelength Photopic Scotopic
λ [nm] Vλ V′λ λ [nm] Vλ V′λ
380 4.000 × 10−5 5.890 × 10−4 590 7.570 × 10−1 6.550 × 10−2
390 1.200 × 10−4 2.209 × 10−3 600 6.310 × 10−1 3.325 × 10−2
400 4.000 × 10−4 9.290 × 10−3 610 5.030 × 10−1 1.593 × 10−2
410 1.200 × 10−3 3.484 × 10−2 620 3.810 × 10−1 7.370 × 10−3
420 4.000 × 10−3 9.660 × 10−2 630 2.650 × 10−1 3.335 × 10−3
430 1.160 × 10−2 1.998 × 10−1 640 1.750 × 10−1 1.497 × 10−3
440 2.300 × 10−2 3.281 × 10−1 650 1.070 × 10−1 6.770 × 10−4
450 3.800 × 10−2 4.550 × 10−1 660 6.100 × 10−2 3.129 × 10−4
460 6.000 × 10−2 5.672 × 10−1 670 3.200 × 10−2 1.480 × 10−4
470 9.100 × 10−2 6.756 × 10−1 680 1.700 × 10−2 7.160 × 10−5
480 1.390 × 10−1 7.930 × 10−1 690 8.200 × 10−3 3.533 × 10−5
490 2.080 × 10−1 9.040 × 10−1 700 4.100 × 10−3 1.780 × 10−5
500 3.230 × 10−1 9.817 × 10−1 710 2.100 × 10−3 9.140 × 10−6
510 5.030 × 10−1 9.966 × 10−1 720 1.050 × 10−3 4.780 × 10−6
520 7.100 × 10−1 9.352 × 10−1 730 5.200 × 10−4 2.546 × 10−6
530 8.620 × 10−1 8.110 × 10−1 740 2.500 × 10−4 1.379 × 10−6
540 9.540 × 10−1 6.497 × 10−1 750 1.200 × 10−4 7.600 × 10−7
550 9.950 × 10−1 4.808 × 10−1 760 6.000 × 10−5 4.250 × 10−7
560 9.950 × 10−1 3.288 × 10−1 770 3.000 × 10−5 2.413 × 10−7
570 9.520 × 10−1 2.076 × 10−1 780 1.500 × 10−5 1.390 × 10−7
580 8.700 × 10−1 1.212 × 10−1
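A radiant quantity converts to its photometric counterpart through weighting by Vλ and the maximum photopic efficacy of 683 lm/W. A minimal sketch for a monochromatic source (the 2-W, 550-nm source is a made-up example; the Vλ value is read from the table):

```python
# Sketch: radiant-to-photometric conversion at a single wavelength.
Km = 683.0     # maximum photopic luminous efficacy [lm/W]
V_550 = 0.995  # photopic V_lambda at 550 nm (Table A.4)
Phi_e = 2.0    # radiant flux of a 550-nm source [W]

Phi_v = Km * V_550 * Phi_e  # luminous flux [lm]
print(f"luminous flux = {Phi_v:.1f} lm")
```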
Table A.5 Infrared detector materials (1).7,8

Parameter Material
Ge InSb GaAs InP
Bandgap Eg [eV] 0.742 0.235 1.519 1.4236
Refraction index 4 3.3 4 3.1
Varshni parameter A [eV/K] 0.48 × 10−3 0.32 × 10−3 0.5405 × 10−3 0.363 × 10−3
Varshni parameter B [K] 235 170 204 162
Electron mobility [cm2 /(V·s)] at 300 K ≤3900 ≤7.7 × 104 ≤8500 ≤5400
Hole mobility [cm2 /(V·s)] at 300 K ≤1900 ≤850 ≤400 ≤200
Electron lifetime τe [s] 10−3 10−10 5 × 10−9 10−8
Hole lifetime τh [s] 10−3 10−6 2.5 × 10−7 10−6
Electron effective mass me∗ /m0 0.22 0.0135 0.067 0.0795
Hole effective mass m∗h /m0 0.33 0.43 0.45
Lattice constant a0 [Å] at 300 K 5.6557 6.4794 5.65325 5.8697
Lattice constant temperature coefficient da0 /dT [Å/K] 3.48 × 10−5 3.88 × 10−5 2.79 × 10−5
Electron diffusion constant [cm2/s] ≤100 ≤2 × 103 ≤200 ≤130
Hole diffusion constant [cm2 /s] ≤50 ≤22 ≤10 ≤5
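The Varshni parameters in this table describe the bandgap temperature dependence Eg(T) = Eg(0) − AT2/(T + B). A minimal sketch (the 300-K GaAs evaluation is an illustrative example) that also derives the cutoff wavelength λc = 1.23984/Eg µm:

```python
# Sketch: Varshni bandgap shift and the resulting cutoff wavelength.
def varshni(Eg0, A, B, T):
    """Bandgap [eV] at temperature T [K] from the 0-K gap Eg0 [eV]."""
    return Eg0 - A * T**2 / (T + B)

# GaAs values from Table A.5: Eg(0 K) = 1.519 eV, A = 0.5405e-3, B = 204
Eg = varshni(1.519, 0.5405e-3, 204.0, 300.0)
lam_c = 1.23984 / Eg  # cutoff wavelength [um], from hc/q = 1.23984 eV·um
print(f"GaAs: Eg(300 K) = {Eg:.4f} eV, cutoff = {lam_c:.3f} um")
```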

Table A.6 Infrared detector materials (2).9,10

Parameter Material
Si Ge InSb HgCdTe GaAs
(x=0.22)
Bandgap Eg at 300 K [eV] 1.107 0.67 0.163 0.102 1.35
Bandgap Eg temp. coeff. dE0 /dT [eV/K] −2.3 × 10−4 −3.7 × 10−4 −2.8 × 10−4 +3.0 × 10−4 −5 × 10−4
Electron mobility μe [cm2 /(V·s)] 1900 3800 78000 3 × 105 at 77 K 8800
Hole mobility μh [cm2 /(V·s)] 500 1820 750 1 × 103 at 77 K 400
Electron mobility coeff. dμe /dT [cm2 /(V·s·K)] -2.6 -1.66 -1.6 -1
Hole mobility coeff. dμh /dT [cm2 /(V·s·K)] -2.3 -2.33 -2.1 -2.1
Electron diffusion constant [cm2 /s] 35 100 220
Hole diffusion constant [cm2 /s] 12.5 50 10
Intrinsic carrier concentration [cm−3 ] 1.38 × 1010 2.5 × 1013 2 × 1013 2 × 106
Electron effective mass me∗ /m0 1.1 0.55
Hole effective mass m∗h /m0 0.56 0.37
Refraction index near bandgap 3.42 4 4 3.65
Lattice constant a0 [Å] at 300 K 5.43072 5.65754 6.47877 6.47 5.65315
Absorption coefficient @ Eg αλc [m−1 ] 8 × 104
Absorption coefficient α0 [m−1 eV−1/2 ] 1.9 × 106
Bandgap EgΓ at 0 K [eV] from Piprek10 4.34 0.8893 0.235 1.519
Varshni parameter AΓ [eV/K] from Piprek10 0.391 × 10−3 0.6842 × 10−3 0.32 × 10−3 0.5405 × 10−3
Varshni parameter BΓ [K] from Piprek10 125 398 170 204

[Figure: CIE 1931 xy chromaticity chart; x and y axes span 0 to 0.9. The monochromatic colors lie on the spectral locus, labeled from 0.4 µm (violet) through 0.47 µm (blue), 0.49 µm (cyan), 0.52 µm (green), 0.55 µm (lime), 0.58 µm (yellow), and 0.60 µm (orange) to 0.7 µm (red). The Planck-law locus is labeled at 500, 1000, 1500, 2000, 2500, 3000, 4000, 6000, and 10 000 K (white) and 1 × 1010 K. Color coordinates in the unshaded area are valid; coordinates in the shaded area are invalid.]
Figure A.1 CIE xy color chart. See Wikipedia6 for a color rendition.
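The chart plots chromaticity coordinates, which are the normalized tristimulus values x = X/(X + Y + Z) and y = Y/(X + Y + Z). A minimal sketch (the tristimulus values are made up, chosen to land near the white point):

```python
# Sketch: tristimulus values to chromaticity coordinates.
X, Y, Z = 95.0, 100.0, 108.0  # hypothetical tristimulus values

s = X + Y + Z
x, y = X / s, Y / s  # chromaticity coordinates on the Fig. A.1 chart
print(f"x = {x:.4f}, y = {y:.4f}")
```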

[Figure: five panels of atmospheric transmittance versus wavelength (0.1–10 µm), with the H2O and CO2 absorption features marked and the spectral bands indicated: Visible 0.4–0.7 µm, Near IR 0.7–1.5 µm, SWIR 1.5–2.5 µm, MWIR 3–5 µm, LWIR 8–12 µm.
(1) Varying visibility: 20-, 5-, and 1-km visibility, rural aerosol; 1-km path, 27 °C, 75% RH, sea level.
(2) Different aerosol types: urban (5-km visibility, 0.5-µm particles), naval aerosol (0.6-km visibility, 4-m/s wind), fog (0.5-km visibility, 3-µm particles), fog (0.2-km visibility, 10-µm particles); 200-m path, 27 °C, 85% RH, sea level.
(3) Different absolute humidities: 6 g/m3 (T = 5 °C), 19 g/m3 (T = 24 °C), 41 g/m3 (T = 38 °C); 1-km path, 90% RH, no aerosol, sea level.
(4) Different rainfall rates: 0, 5, and 25 mm/hr; 1-km path, 27 °C, 75% RH, sea level.
(5) Different altitudes: 1000, 3000, and 10 000 m; 10-km path, 27 °C, 75% RH, rural aerosol, 23-km visibility.]

Figure A.2 Atmospheric transmittance for different climatic conditions.
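For a homogeneous path the curves above scale with range according to Bouguer's law, τ(R) = exp(−γR), so a transmittance known at one range can be scaled to another. For wideband values this power-law scaling is only an approximation. A minimal sketch (the 0.5 transmittance over 1 km is a made-up value):

```python
# Sketch: Bouguer's-law scaling of transmittance to a different path length.
import math

tau_1km = 0.5                     # transmittance over a 1-km path
gamma = -math.log(tau_1km) / 1.0  # attenuation coefficient [1/km]
tau_5km = math.exp(-gamma * 5.0)  # equals tau_1km ** (5.0/1.0)
print(f"gamma = {gamma:.3f} 1/km, tau(5 km) = {tau_5km:.5f}")
```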



Bibliography

[1] Mohr, P. J., Taylor, B. N., and Newell, D. B., “CODATA recommended values of the fundamental physical constants: 2010,” Rev. Mod. Phys. 84(4), 1527–1605 (2012) [doi: 10.1103/RevModPhys.84.1527].

[2] SciPy, “SciPy Reference Guide: Constants (scipy.constants),” http://docs.scipy.org/doc/scipy/reference/constants.html#codata2010.

[3] SpectralCalc, GATS Inc., “Radiance: Integrating the Planck Equation,” http://www.spectralcalc.com/blackbody/integrate_planck.html.

[4] Colour & Vision Research Laboratory, “Colour and Vision Database,” http://www.cvrl.org/index.htm.

[5] Pyradi team, “Pyradi data,” https://code.google.com/p/pyradi/source/browse.

[6] Wikipedia, “CIE 1931 color space,” http://en.wikipedia.org/wiki/CIE_1931_color_space.

[7] Levinshtein, M., Rumyantsev, S., and Shur, M., Handbook Series on Semiconductor Parameters, World Scientific (1996).

[8] Vurgaftman, I., Meyer, J. R., and Ram-Mohan, L. R., “Band Parameters for III–V Compound Semiconductors and their Alloys,” Journal of Applied Physics 89(11), 5815–5875 (2001).

[9] Dereniak, E. L. and Boreman, G. D., Infrared Detectors and Systems, John Wiley & Sons, New York (1996).

[10] Piprek, J., Semiconductor Optoelectronic Devices: Introduction to Physics and Simulation, Academic Press, San Diego, CA (2003).
INDEX


Symbols

α, see absorptance
α, see absorption attenuation coefficient
β, see optical thickness
γ, see attenuation coefficient
Δf, see noise equivalent bandwidth
ϵ, see emissivity
η, see quantum efficiency
ηa, ηb, see image fill efficiency
ηs, see scanning efficiency
λ, see wavelength
λc, see cutoff wavelength
ν, see frequency, optical
ν̃, see wavenumber
ρ, see reflectance
σ, see scattering attenuation coefficient
σ, see surface roughness
σe, see Stefan–Boltzmann constant
σq, see Stefan–Boltzmann constant
τ, see transmittance
Φ, see flux
ψ, see sun geometry factor
ω, see solid angle, geometric
Ω, see solid angle, projected
Ωr, see field of regard
Cυ, see contrast threshold
D, see pupil diameter


D*, see specific detectivity


dA, see elemental area
E, see irradiance
Eg, see bandgap
f, see electrical frequency
f, see focal length
F, see spatial view factor
F#, see f-number
fr, see bidirectional reflection distribution function
f–3 dB, see bandwidth, -3 dB
h, see Planck constant
hν, see photon energy
I, see intensity
Iph, see photocurrent
Isat, see reverse-bias-saturation current
Kλ, see photopic efficacy
kf , see time-bandwidth product
kn, see noise equivalent bandwidth
L, see radiance
M, see exitance
n, see index of refraction
Pd, see probability of detection
Pn, see probability of false detection
q, see absolute humidity
q, see quanta
R, see responsivity
RV, see meteorological range
S, see sensor response
Vλ, see photopic vision
Vλ, see scotopic vision
Zt, see detector preamplifier gain


aberrations 232–235
astigmatism 232
chromatic 232
comatic/coma 232
distortion 235
field curvature 235
spherical 232
absolute humidity 123
absorptance
attenuation coefficient 99 110
detector 242
Kirchhoff’s law 69
material property 27
absorption coefficient
direct transition materials 177
extrinsic semiconductor 178 183
free-carrier 177
indirect transition materials 177
intrinsic semiconductor 178 183
refractive index 176
spectral 177
typical curves 178
Urbach tail 177
advanced model, see lifecycle phases
aerosols 112–116
atmospheric transmittance 113
land 112
manmade 112
maritime 112
meteorological range 127


aerosols (Cont.)
Mie scattering 116
Rayleigh scattering 115
scattering attenuation coefficient 128
afocal optics 236 237
aliasing 146 391 396–398
angle
factor, see spatial view factor
linear 27
solid 28–35
aperture stop 222 232
approximation
bandgap 139
BRDF 81
grey body 285
layered atmosphere 104
Planck law 64
responsivity 415
scattering 114
scotopic efficiency spectral shape 46
Seebeck coefficient 160
solid angle 33 437 441
thin lens 221 225–227
time bandwidth 150
transmittance 108
area
clear aperture 242
dimensional analysis 367
elemental 19
estimation of a flame 288–290 352
example calculations 316 441–447
pixel footprint 269


area (Cont.)
projected 28–35
solid angle 28–35 366
spatial integral 407–409
sun 316
areance, see irradiance and exitance
aspheric lens 237
assumption management 11
atmosphere 108–128
absolute humidity 123
aerosols 112
attenuation 108 110
composition 108
contrast transmittance 124–127
definitions 109
effect on image 268–272
effective transmittance 107
looking up/down 121
meteorological range 127
Mie scattering 116
molecular
absorption 111–112
constituents 111
transmittance 113
overview 110
path radiance 118–121 283
LWIR band 120
MWIR band 119
NIR band 118
visual band 118
radiative transfer codes 129
Rayleigh scattering 115


atmosphere (Cont.)
relative humidity 123
scattering 112 127
scattering modes 114
sky radiance 283 398–401
standard profiles 109
transmittance 113 382
water vapor content 121
windows 116
LWIR band 117
MWIR band 117
NIR band 117
visual band 116
attenuation
atmosphere 108
coefficient 98–99
avalanche detector 198

background 256
background-limited operation 147 183 192
205 211
baffle 223
band-limited noise 142
bandgap 138–139
Varshni approximation 139
bandwidth
– 3 dB 262
Butterworth filter 263
noise equivalent 262
best practices 365–373


bidirectional reflection distribution function (BRDF) 80–83
Cook–Torrance model 82
diffuse reflection 81
measurements 83
mirror reflection 81
modeling approach 82
Phong model 82
reflection signatures 284
specular reflective surface 327
surface roughness 76
blackbody
aperture 75
curves 68 69 72
definition 59
emissivity 65
Kirchhoff’s law 70
laboratory instrument 59
Lambertian source 41
Planck’s law 60–62
Stefan–Boltzmann law 63
Wien’s displacement law 62
Bloch functions 170
bolometer 155–157
construction 155
noise 157
responsivity 156
Boltzmann probability distribution 58
book website xxv 411
Bouguer’s law 98
optical thickness 103


Bouguer’s law (Cont.)


transmittance approximation 108
transmittance scaling 108
Bravais lattice 164
Bunsen burner flame case study
data analysis 350–355
instrument calibration 346–348
measurements 348–350
workflow 345–346
Butterworth filter 262

carrier lifetime 179


case study
Bunsen burner flame 344–355
cloud model 297–300
flame sensor 309–311
flame-area estimation 288
high-temperature flame measurement 295
infrared scene simulation 385–401
infrared sensor radiometry 337–344
laser rangefinder range equation 321–330
low-emissivity surface measurement 295
object appearance in an image 311–314
solar cell 315–321
sun-glint 302
temperature cross-over 300
thermal camera sensitivity 334–337
thermal imaging sensor model 330–334
thermally transparent paints 301
Cassegrain telescope 236 237
solid angle worked example 448


cavity 57 74
emissivity 74
reflectance 74
chief ray 224 230
cloud model case study
measurement 297
model 298–300
relative signature contributions 300
silver-lining factor 298
worked example in Matlab® 451
clutter 256
CODATA constants, see constants
cold finger 337
cold shield 338
design 342
efficiency 341 342
collimator 238–239
color
coordinates 48–51
worked example Python™ 430
normalization 48
Planckian locus 49
Ratio 291 398–401
sensitivity to source spectrum 49
space, CIE 1931 48
xy chart, CIE 381
coma 234
complex lens, see thick lens
concept study, see lifecycle phases
conduction band 168
conductors 170
energy bands 171


configuration factor, see spatial view factor
conservation of radiance 35–37
constants
CODATA 65
mathematical 376
physical 376
Planck law 66 377
contrast
difference 271
inversion 300
radiometric 272
reduction 102
signature 398–401
threshold
Koschmieder 127
World Meteorological Organization 127
transmittance 103
atmosphere 124–127
conversion
radiometric to photometric 47
spectral quantities 26
convolution 265–267
Cook–Torrance BRDF model 82
cos³ 32 449
cos⁴ 33
cryogenic coolers 185
Joule–Thomson 185–186
Stirling 186
crystalline materials 163–179
acceptor doping 172
basis 164


crystalline materials (Cont.)


conductors 170
donor doping 172
energy bands 165–170
insulators 170
lattice 164
n-type material 171
p-type material 172
pentavalent 171
photon-electron interactions 174–176
physical parameters 379–380
semiconductors 170
band structure 169–170
intrinsic and extrinsic materials 171–174
light absorption 176–178
structure 164
tetravalent 170 171
cutoff wavelength 138

data analysis 292–295


imaging-camera example 350–355
workflow example 345–346
definition study, see lifecycle phases
design 2 3
prerequisites 3
process 12
review 6
trade off 1
detection
probability 259 272
probability of false 260


detection (Cont.)
pulse 272–275
pulse example calculation 436
range 267–268 326
range example calculation 326–327
detectivity 147–149 258
specific 148 183 184
205 258
detector
avalanche 198
conductivity 188
configurations 140
cooling 183–187
gas/liquid cryogen 185
radiative 185
thermo-electric 185
cutoff wavelength 138
detection process 136–140
detectivity, see detectivity
dewar 185
effective responsivity 148
filter 338
history 135–136
intrinsic material 173
material parameters 379–380
noise 140–150 183
normalized spectral responsivity 140 243
peak responsivity 140 243
performance modeling 207–210
photoconductive 179 187–193


detector (Cont.)
photon 138–140
detection process 179–183
quantum efficiency 181–183
photovoltaic 179 193–207
preamplifier gain 243
signal voltage 243
spectral responsivity 182 243
technology impact 210–212
thermal 136–138 151–163
wideband responsivity 261
detector-limited operation 205 206
development
optronic sensor systems 385–386
parallel activities 7
phase, see lifecycle phases
product 4
development model, see also lifecycle
phases
dewar 185
difference
contrast 271–272
noise equivalent temperature (NETD) 247 259 332–333
operator 19
diffuse
reflectance 76 81
example visual spectra 50
Phong BRDF model 82
signature components 279–283
reflectance, Phong BRDF 82
shape factor, see spatial view factor


dimensional analysis 367–368


example 454 461 462
discrete ordinates 104
distortion 234
domain
space 256
time 256
doped materials
acceptor doping 172
concentrations 174
donor doping 172
Duntley equations 101

effective, see also normalization


detector responsivity 148
mass
electron 379
hole 379
transmittance 105–108
example humid atmosphere 124
example various sources 107
scaling with range 108
simulation application 393
value normalization 261–262
efficacy
photopic 47
scotopic 47
total luminous 47
efficiency
cold shield 341
image fill 331


efficiency (Cont.)
photopic 47
quantum 139
relative luminous 46
human eye 47 378
scanning 330
scotopic 47
spectral shape approximation 46
solar cell 318
Einstein equation 180
electrical frequency 141
electro-optical system
analysis
example 309–364
pyradi toolkit 411
definition 14
examples 15
functions 221
high-level design 15
major components 14
modeling and simulation 16
multispectral 40
simulation application 385–401
electromagnetic
radiation 20–22
particle model 20
wave model 20
spectrum 21
electron-hole pair 179
elemental area 19


emissivity 65–74
absorptivity 69
atmosphere 120 121
blackbody 59
cavity 74
definitions 70
directional 83–86
example curves 85
in nature 85
gas radiator source 103 310
grey body 71
Kirchhoff’s law 69
low 73
measurement 295–296
path radiance 101–103
practical estimation 287–288 344–355
spectral 71
hemispherical 84
temporal variation 390
thermally transparent paint 301 302
energy bands 165–170
bandgap 165
thermal carrier excitation 183
conduction band 168
Fermi level 166 168
Fermi–Dirac distribution 166
interband transitions 174–176
intraband transitions 174 175
orbitals 165
photon-electron interactions 174–176
semiconductor 169–170
valence band 168


energy bands (Cont.)


wave model 166–169
Bloch functions 170
density of states 166
wave function 167
equivalent path length 99
étendue, see throughput
exitance 24
Lambertian source 41–42
luminous 23
noise equivalent (NEM) 247 259
photon 23 60
Planck’s law 59–62
temperature derivative 60–62
radiant 23 38
relation to radiance 41
source shape 44–45
Stefan–Boltzmann law 63
Wien’s displacement law 62
experimental model, see lifecycle phases
extended target 232 311–314
extinction coefficient, see attenuation
coefficient
extrinsic
detector, see photon detector
detector material 173
eye spectral response 46

1/ f noise 142 145


photoconductive detectors 191
power spectral density 145


false alarm rate (FAR) 260


pulse detection 272–275
calculation in Matlab® 436
calculation in Python™ 436
example calculation 273–275
Fermi level 166 168
Fermi–Dirac distribution 166 167 173
ferroelectric effect 157–158
field
angle 224 240
curvature 234 235
of regard 330
of view (FOV) 227–232 240
small angle 241
stop 223 226–232
figures of merit, see performance measures
fill factor 317
filter
absorption 240
antisolar 297
Butterworth 262
interference 240
multi-spectral 39–41
optical 240
passband 240
spectral 240
function 413–415
spectral response 223 240
stopband 240
transmittance 240


flame
area calculation in Matlab® 434
Bunsen burner 344–355
sensor 309–311
worked example Matlab® 417
worked example Python™ 421
temperature measurement 295
fluctuation noise 146–147
background flux 147
signal flux 146
flux 24
collecting solid angle 230 231
Lambertian source 41
luminous 23
photon 23
radiant 23
system throughput 249
transfer 35–41 70
geometrical construction 36
lossless medium 37–38
lossy medium 38
multi-spectral 39–41
radiative transfer equation 101
worked example 448–451
f-number (f/#) 229–230
clear aperture area 242
optics diameter 242
focal
length 224
plane 223 224
folded optics 236 237
foreground 256


frequency
electrical 141
optical 20
relation to wavelength 20
response
photoconductive detector 190–191
photovoltaic detector 202–203
Fresnel reflectance 77–79
gold surface 85
full-width-half-maximum (FWHM) bandwidth 150

gaseous radiator 70–73 103–104


see also flame
simulation 389
generation–recombination (g-r) noise 144–145
photoconductive detectors 191–192
power spectral density 145
rms noise current 144
golden rules 365–373
Gregorian telescope 236 237
grey body 71–73

I-V curve 196


image 221 268
collimated, see collimator
contrast 102–103 271
flux-collecting solid angle 230
focal plane 223


image (Cont.)
modulation transfer function, see modulation transfer function (MTF)
object appearance 311–314
object relationship 225
optical aberrations, see aberrations
pixel irradiance 268–271
pixels 268
plane 223 227 246
field stop 227
pupil 227
vignetting 227
point spread function, see point spread function (PSF)
probability of detection 274
ray tracing 225
rendering 391–398
resolved object 268–271
simulation, see infrared scene simulation
spatial sampling, see aliasing
unresolved object 268–271
image fill efficiency 331
index of refraction 20
atmosphere 97
chromatic aberration 232
complex 78 176
Fresnel reflectance 78
imaginary component 176
metal 78
numerical aperture (NA) 229
real component 176


index of refraction (Cont.)


Snell’s law 176
wave equation 176
industrialization, see lifecycle phases
infinite conjugates 224 229
clear aperture 242
collimator, see collimator
f-number 229
optics diameter 242
infrared scene simulation 385–401
application 387
benefits 385
image rendering, see rendering
OSSIM 393
radiometric accuracy 392
rendering equation 393–396
effective transmittance 394
signature model 393
spectral calculation 394
spectral discretization 393
wideband calculation 395
scene model
atmospheric attenuation 390
geometry 388
optical signature 388
temperature 390
temporal variation 390
texture 390
inhomogeneous medium 104
insulators 170


intensity 24
Lambertian source 42
luminous 23 46
photon 23
radiant 23 38
interface electronics noise 146
intrinsic carrier concentration 174
intrinsic detector, see photon detector
irradiance 24
apparent 244 265
in an image 268–271
see also object appearance in an image
luminous 23
noise equivalent (NEE) 246 258
see also laser rangefinder example
photon 23
pixel 268
radiant 23 37
insulators
energy bands 171

Johnson noise 142–143


frequency spectrum 143
interface electronics 146
photoconductive detectors 191–193
photovoltaic detectors 204
power spectral density (PSD) 143

Kirchhoff’s law 69


knowledge management 386


Koschmieder 127
Kubelka–Munk theory 100

laboratory
blackbody 59 75
collimator 238–239
Lagrange invariant 249
Lambertian source 41–42
flux, exitance, radiance 41
blackbody 41
definition 41
intensity 42
projected solid angle 42
reflectance 76 81
reflected sun radiance 87
shape 44–45
signature model 279–283
spatial view factor 43
view angle 42
laser rangefinder
detection range 326
example calculation
range equation 326–327
signal-to-noise ratio (SNR) 274
threshold-to-noise ratio (TNR) 274
Lambertian reflective surface 323–325
noise equivalent irradiance (NEE) 321
range equation case study 321–330
signal irradiance 322
specular reflective surface 327–330


laser rangefinder (Cont.)


target optical cross section 324
lifecycle phases 4–7
light models 22
light traps 76
linear angle 27 28
long-wave infrared (LWIR) 65
atmospheric aerosol scattering 127–128
atmospheric window 117
contrast transmittance 125
path radiance 120
luminance 25 46–48
photopic 47
scotopic 47

marginal ray 224 229 230


material properties 27
Matlab® 409
measurement
bidirectional reflection distribution function (BRDF) 76 83
cloud 297
data analysis 292–295
flame example 348–350
instrument calibration 346–348
linear angle 27
spectroradiometer 287
technical performance 255
temperature 73 290–292 295
354


medium 14
absorption attenuation coefficient 99
atmosphere 108–128
attenuation coefficient 99
conducting 78
discrete ordinates 104
equivalent path length 99
homogeneous 98
index of refraction 20
inhomogeneous 99 104–105
lossless 37–38
lossy 38
optical 98–104
optical thickness 103
path radiance 99–103
scattering attenuation coefficient 99
transmittance 38 98 108
medium-wave infrared (MWIR) 65
atmospheric aerosol scattering 127–128
atmospheric window 117
contrast transmittance 125
path radiance 119
mesopic vision 46
meteorological range 127
microbolometer 156–157
Mie scattering 116
minimum detectable temperature (MDT) 259
minimum resolvable temperature (MRT) 259
model 12
atmospheric 129
BRDF, see bidirectional reflection distribution function (BRDF)


model (Cont.)
cloud 297–300
detector 207–210
example 208
discrete ordinates 104
electromagnetic wave 20
imaging sensor 240–245 337–344
light 22
photon particle 22
photovoltaic detectors circuit 200
signature 279–283
solar cell 319–321
solar irradiance 86
source–medium–sensor 14
validation 275
modeling and simulation (M&S) 7 16 385–401
Modtran™
description 129
meteorological range 127
visibility 127
modulation transfer function (MTF) 236 260
multi-spectral 39–41

near-infrared (NIR) 65
atmospheric window 117
path radiance 118
Phong BRDF parameters 285
noise 245–247 256
bolometer 157
considerations in imaging systems 146


noise (Cont.)
equivalent
bandwidth 149–150 262
exitance (NEM) 247 259
irradiance (NEE) 246 258
power (NEP) 147–149 246 258
radiance (NEL) 247 258
reflectance (NER) 259
target contrast (NETC) 335–337
temperature difference (NETD) 247 259 332–333
1/ f 145
fluctuation 146–147
generation–recombination (g-r) 144–145
interface electronics 146
Johnson 142–143
Nyquist, see Johnson noise
photoconductive detectors 191–193
photovoltaic detectors 203–207
physical processes 140
power spectral density 141–142
pyroelectric detector 159
shot 143–144
system 141
temperature-fluctuation 145–146
thermal, see Johnson noise
thermoelectric detector 161
time-bandwidth product 150
normalization 261–263
color coordinates 48
effective value 261–262
peak 262


normalization (Cont.)
spatial 29 261
weighted mapping 263
normalized spectral responsivity 243
n-type material 171
electron concentration 174
numerical aperture (NA) 229 230

object
appearance in an image 311–314
worked example Python™ 424
resolved 268
unresolved 268
open-circuit operation 198 200
optics 223–236
aberrations 232–235
aperture 226
aspheric lens 237
axis 224
chief ray 224 230
collimator 238
conjugates 224
elements 222–224
field
angle 224
stop 226 230
flux collecting 230
f -number 229 230
focal
length 224
plane 223 224


optics (Cont.)
frequency 20
infinite conjugates 224 229
marginal ray 224 229 230
medium 97–104
modulation transfer function (MTF) 236
numerical aperture (NA) 229 230
point spread function (PSF) 235
power 223
principal plane 224
pupil 226–230
ray tracing 225
signature 279–292
model 279–283
rendering 387
spectral filter 240
stray light 227
system 236
afocal 236
Cassegrain 236
Gregorian 236
refractive 236
thick lens 225
thickness 103
thin-lens approximation 224 225
transfer function (OTF) 236 260
vignetting 227 238
Optronics System Simulation (OSSIM) 393
orbitals 165


paraxial approximation, see thin-lens approximation
particle model 20
passband 240
path radiance 99–103 118–121
Duntley equations 100
emissivity 101–103
Kubelka–Munk theory 100
LWIR band 120
MWIR band 119
NIR band 118
visual band 118
Pauli’s exclusion principle 165 166 176
peak responsivity 243
Peltier effect 151 186
performance measures 10 255–261
definition 256
detectivity 258
false alarm rate (FAR) 260
minimum
detectable temperature (MDT) 259
resolvable temperature (MRT) 259
modulation transfer function (MTF) 260
noise equivalent
exitance (NEM) 259
irradiance (NEE) 258
power (NEP) 258
radiance (NEL) 258
reflectance (NER) 259
temperature difference (NETD) 259


performance measures (Cont.)


optical transfer function (OTF) 260
point spread function (PSF) 260
probability of detection 259
probability of false detection 260
role 255
signal-to-clutter ratio (SCR) 257
signal-to-noise ratio (SNR) 257
specific detectivity 258
Phong BRDF model 82
phonon 175–177
photoconductive detector 179 187
bias circuitry 189–190
conductivity 188
frequency response 190–191
geometry 189
noise 191–193
generation–recombination (gr) 192
Johnson 192–193
photoconductive gain 190
quantum efficiency 187
responsivity 189
signal 187–189
photocurrent 179 197 200
202 204 209
photodiode, see photovoltaic detector
photoemissive detector, see photon detector
photometry 23 45–51
units 45
photon 22
absorption 176–178
absorption coefficient 177–178


photon (Cont.)
detector 138–140
noise, see noise
operation 179
quantum efficiency 139
responsivity 139
electon interactions 174
energy 22
wave packet 22
photopic
efficacy 47
efficiency 47
luminance 47
relative spectral efficiency 378
vision 46
photovoltaic detector 179 193
background flux 204
background-limited operation 205
bias configurations 197–202
circuit model 200
open-circuit 200–202
reverse 198–200
short-circuit 202
construction 194
depletion region 194
detector-limited operation
open-circuit mode 206–207
short-circuit mode 205–206
diffusion current 197 204
energy diagrams 195
frequency response 202–203
I-V curve 196–197


photovoltaic detector (Cont.)


noise 203–207
Johnson 204
shot 204
noise equivalent power (NEP) 204
optimal power transfer 202
photocurrent 204
p-n junction 194
quantum efficiency 196
resistance 204
responsivity 196
reverse-bias-saturation current 197
specific detectivity 205
thermally generated current 204
vs photoconductive detector 194
photovoltaic detectors
energy bands 198
physical and mathematical constants 376
pixel 268
irradiance in an image 268–271
signal magnitude 268
Planck
constant 22
exitance function Matlab® 412
exitance function Python™ 412
law 57–65
constants 377
derivative exitance 60–62
exitance 60–62
integrated 63
maximum 62
summary 65


Planck (Cont.)
summation approximation 64
radiator 59 65
Planckian locus 49
plume 103
effective transmittance 106
surface radiator 104
volume radiator 104
p-n diode, see photovoltaic detector
p-n junction 194
point spread function (PSF) 235 260
point target 232
Poisson statistics 144
power spectral density (PSD) 141–142
1/ f noise 142 145
band-limited noise 142
combining spectra 149
generation–recombination (g-r) noise 145
Johnson noise 143
shot noise 143
temperature-fluctuation noise 145
white noise 142
principal plane 224
probability of detection 259
probability of false detection 260
prototype, see lifecycle phases
p-type material 172
hole concentration 174
pulse detection 272–275
calculation in Matlab® 436
calculation in Python™ 436
false alarm rate 272–275


pupil 226–230
diameter 229 230
pyradi toolkit 411
pyroelectric detector 157–159
noise 159
responsivity 159
structure 158
Python™ 409

quanta 20
quantum efficiency 139 181 182
external 181
anti-reflection coatings 182
reflection 181
internal 181
photoconductive detector 187
quantum well detector (QWIP), see photon detector

1/R2 losses 311–314


radiance 24
atmospheric path 283
basic 37
conservation 35–37
Lambertian source 41
luminous (luminance) 25
photon 25
radiant 25


radiance (Cont.)
reflected
ambient 283
sky 283
solar 283
self-emitted 281
signature model 279–283
spatial invariance 36
transfer 35–41
transmitted background 283
radiative transfer equation (RTE) 97 101
radiator
gaseous 70 103
grey body 71
Planck 59 65
selective 71
surface 104
thermal 285–292
volume 104
radiometer measurements
atmospheric correction 267
spectral radiance 287
radiometric quantities 375
radiometry 22
definition xxv
nomenclature 23
quantities 24
techniques 255–276
range equation 267
solved in Python™ 435


ray
chief 224
marginal 224
tracing 225
Rayleigh scattering 115
reductionism xxiii
reflectance
bidirectional 76 80–83
cavity 74
diffuse 76 81 82
directional 75–83
in nature 85
Fresnel 77–79
geometry 77
high 73
Lambertian 76 81
material property 27
mirror 81
Snell’s law 77 79 405
specular 76 82
refractive index, see index of refraction
relative humidity (RH) 123
relative luminous efficiency 46
photopic 46
scotopic 46
rendering 387–398
aliasing 391 396–398
rasterization 391
priority fill algorithm 391
side-effects 393
z-buffering 391
super-sampling 392 396–398


requirement allocation 8
response
eye 46
filter 223 240
frequency 150
complex valued optical 236
photoconductive detector 190–191
photovoltaic detector 202–203
impulse 235 260
normalizing 140
spatial frequency 260
spectral weighting 106 244 263–264
system 246
thermal detector 136
unlimited 161
responsivity
bolometer 156
normalized 140 243
peak 140 243
photoconductive detector 189
photon detector 139
pyroelectric detector 159
spectral 140 243
thermal detector 136 152–154
thermoelectric detector 161
reverse-bias operation 198
reverse-bias-saturation current 197 319
review, see design
root-mean-square (rms) 257

scanning efficiency 330


scattering
atmosphere 112
aerosols 112
attenuation coefficient 99 110
Mie 116 134
Rayleigh 115 134
scattering modes 114
Schrödinger equation 169
scotopic
efficacy 47
efficiency 47
luminance 47
relative spectral efficiency 378
vision 46
Seebeck coefficient 160
selective radiator, see gaseous radiator
semiconductors
current flow 179
carrier diffusion 179
carrier drift 180
charge mobility 180
diffusion constant 180
diffusion current 180
diffusion current density 180
drift current 180
drift current density 180
energy bands 171
structure 169–170
extrinsic materials 171
concentrations 173
examples 174
Fermi energy level 173


semiconductors (Cont.)
Fermi–Dirac distributions 173
intrinsic materials 171
concentrations 173
examples 173
Fermi energy level 173
intrinsic carrier concentration 174
light absorption 176–178
material parameters 379–380
Schrödinger equation 169
silicon lattice 172
wave equation 176
sensor 14
aperture stop 222
field stop 223
noise model 330–334
optical
elements 222
model 240
throughput 248–250
optimization worked example Matlab® 459
radiometric model 242–245 337–344
complex source 245
detector signal 242
source area variations 244
signal calculations 242–245
complex source 245
detector 242
source area variations 244
solid angle
field of view 230
flux-collecting 230


sensor (Cont.)
spatial angles 230
spectral
filter 223
response 223 243
stops/baffle 223
terminology 221–223
window 222
worked example 450
sharing xxiii
short-circuit operation 198 202
short-wave infrared (SWIR) 65
shot noise 143–144
interface electronics 146
photovoltaic detectors 203 204
power spectral density 143
signal 256
reference planes 245
electronics plane 246
image plane 246
object plane 246
optics plane 246
voltage 243
signal-to-clutter ratio (SCR) 257
signal-to-noise ratio (SNR) 257
signature
model 279–283
atmospheric path radiance 283
BRDF 284
equation 281
main contributors 280
reflected ambient radiance 283


signature (Cont.)
reflected sky radiance 283
reflected solar radiance 283
self-emitted radiance 281
spatial properties 279
terminology 282
thermal radiator 285–292
transmitted background radiance 283
reflected vs emitted contribution 283
rendering 387
thermal radiation from common objects 65
silicon detector 139
simulation 385–401
knowledge management 386
validation 386
sky radiance 283 398–401
Snell’s law 79 176 403–405
solar cell analysis 315–321
configuration 318
experimental measurement 315
model 319–321
radiometry 317
solid angles 316
source areas 316
solid angle 28–35
approximation 33
worked example Matlab® 441
Cassegrain telescope example 448
field of view 230
flux collecting 230
geometric 28


solid angle (Cont.)


cone 29
flat rectangular surface 32
projected 29 41
cone 31
flat rectangular surface 32
sphere 34
sensor 230
source area 366
source 14
gaseous 103
Lambertian 41–42
shape, see Lambertian source, shape
space domain 256
spatial
integral 38 407
calculation in Matlab® 437
view factor 43
specific detectivity 148 183 184
205 258
photon-noise-limited thermal detector 161
specification hierarchy 8
specifications 8–10
spectral
band (NIR, MWIR, SWIR, LWIR) 65
calculations 264–267
convolution 265–267
detector function 415–417
in Matlab® 415
in Python™ 416
domains 25


spectral (Cont.)
emissivity 71
measurement 288
filter 223 240
filter function 413–415
in Matlab® 413
in Python™ 414
filtering 39
integral 407
integration, summation 26
mismatch 264
quantities 25
conversion 26
density 25
response
eye 46
filter 240
photon and thermal detectors 138
sensor 223 246
responsivity 243
weighting 106 244 263–264
spectroradiometer 287
specular reflectance 76 82
Stefan–Boltzmann law 63
stopband 240
subsystem 2
sun 86
area 316
geometry factor 87
glint 302
reflected radiance 283 398–401
surface radiator 104


surface roughness 75
scale 76
system 2
acceptance, see lifecycle phases
context 1
engineering 2
noise 245–247
performance measures 255–261
segment, see subsystem
source–medium–sensor model 242–245
V-chart 8

target
extended 268
point 268
technical performance measure (TPM) 10
telescope
Cassegrain 236
Gregorian 236
temperature
apparent 73
cross-over 300
estimation of a flame 290–292
minimum detectable 259
minimum resolvable 259
noncontact measurement 73
radiation 73
temperature-fluctuation noise 145–146
flux 146
power spectral density 145


thermal detector 136–138 151–163


bolometer 155–157
conceptual model 152
noise, see noise
overview 151–152
photon-noise-limited 161–163
noise equivalent temperature difference (NETD) 163
specific detectivity 161–163
pyroelectric 157–159
responsivity 136 152–154
temperature-fluctuation-noise-limited 163
noise equivalent temperature difference (NETD) 163
specific detectivity 163
thermoelectric 159–161
thermal imager
sensitivity 334–337
sensor model 330–334
assumptions 330
electronic parameters 330
example calculation 333
flux on the detector 337–339
focused optics 339–342
noise 331–333
out-of-focus optics 342–344
thermal radiator, see grey body, see Planck radiator
thermal radiator model 285–292
area estimation 288
emissivity estimation 287


thermal radiator model (Cont.)


process 287
temperature estimation 290
thermally transparent paint 301
thermocouple
equation 161
gas measurement 354
thermodynamic equilibrium 57
thermoelectric coolers 186–187
thermoelectric detector 159–161
layout 160
noise 161
responsivity 161
thick lens 225–227
thin-lens approximation 221 224–227
throughput 248–250
time domain 256
time-bandwidth product 150
transfer function
modulation (MTF) 236 260
optical (OTF) 236 260
transmittance
atmospheric windows 116
LWIR band 117
MWIR band 117
NIR band 117
visual band 116
background 283
Bouguer’s law 98
contrast 103
effective 105
filter 240


transmittance (Cont.)
homogeneous medium 98
inhomogeneous medium 99
material property 27
medium 38 98
range 108
two-flux Kubelka–Munk 100

up/down atmospheric radiance 121


Urbach tail 177

valence band 168


validation 275 386
value system 11
Varshni approximation 139
V-chart, see system
vignetting 222 227
collimator beam 238 239
control of 228
in practical design 341–342
visibility, see meteorological range
vision
mesopic 46
photopic 46
scotopic 46
visual spectral band
atmospheric window 116
contrast transmittance 125
path radiance 118


volume radiator 104

wave model
electronic 166
Bloch functions 170
field strength 176
velocity 176
wave equation 176
light 20
wave packet 22
wavefront 20
wavelength 20
cutoff 138
relation to frequency 20
relation to wavenumber 26
spectral density conversion 26
wavenumber 25
website xxv 411
white noise 142
white point 49 92
Wien’s displacement law 62–63 67

zero field angle, see optical axis
