(Gnss Technology and Applications) Thomas Pany - Navigation Signal Processing For GNSS Software Receivers - Artech House Publishers (2010)
A catalog record for this book is available from the U.S. Library of Congress.
ISBN-13: 978-1-60807-027-5
All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
10 9 8 7 6 5 4 3 2 1
Contents
Preface xiii
Acknowledgments xvii
Chapter 1
Radio Navigation Signals 1
1.1 Signal Generation 1
1.2 Signal Propagation 2
1.3 Signal Conditioning 3
1.4 Motivation for a Generic Signal Model 4
1.5 Sampling 5
1.6 Deterministic Received Signal Model 6
1.7 Stochastic Noise Model 6
1.8 Short-Period Signal Model 7
1.8.1 Zeroth-Order Moment of Signal Power 8
1.8.2 First-Order Moment of Signal Power 8
1.8.3 Second-Order Moment of Signal Power 9
1.8.4 First-Order Moment of Signal Power Variations 9
1.8.5 Separation of Code and Carrier Correlation 10
1.9 Exemplary Signals 11
1.9.1 A Model for the GPS C/A-Code Signal 11
1.9.2 A Model for the Galileo E1 Open-Service Signal 13
1.9.3 Pulsed GNSS Signals 14
1.9.4 Gaussian Double Pulse 15
References 16
Chapter 2
Software-Defined Radio 17
2.1 Definitions 17
2.2 Communication Radios 19
2.2.1 GNU Radio 19
2.2.2 Joint Tactical Radio System 19
2.3 GNSS Software Receivers 22
2.3.1 Front Ends 22
2.3.2 Illustrative Applications 25
2.3.3 High-End GNSS Software Receivers 28
2.4 Technology Evaluation and Discussion 30
References 30
Chapter 5
Signal Detection 129
5.1 Detection Principles 129
5.1.1 Simple Hypothesis Testing 130
5.1.2 Composite Hypothesis Testing 131
5.2 Detection Domains 133
5.2.1 Pseudorange Domain Detection 133
5.2.2 Position Domain Detection 133
5.3 Preprocessing 133
5.4 Clairvoyant Detector for Uniformly Distributed Phase 134
5.5 Energy Detector 137
5.6 Bayesian Detector 138
5.7 Generalized Likelihood-Ratio Detector 140
5.7.1 Single Coherent Integration 141
5.7.2 Multiple Coherent Integrations 142
5.7.3 Considering Navigation Signal Interference 147
5.7.4 Data and Pilot 149
5.8 System-Detection Performance 154
5.8.1 Idealized Assumptions 155
5.8.2 Mean Acquisition Time 155
Chapter 6
Sample Preprocessing 163
6.1 ADC Quantization 163
6.1.1 Quantization Rule 163
6.1.2 Matched Filter 165
6.1.3 Evaluation of Expected Values 167
6.1.4 Infinite Number of Bits 169
6.1.5 Numerical Evaluation 170
6.2 Noise-Floor Determination 174
6.3 ADC Requirements for Pulse Blanking 174
6.3.1 Front-End Gain and Recovery Time 175
6.3.2 Pulse Blanking 175
6.3.3 ADC Resolution 176
6.4 Handling Colored Noise 178
6.4.1 Spectral Whitening 178
6.4.2 Modified Reference Signals 179
6.4.3 Overcompensation of the Incoming Signal 180
6.4.4 Implementation Issues 180
6.5 Sub-Nyquist Sampling 180
References 182
Chapter 7
Correlators 185
7.1 Correlator and Waveform-Based Tracking 185
7.2 Generic Correlator 187
7.2.1 Expected Value 188
7.2.2 Covariance 189
7.2.3 Variance 191
7.3 Correlator Types with Illustration 191
7.3.1 P-Correlator 192
7.3.2 F-Correlator 193
7.3.3 D-Correlator 194
7.3.4 W-Correlator 194
7.4 Difference Correlators 197
7.4.1 Single-Difference P-Correlators 197
7.4.2 Double-Difference P-Correlators 199
7.5 Noisy Reference Signal for Codeless Tracking 200
7.5.1 Expected Value 202
7.5.2 Covariance 202
Chapter 8
Discriminators 217
8.1 Noncoherent Discriminators 217
8.1.1 Code Discriminator 217
8.1.2 Doppler Discriminator 221
8.1.3 Phase Discriminator 223
8.1.4 Clipping 225
8.2 S-Curve Shaping 225
8.2.1 Code-Discriminator Performance Characteristics 226
8.2.2 Optimum S-Curve 227
8.2.3 Frequency-Domain S-Curve Shaping 228
8.2.4 Discussion 231
8.3 Multipath Estimating Techniques 231
8.3.1 The LSQ Equations 232
8.3.2 Calibration 235
8.3.3 General Procedure 235
8.3.4 Correlator Placement 236
8.3.5 Initial Values 236
8.3.6 Number of Required Iterations 237
8.3.7 Multipath Detection 237
8.3.8 Discussion 238
8.4 From Discriminator Noise to Position Accuracy 238
References 239
Chapter 9
Receiver Core Operations 241
9.1 Test-System Configuration 241
9.2 Signal-Sample Bit Conversion 242
9.2.1 Algorithm 243
9.2.2 Numerical Performance 244
9.2.3 Discussion and Other Algorithms 245
9.3 Resampling 245
9.3.1 Algorithm 245
9.3.2 Numerical Performance 245
9.3.3 NCO Resolution 246
9.3.4 Discussion and Other Algorithms 248
9.4 Correlators 248
9.4.1 SDR Implementation 249
Chapter 10
GNSS SDR RTK System Concept 277
10.1 Technology Enablers 277
10.1.1 Ultra-Mobile PCs 277
10.1.2 Cost-Effective High-Rate Data Links 278
10.2 System Overview 279
10.2.1 Setup 279
10.2.2 Sample Applications 280
10.2.3 Test Installation and Used Signals 280
10.3 Key Algorithms and Components 281
10.4 High-Sensitivity Acquisition Engine 281
10.4.1 Doppler Search Space 282
10.4.2 Correlation Method 284
10.4.3 Clock Stability 284
10.4.4 Line-of-Sight Dynamics 287
10.4.5 Flow Diagram and FFT Algorithms 287
10.4.6 Acquisition Time 288
10.5 Assisted Tracking 289
10.5.1 Vector-Hold Tracking 290
10.5.2 Double-Difference Correlator 291
10.6 Low-Cost Pseudolites 297
10.6.1 Continuous-Time Signals 299
10.6.2 Pulsed Signals 299
10.7 RTK Engine 304
References 305
Chapter 11
Exemplary Source Code 307
11.1 Intended Use 307
Appendix
A.1 Complex Least-Squares Adjustment 311
A.1.1 Definitions 311
A.1.2 Probability Density Function 312
A.1.3 The Adjustment 312
A.1.4 Real- and Complex-Valued Estimated Parameters 314
A.1.5 A Posteriori Variance of Unit Weight 315
A.1.6 Example 318
A.1.7 Discussion 320
A.2 Representing Digital GNSS Signals 320
A.2.1 Complex-Valued Input Signal 320
A.2.2 Real-Valued Input Signal 321
A.2.3 Comparing Real- and Complex-Valued Signals 322
A.3 Correlation Function Invariance 326
A.4 Useful Formulas 329
A.4.1 Fourier Transform 329
A.4.2 Correlation Function 331
A.4.3 Correlation with an Auxiliary Function 332
A.4.4 Correlation with Doppler 333
A.4.5 Correlation in Continuous Time 334
A.4.6 Probability Density Functions 336
References 338
Abbreviations 339
List of Symbols 343
About the Author 345
Index 347
Preface
treatment also gives hints for optimal algorithms; useful examples that are discussed
are spectral whitening and the least-squares-based multipath-estimating discrimina-
tor. Efficient algorithms are found in the frequency domain for signal acquisition,
which itself would justify the effort of going into theoretical details. Furthermore,
the theoretical analysis points out that new developments could be expected in the
field of direct-position estimation (in a single-step procedure, instead of estimating
a position via pseudoranges), which should give advantages in terms of interference
robustness and sensitivity. Sensitivity might be further increased by using Bayesian
techniques (like a particle filter) that do not rely on a linearized signal model.
Unfortunately, the existing navigation signal-processing theory has limits and does not always provide an optimal algorithm for detection or estimation. Examples are the nonexistence of a uniformly most powerful detector for acquisition and the nonexistence of a minimum-variance unbiased code-phase or Doppler estimator for finite received signal power. In addition, the practical usability of Bayesian techniques within signal processing (apart from the Kalman filter) is not completely assessed. Overall, it seems that a theoretically optimal navigation receiver is out of reach today, even if only signal processing is considered. However, software radio technology closes the gap between existing theory and real implementation.
Overview
Within this text, the navigation signal processing theory is described for generic navi-
gation signals to allow a broad range of applications, beyond that of GNSS. Require-
ments for navigation signals are introduced in Chapter 1 and are illustrated with one
GPS, one Galileo, and two pulsed signals. Software-defined radio technology will be
introduced in Chapter 2, together with the architecture and the data flow of a per-
manent GNSS reference station in Chapter 3. Chapters 4 and 5 focus on theoretical
signal-processing aspects and Chapters 6 through 9 shift the focus to implementation.
An innovative high-precision software radio concept is presented in Chapter 10 using
double-difference correlators, in addition to double-difference pseudorange and carrier-
phase observations to increase carrier-phase tracking stability for real-time kinematic
applications. Finally, MATLAB and assembler programs that illustrate the core signal-processing concepts of a navigation receiver are available on the Artech House Web site, www.artechhouse.com. Chapter 11 describes this software.
This work would not have been possible without the support from numerous co-
workers and colleagues. I am especially grateful to the researchers of the University
of Federal Armed Forces in Munich, to the researchers at the IFEN GmbH, and to
many colleagues from research institutes from all over the world.
I am grateful to Professor Günter W. Hein for continually encouraging me to
enter this field, for his uninterrupted belief in technology, and for showing me ways
of going beyond limits. Professor Bernd Eissfeller established the basis for GNSS
receiver technology research at the Institute of Geodesy and Navigation. His contribution to this work cannot be overstated. I would also like to thank Professor
Jörn Thielecke for fruitful discussions. With his knowledge on communication and
navigation signal processing, he showed me several important links between both
fields.
Radio Navigation Signals
Here, fRF is the carrier frequency in hertz, c∞(t) is the infinite-bandwidth signal representation at baseband, and d(t) represents a broadcast navigation message. The transmitted signal is described in the signal-in-space (SIS) interface control document (ICD) [1, 2]. The symbol aS denotes the signal amplitude in arbitrary units.
Other signals might be transmitted on the same carrier frequency by the same satellite. For example, the Global Positioning System (GPS) C/A signal is broadcast in phase quadrature with the P(Y) signal on the L1 (1,575.42 MHz) carrier frequency. The access to the different signals is controlled via c∞(t). Different waveforms of c∞(t) can be used to realize code division multiple access (CDMA), time division multiple access (TDMA), or, by including a carrier into c∞(t), frequency division multiple access (FDMA) schemes.
With the combined filter, the signal is mathematically written as
with
and similarly for d(t). The phase delay, τP, affects the carrier via
with
The coefficient αD,

\alpha_D = 1 - \frac{v}{c} \qquad (1.8)

is the Doppler effect, caused by the change in the group/phase delay, expressed as velocity v in meters per second. The linearization is carried out around the epoch t0. If the signal duration Tcoh under consideration is short, the Doppler effect on c(t), d(t) can be ignored, as in

\frac{c}{vB} \gg T_{coh} \;\Rightarrow\; c(\alpha_D t - \tau_{G,0}) \approx c(t - \tau(t_0)) \qquad (1.9)

and similarly for d(t), where B denotes the signal (or data message) bandwidth in hertz.
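As a quick numerical illustration (the velocity, bandwidth, and integration time below are assumed, representative values and are not taken from the text), the following MATLAB fragment evaluates (1.8) and checks the condition (1.9):

    % Illustrative check of (1.8) and (1.9); all values are assumed examples.
    c    = 299792458;        % speed of light [m/s]
    v    = 800;              % line-of-sight velocity [m/s], high-dynamics example
    B    = 2e6;              % signal (code) bandwidth [Hz]
    Tcoh = 1e-3;             % coherent integration time [s]

    alphaD = 1 - v/c;        % Doppler coefficient of (1.8)
    lhs    = c/(v*B);        % left-hand side of condition (1.9) [s]

    fprintf('alpha_D = %.9f\n', alphaD);
    fprintf('c/(vB) = %.3g s >> Tcoh = %.3g s ? %d\n', lhs, Tcoh, lhs > 100*Tcoh);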
After signal reception by the antenna, the signal is amplified, filtered, and eventually
downconverted. Amplification changes the amplitude of the signal (from aS to a),
but leaves the signal structure invariant.
The front-end filter limits the bandwidth of the received navigation signals and of the received noise. It also rejects out-of-band signals. The front-end filter is typically of lower bandwidth than the output filter, and neglecting the output filter is a reasonable approximation. The resulting filter is a bandpass filter and is described by its baseband equivalent H via
For signal estimation and detection, it is largely irrelevant at which center fre-
quency the filter operates; it can be placed at the RF, at the IF, or at baseband. Nor-
mally, filters with discrete components or SAW filters operating at the IF are used,
but discrete polyphase filters at baseband have also been utilized. Global navigation
satellite system (GNSS) receivers normally do not integrate the filter into a chip
solution.
Downconversion changes the signal carrier model by
arrives at the front end’s ADC(s). The ADC(s) either quantize(s) the real and the
imaginary part of the signal or quantizes only one of them (see Appendix A.2).
pulsed signals are included because they are used by pseudolites, by LORAN-C, or by radar-like ranging systems. In principle, the theory can also be adapted for sonar ranging systems and, with some limitations, for optical ranging systems. Only one frequency band is considered (e.g., GPS L1); a generalization from a sampling of one frequency band to multiple frequency bands is obvious, but one should take care that all bands are sampled synchronously.
No assumption on the relation of the signal bandwidth to the sampling rate is
made and, in particular, sub-Nyquist sampling rates can be used. No assumption on
the modulation scheme is made as long as the received signal waveform at baseband
c(t) is known a priori to the receiver. Ultimately, filters influence the theory via the waveform c(t). The narrower the filter bandwidth, the smoother the waveform will be.
The filter bandwidth and characteristics define how much noise power is be-
ing received: the wider the bandwidth, the higher the noise power. For simplicity, a
unity noise power is assumed in (1.16) and it is important to keep in mind that only
ratios between power levels have meaningful values, as described in Section 1.8.1.
In that sense, (1.16) defines the power scale.
The Nyquist criterion does not fully apply because the waveform c(t) is known
to the receiver beforehand. Consequently, there is no need to reconstruct the signal
waveform from the received samples [5]. As shown in Section 6.5, a good choice for the sample rate is exactly the Nyquist rate (e.g., equal to the noise bandwidth). Lower sample rates yield fewer independent signal samples, thereby generally decreasing the accuracy of the obtained estimates. The accuracy decrease can be modeled as an effective signal power loss. This observation is also true when multiple reflections of the same signal are received.
In the rest of this chapter, we will formulate generic conditions for navigation signals. Later, we illustrate the conditions with two GNSS signals and two pulsed terrestrial navigation signals. In addition to those parameters, the number of received signals, the amplitude, and the geometric placement of the transmitters affect the positioning result (see Sections 4.1.3 and 8.4).
1.5 Sampling
S_\mu = r_\mu + N_\mu = \sum_{m} r_{m;\mu} + N_\mu \qquad (1.15)
The deterministic part is the sum of all received signals rm;µ broadcast from one
or more transmitters propagating along one or more paths; for example, the index
m distinguishes not only the different emitters, but also the different propagation
paths. Furthermore, m may distinguish different signal components (e.g., data and
pilot signals) broadcast by the same transmitter.
The baseline assumption for the stochastic component of the received signal is to
model it as complex-valued uncorrelated white noise having unity variance in each
component
\langle N_\mu N_\nu \rangle_N = 0, \qquad \langle \bar{N}_\mu \bar{N}_\nu \rangle_N = 0, \qquad \langle N_\mu \bar{N}_\nu \rangle_N = 2\,\delta_{\mu,\nu} \qquad (1.16)
The white noise originates from the received and internally generated noise.
In general, it is assumed that the amplitudes of the individual noise random variables Nµ are Gaussian distributed. However, important results described in the following chapters also hold for arbitrary amplitude distributions of Nµ, provided that (1.16) and (1.17) hold. A common example of a non-Gaussian noise distribution is the ADC quantization noise, which will be discussed in Section 6.1.
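As a numerical illustration of (1.16) (not part of the book's accompanying software), the following MATLAB fragment draws complex noise with unity variance in each component and checks the second moments:

    % Complex white noise with unity variance per component, cf. (1.16).
    L = 1e6;                               % number of samples (illustrative)
    N = randn(1, L) + 1i*randn(1, L);      % E{Re^2} = E{Im^2} = 1

    m1 = mean(N.^2);                       % should approach 0
    m2 = mean(N.*conj(N));                 % should approach 2
    fprintf('<N N>  = %+.3f%+.3fi (expected 0)\n', real(m1), imag(m1));
    fprintf('<N N*> = %.3f          (expected 2)\n', m2);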
It is important to recognize that the comparatively simple white-noise spectral characteristic is sufficient to model many navigation receiver front ends, as long as the sample rate is properly chosen (see Section 6.5). In fact, if (1.16) is not fulfilled, which occurs when oversampling is employed and results in nonwhite noise, then the operation of spectral whitening can be applied as described in Section 6.4. Spectral whitening yields uncorrelated noise samples that allow the receiver to work with the simple noise model (1.16) and simultaneously reduces
1.8 Short-Period Signal Model
r_\mu = a\,c(t_\mu - \tau)\exp\left\{ i\left[\omega\left(t_\mu - \frac{L+1}{2 f_s}\right) - \varphi\right]\right\} \qquad (1.18)
Here, a denotes the signal amplitude in arbitrary units, τ is the delay (or code phase) of the signal in seconds, φ is the carrier phase of the signal in radians, and ω is the angular frequency plus Doppler in radians per second. All values are instantaneous values and refer to a particular interval. The relation of the values between the different intervals will not be specified here.
The signal amplitude a is a measure of the received signal power, and the delay τ relates to the geometric distance between transmitter and receiver. The carrier phase φ contains information about the geometric distance, but is also used to accommodate a possible broadcast navigation message. The angular frequency ω is given by the nominal frequency (being either zero, an intermediate frequency, or the nominal carrier frequency) plus a Doppler offset caused by the relative velocity between transmitter and receiver.
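To make the roles of a, τ, φ, and ω concrete, the following MATLAB sketch generates one short period of samples according to (1.15), (1.16), and (1.18), using a random ±1 chip sequence with rectangular chips as a stand-in waveform (all parameter values are assumed for illustration and are not taken from the book's software):

    % Toy sample generator following (1.15) and (1.18); values are illustrative.
    fs   = 4.092e6;                 % sample rate [Hz]
    L    = 4092;                    % samples in the short period (1 ms)
    tmu  = (1:L)/fs;                % sample epochs t_mu [s]
    fc   = 1.023e6;                 % chip rate [Hz]

    code = sign(randn(1, 1023));    % random +/-1 chips (stand-in for a PRN code)
    cwav = @(t) code(mod(floor(t*fc), 1023) + 1);   % rectangular chip waveform c(t)

    a     = 0.12;                   % amplitude [arbitrary units]
    tau   = 13.7e-6;                % code phase [s]
    phi   = 0.4;                    % carrier phase [rad]
    omega = 2*pi*1500;              % residual angular frequency (Doppler) [rad/s]

    r = a * cwav(tmu - tau) .* exp(1i*(omega*(tmu - (L+1)/(2*fs)) - phi));
    S = r + randn(1, L) + 1i*randn(1, L);   % add unit-variance complex noise, cf. (1.16)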
The signal c(t) is the baseband representation of the broadcast navigation signal waveform (possibly complex-valued). In the example of a GPS C/A-code transmitter, it is a 1-ms pseudorandom noise signal using a BPSK modulation scheme. The signal c(t) is affected by filters located on either the transmitter or the receiver side.
It is important to recognize that the signal c(t) can be quite arbitrary and could assume a CDMA spreading code or a pulsed waveform. The important requirement, however, is that the waveform c(t) must be known to the receiver before receiving
it. Additionally, five more requirements are formulated next. Three of these requirements, provided in Sections 1.8.1 through 1.8.3, are mostly of a formal nature (i.e., they relate to the definitions of constants and interval boundaries) and pose only a few constraints on the waveform itself. The other two requirements, given in Sections 1.8.4 and 1.8.5, are stricter and need to be fulfilled if the signal is to allow separation of Doppler estimates from delay estimates. The two conditions given in Sections 1.8.1 and 1.8.4 are extended to be compatible with the estimation scheme used (complex least-squares adjustment). The extensions are trivially fulfilled if the signal c(t) is real-valued, and the extensions ensure that the code phase, the Doppler,
and the complex amplitude estimates are uncorrelated in the event that only a line-
of-sight signal is present.
It should be noted that the requirements formulated next need to be fulfilled
sufficiently, but not necessarily exactly. Overall, they ensure that the Fisher informa-
tion matrix (4.60) can be sufficiently approximated by a diagonal matrix.
\frac{1}{L}\sum_{\mu=1}^{L} \left|c(t_\mu - \tau)\right|^{2} \approx 1 \qquad (1.19)
This is required so that only the signal amplitude a contains information about the received signal power; the waveform c(t) is assumed to be independent of the actual received power. This equation is approximately valid for all τ values of interest.
The numerical value of a itself is meaningless as it is expressed in arbitrary
units. However, it can be related to the ratio between signal power and noise power
spectral density C/N0, which is described in detail in Appendix A.2. Because of
(1.19), the following equation holds:
C/N_0 = \frac{f_s\, a^2}{2} \qquad (1.20)
The value of C/N0 is expressed in hertz.
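For example, (1.20) can be inverted to find the amplitude a that corresponds to a given C/N0; a short MATLAB sketch with assumed, illustrative numbers:

    % Relate amplitude a to C/N0 via (1.20); numbers are illustrative.
    fs     = 4.092e6;                % sample rate [Hz]
    CN0_dB = 45;                     % desired C/N0 [dB-Hz]
    CN0    = 10^(CN0_dB/10);         % [Hz]

    a = sqrt(2*CN0/fs);              % invert (1.20): C/N0 = fs*a^2/2
    fprintf('a = %.4f (arbitrary units) for C/N0 = %d dB-Hz at fs = %.3f MHz\n', ...
            a, CN0_dB, fs/1e6);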
Equation (1.19) implies that
\operatorname{Re}\left\{ \frac{1}{L}\sum_{\mu=1}^{L} c(t_\mu - \tau)\, c'(t_\mu - \tau)\right\} \approx 0 \qquad (1.21)
according to Appendix A.4.2. Note that c′ denotes the first derivative of the waveform c. To avoid correlations between the imaginary part of the code phase estimate and the complex signal amplitude, it is required that the imaginary part of the above expression vanishes; overall, we require
\frac{1}{L}\sum_{\mu=1}^{L} c(t_\mu - \tau)\, c'(t_\mu - \tau) \approx 0 \qquad (1.22)
This requirement ensures that estimates for the angular frequency and the car-
rier phase are uncorrelated with each other, which will be shown in Section 4.3.2.
This equation is approximately valid for all τ values of interest. The requirement
is fulfilled if the signal power is located symmetrically in time with respect to the
midpoint of the interval.
The constant χfreq measures the nonuniformity of the signal power distribution in time. If the signal power is constant over time (i.e., |c(t)|² = 1), then χfreq = 1. For single pulses, this constant can be much smaller than 1, causing a reduced frequency estimation accuracy. This equation is approximately valid for all τ values of interest.
and, additionally, it is required that the imaginary part vanishes; overall, it is re-
quired that
\sum_{\mu=1}^{L} \left(t_\mu - \frac{L+1}{2 f_s}\right) c(t_\mu - \tau)\, c'(t_\mu - \tau) \approx 0 \qquad (1.27)
This requirement ensures that Doppler estimates and code phase estimates are
uncorrelated, which will be shown in Section 4.3.2. This equation is approximately
valid for all τ values of interest. This requirement is nontrivial and might not hold
for certain waveforms c(t). An example for which it does not hold occurs if c(t) as-
sumes a single Gaussian bell-shaped curve. In that case, complex-valued delay and
complex-valued Doppler estimates are totally correlated.
\sum_{\mu=1}^{L} c_1(t_\mu - \tau)\, c_2(t_\mu)\, \exp\left\{ i\omega\left(t_\mu - \frac{L+1}{2 f_s}\right)\right\} \approx L\,\kappa(\omega)\, R_{c_1,c_2}(\tau) \qquad (1.28)
is approximately fulfilled for a properly chosen function κ(ω), with κ(0) = 1. Equation (1.28) is fulfilled for all signals occurring in a certain navigation problem. The function κ(ω) is universal and applies to transmitted, received, or internally generated signal pairs (e.g., the baseband signal plus the P-, D-, F-, or W-correlator reference signals of Chapter 7) and to any combination of them.
In the case of a signal power that is uniformly distributed in time, |c1(t)| = |c2(t)| = const., the function κ(ω) is given as
\kappa(\omega) \approx \frac{ i\left( e^{-i\omega T_{coh}/2} - e^{i\omega T_{coh}/2} \right) }{ \omega T_{coh} } = \frac{ 2\sin\!\left(\omega T_{coh}/2\right) }{ \omega T_{coh} } = \operatorname{sinc}\!\left( \frac{\omega T_{coh}}{2} \right) \qquad (1.29)
\left(-i\frac{\partial}{\partial\omega}\right) \left(\sum_{\mu=1}^{L} \left|c(t_\mu - \tau)\right|^{2} \exp\left\{ i\omega\left(t_\mu - \frac{L+1}{2 f_s}\right)\right\}\right)\Bigg|_{\omega=0} = \sum_{\mu=1}^{L} \left|c(t_\mu - \tau)\right|^{2}\left(t_\mu - \frac{L+1}{2 f_s}\right) \approx -i L \kappa'(0) \approx 0 \qquad (1.30)

\Rightarrow \kappa'(0) \approx 0
\left(-i\frac{\partial}{\partial\omega}\right)^{2} \left(\sum_{\mu=1}^{L} \left|c(t_\mu - \tau)\right|^{2} \exp\left\{ i\omega\left(t_\mu - \frac{L+1}{2 f_s}\right)\right\}\right)\Bigg|_{\omega=0} = \sum_{\mu=1}^{L} \left|c(t_\mu - \tau)\right|^{2}\left(t_\mu - \frac{L+1}{2 f_s}\right)^{2} \approx -L\,\kappa''(0) \approx \chi_{freq}\,\frac{L^{3}-L}{12 f_s^{2}} \qquad (1.31)

\Rightarrow \kappa''(0) \approx -\chi_{freq}\,\frac{L^{2}-1}{12 f_s^{2}}
\kappa(0) \approx 1, \qquad \kappa'(0) \approx 0, \qquad \kappa''(0) \approx -\frac{1}{12}\,\chi_{freq}\, T_{coh}^{2} \qquad (1.32)
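The sinc form (1.29) and the small-|ω| behavior summarized in (1.32) can be verified numerically. The following MATLAB sketch (illustrative only, not part of the book's software) evaluates the left-hand side of (1.28) for c1 = c2, τ = 0, and a constant-envelope signal with |c(tµ)|² = 1, and compares it with sinc(ωTcoh/2):

    % Numerical check of kappa(omega) against (1.29) for |c|^2 = 1 (illustrative).
    fs   = 4.092e6;                        % sample rate [Hz]
    L    = 4092;                           % 1-ms coherent interval
    Tcoh = L/fs;
    tmu  = (1:L)/fs;

    f = linspace(-2000, 2000, 401);        % Doppler offsets [Hz]
    w = 2*pi*f;
    kappa_num = zeros(size(w));
    for k = 1:numel(w)
        kappa_num(k) = sum(exp(1i*w(k)*(tmu - (L+1)/(2*fs))))/L;
    end
    kappa_ref = sin(w*Tcoh/2)./(w*Tcoh/2);
    kappa_ref(abs(w) < eps) = 1;           % remove 0/0 at omega = 0

    plot(f, real(kappa_num), f, kappa_ref, '--');
    xlabel('Doppler offset [Hz]'); ylabel('\kappa');
    legend('sum (1.28)', 'sinc (1.29)');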
1.9 Exemplary Signals

This section illustrates the generic signal model with four types of navigation signals. Table 1.2 summarizes signal parameters of the four exemplary navigation signals. For each signal, we chose a typical coherent integration time. The pulsed open-service (OS) pilot signal is introduced in detail in Section 1.9.3 and Chapter 10 and uses TR = 100 ms and TP = 4 ms.
The elements of the spreading code sequence cn assume values of –1 and +1, and the sequence is periodic with a period of NPRN = 1,023,

c_n = c_{n + m N_{PRN}}, \qquad m \in \mathbb{Z} \qquad (1.34)
The navigation message amplitude d(t) assumes values of –1 and +1 and re-
mains constant over the 20-ms bit duration. Spreading code sequences and bits are
synchronized.
The signal chip waveform is given by

p_\infty(n) = \begin{cases} 1, & 0 \le n < 1 \\ 0, & \text{otherwise} \end{cases} \qquad (1.35)
As with any PRN code sequence, the C/A-code sequence can be modeled by a sequence of uniformly distributed, binary (–1 and +1), and independent random variables with

\langle C_n C_m \rangle_C = \delta_{n,m} \qquad (1.36)
This model is only an approximation to the signal. Especially for short codes
like the C/A code, significant deviations (nonvanishing cross-correlation and side
autocorrelation peaks) may occur. However, (1.36) is useful because it allows a
uniform treatment of the entire GPS C/A-code signal family, independent from the
PRN code number.
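A quick numerical check of the idealization (1.36), using a randomly generated ±1 sequence in place of an actual C/A code (a MATLAB sketch for illustration only, not part of the book's accompanying software):

    % Empirical check of <C_n C_m>_C = delta_{n,m}, cf. (1.36); illustrative.
    Nprn   = 1023;
    trials = 2000;
    C      = sign(randn(trials, Nprn));            % independent +/-1 chips

    same  = mean(C(:,1).*C(:,1));                  % n = m   -> expect 1
    cross = mean(C(:,1).*C(:,2));                  % n ~= m  -> expect approx. 0
    fprintf('<C_n C_n> = %.3f, <C_n C_m> = %.3f (n ~= m)\n', same, cross);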
A filter affects the single-chip waveform and leaves the spreading code sequence untouched,

H\big(c_\infty(t)\big) = \sum_{n=-\infty}^{\infty} c_n\, H\big(p_\infty(t f_c - n)\big) = \sum_{n=-\infty}^{\infty} c_n\, p(t f_c - n) \qquad (1.37)
The filtered single-chip waveform p(t) must be normalized to fulfill the require-
ment of Section 1.8.1. Using (1.36), the correlation function of the signal can be
simplified as
R_{c,c}(\tau) = \frac{1}{L}\sum_{\mu=1}^{L} c(t_\mu - \tau)\, c(t_\mu) = \frac{1}{L}\sum_{\mu=1}^{L} \sum_{m,n=-\infty}^{\infty} c_m\, p(t_\mu f_c - m - \tau)\, c_n\, p(t_\mu f_c - n)

\approx \frac{1}{L}\sum_{\mu=1}^{L} \sum_{m,n=-\infty}^{\infty} \big\langle C_m\, p(t_\mu f_c - m - \tau)\, C_n\, p(t_\mu f_c - n) \big\rangle_C

= \frac{1}{L}\sum_{\mu=1}^{L} \sum_{m,n=-\infty}^{\infty} \delta_{n,m}\, p(t_\mu f_c - m - \tau)\, p(t_\mu f_c - n) \qquad (1.38)

= \frac{1}{L}\sum_{n=-\infty}^{\infty} \sum_{\mu=1}^{L} p(t_\mu f_c - n - \tau)\, p(t_\mu f_c - n)

\approx \frac{1}{L}\sum_{n=0}^{t_L f_c - 1} \sum_{\mu=1}^{L} p(t_\mu f_c - n - \tau)\, p(t_\mu f_c - n) \approx \sum_{n=0}^{t_L f_c - 1} R_{p,p}(\tau) = t_L f_c\, R_{p,p}(\tau)
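The shape of R_{c,c}(τ) implied by (1.38) can be visualized with a short MATLAB sketch; a random ±1 chip sequence with unfiltered rectangular chips stands in for the actual C/A code (an assumption made for illustration only):

    % Correlation function of a sampled random BPSK code, cf. (1.38); illustrative.
    fs   = 8.184e6;  fc = 1.023e6;                 % sample and chip rates [Hz]
    L    = 8184;                                   % 1-ms interval
    tmu  = (1:L)/fs;
    code = sign(randn(1, 1023));
    cwav = @(t) code(mod(floor(t*fc), 1023) + 1);  % rectangular chip waveform

    tauChips = -2:0.05:2;                          % code offsets [chips]
    R = zeros(size(tauChips));
    for k = 1:numel(tauChips)
        tau  = tauChips(k)/fc;
        R(k) = sum(cwav(tmu - tau).*cwav(tmu))/L;
    end
    plot(tauChips, R); xlabel('\tau [chips]'); ylabel('R_{c,c}(\tau)');
    % For an unfiltered rectangular chip, R approximates the unit triangle
    % max(1-|tau|, 0), plus small residual code noise.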
Software-Defined Radio
2.1 Definitions
digital sampling. It is known that for direct sampling of a broad frequency band (e.g., 0.8–2.2 GHz, which contains all digital civilian mobile phone communication standards), not only is a high sampling rate (e.g., 4.4 GHz) required, but also a high bit resolution to handle the signal dynamics of all signals in the frequency range. Assuming a constant sampling rate, the ADC bit resolution increases by 1.5 bits every 8 years [4]. It is therefore expected that in future SDRs, only limited parts of the RF spectrum will be sampled [3]. Furthermore, the power consumption of the ADC and of the subsequent signal-processing units would increase dramatically if they had to operate over a very broad frequency band.
After the signal has been converted to its digital form, it is processed by the
radio. Digital signal-processing elements in a SDR exist in configurable ASICs, field-
programmable gate arrays (FPGAs), digital signal processors (DSPs), and general
purpose processors (GPPs). Note that this work focuses on the receive function of a
radio, but that the discussion also applies to the transmission path if the data flow
is reversed.
Reconfigurable ASICs are related to the technique called PaC-SDR. For PaC-
SDR, several communication standards are investigated with respect to their
similarities and differences. Common algorithms are designed, which can then be
configured using different parameter sets to work for the investigated communi-
cation standards. These algorithms might then be realized as ASICs or software
modules. PaC-SDR implies that the standards under consideration do not differ
completely. PaC-SDR can be considered as the least flexible form of a SDR as in-
field software updates are not possible. In contrast, modular SDR (Mod-SDR) is
a technique to run the required algorithms as software modules on exchangeable
hardware (i.e., on a processor or on FPGAs). This not only allows reconfiguration
of the algorithms with different parameters, but also allows complete reorganiza-
tion or updating of the algorithm. The main problem with Mod-SDR is that the
software and the underlying hardware need to be compatible. Wiesler discusses
three methods to run Mod-SDR software [5]:
In the context of GNSS SDR, the third solution dominates although some ef-
forts have been undertaken to realize a GNSS radio under the JTRS SCA [6] or
as a GNU radio [7]. It is also known that GNSS SDRs often serve as prototype
platforms for later ASIC designs or the investigation of different signal-processing
algorithms. The use of prototype SDRs is common in research and development
but is usually not relevant for deployed radios. Overall, the term SDR covers an
enormous number of techniques.
offers the possibility of reprogramming radios in the field, allowing changes in cryp-
tographic methods to be performed easily. Of importance is the potential to form
ad-hoc networks in the field that allow the exchange of various data (e.g., speech,
text, maps) between soldiers and the command center.
Several programs exist in different nations to replace legacy radios with SDRs.
Due to the size of those programs, great efforts have been undertaken to standard-
ize and coordinate the development efforts. The US Department of Defense (DoD)
program Joint Tactical Radio System (JTRS), briefly introduced in Section 2.3, is
based on the work of the Joint Program Executive Office for the JTRS [2, 11]. It is
important to note that other programs exist as well [12].
The JTRS program was initiated in early 1997 to replace existing legacy radios
in the DoD inventory. It evolved from separate radio replacement programs to an
integrated effort to network multiple weapon system platforms focusing especially
on the last tactical mile. JTRS is intended to link the global information grid (GIG)
to the military personnel. This goal would be achieved by developing a family of
interoperable SDRs that operate as nodes in a network to ensure secure wireless
communication and networking services for mobile and fixed forces.
Within the context of JTRS, waveform is a technical term of importance. A
waveform is the entire set of radio and communication functions that occur from
the user’s input to the radio frequency output and vice versa. A JTRS-waveform
implementation consists of a waveform application code, radio set devices, and
radio system applications. Originally, there were 32 JTRS waveforms; that num-
ber has been reduced to nine: Wideband Networking Waveform (WNW), Soldier
Radio Waveform (SRW), Joint Airborne Networking–Tactical Edge (JAN-TE),
Mobile User Objective System (MUOS), Single Channel Ground and Airborne Ra-
dio System (SINCGARS), Link-16, Enhanced Position Location Reporting System
(EPLRS), HF, and UHF SATCOM [13]. The JTRS also considered different form
factors that are defined as the linear dimensions and configuration of a device.
The JTRS program tries to exploit key SDR elements to achieve the develop-
ment of the system within the allocated budget. The key element—government-
purpose rights—ensures software reusability among different product lines. This is
achieved partly by the JTRS information repository, which is available to industry
vendors. Currently, it consists of 3.5 million lines of code, including 15 waveforms
and two operating environments.
The next key element—the open-systems architecture approach—focuses on an
overarching systems-engineering model. This model directs performance, design specifi-
cations, and standards for the operation of the system. It is based on the freely available
software communication architecture (SCA) [13]. The SCA is based on a standardized
operating environment implementing portable operating system interface (POSIX) and
middleware (software that ties together other software blocks) implementing the Com-
mon Object Requesting Broker Architecture (CORBA) standard.
The key feature, as shown in Figure 2.1, is that the receiver is a multiprocessor system. The software communications architecture (SCA) requires the underlying operating system to realize POSIX application programming interfaces (APIs) between the operating system and the applications. The communication of different parts of the software (objects) running on different processors (GPPs, DSPs, and FPGAs) is of the utmost importance. This is facilitated by the CORBA middleware.
SCA. However, investigations to implement the GPS waveform under the SCA are
underway [6].
The idea of a GNSS SDR gained broad attention with the Ph.D. thesis of D. Akos
[15]. In that work, the core concepts were implemented and 30 seconds of GPS data
was processed until a SPS position fix was achieved. To demonstrate the flexibility,
GLONASS signals were successfully acquired and tracked.
The work by Akos performed all the signal processing in the postprocessing
mode, but soon thereafter a real-time implementation was achieved on a DSP and
on a general-purpose PC [16].
There are many pieces of software from different research groups which do
one part (e.g., fast Fourier transform (FFT) acquisition of CDMA signals) or all
of the GNSS processing in postprocessing and do not claim to be a SDR [17].
Because of the inherent flexibility of SDR, it is difficult to give a precise definition
for a GNSS software radio. For example, a GNSS receiver always has a proces-
sor that is responsible for the user interface and the position, velocity, and time
(PVT) calculation. Often this processor is used for tracking-loop control. In con-
trast, simple communication receivers (e.g., an FM radio) can be built without any
programmable elements. In a strict sense, even very old GPS receivers were SDRs
because at least one programmable element was involved. Additional confusion
arose because GNSS receiver prototyping is commonly done using an FPGA for sig-
nal correlation. Whereas this approach clearly uses a SDR, within the GNSS com-
munity there is a tendency to call only an operational DSP/GPP receiver a software
receiver.
Within this work, we define a GNSS software receiver as a real-time (capable)
GNSS receiver that does all the signal processing after downconversion and sam-
pling on a general-purpose processor. This processor can either be part of an em-
bedded platform or part of a standalone computer. Eventually, some small part of
the signal conditioning (e.g., IF filtering or sample rate reduction) can be done by
a programmable logic device (PLD) or FPGA. In contrast if the major part of the
signal correlation is done in an FPGA, then it shall be called an FPGA receiver. If
the signal correlation is done on an ASIC, the receiver will be called a hardware
receiver. Furthermore, we use the terms receiver and radio synonymously.
USB 2.0 front ends, mostly for the PC sector, see Figure 2.3;
Front ends with proprietary (e.g., serial SPI, SSP) interfaces at the digital-
signal level for the embedded sector;
Figure 2.3 Different software receiver front ends, all with USB connectors, from left: embedded
[19] (copyright Maxim Integrated Products), R&D L1 [20] (copyright IFEN GmbH), and reference
station GNSS receiver [21]. (Copyright Fraunhofer IIS; all images reprinted with permission.)
Front ends using COTS ADC converters using their own interface or industry
standards such as PCI or PCI64.
The number of frequencies, the bandwidth of the sampled frequency bands, the
sample rate, and the number of ADC bits influence, to a large extent, the architec-
ture of the software receiver. This is mainly because, with an increasing amount of
incoming data, the processing power requirements and IF sample interface specifi-
cations change. On the other hand, several front-end architectures such as (super)
heterodyne, direct baseband, low-IF, and direct-RF sampling have been used in
conjunction with GNSS software receivers, but the influence of the front-end archi-
tecture on the software is rather low. It is essentially sufficient to adapt the IF within
the software and eventually work with complex (I/Q) or real (I) input samples.
A good front-end design is crucial to achieving high receiver performance be-
cause analog design mistakes can often not be compensated by the processing soft-
ware. Good front ends are, of course, also required for hardware receivers. The
only difference between a hardware receiver and a software receiver front end is
that the software receiver tries to keep the sample rate as low as possible to reduce
the computational load. Typically, the sample rate is chosen slightly above or even
below the Nyquist rate, as will be discussed in Section 6.5. In contrast, the hard-
ware receiver might use a higher sample rate to achieve, for instance, a small corre-
lator spacing. For example, a sample rate of 16 MHz for a GPS C/A-code hardware
receiver allows a correlator spacing down to 0.064 chip, whereas a sample rate of
4 MHz limits the spacing to 0.25 chip (assuming that different correlators are ob-
tained by shifting the generated PRN code signal by one sample).
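The spacing figures quoted above follow directly from the ratio of chip rate to sample rate; a small MATLAB check (the computation is merely illustrative):

    % Minimum correlator spacing when correlators are one sample apart.
    fc = 1.023e6;                        % GPS C/A chip rate [Hz]
    for fsamp = [16e6 4e6]               % the two sample rates from the text
        fprintf('fs = %2.0f MHz -> spacing = %.3f chip\n', fsamp/1e6, fc/fsamp);
    end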
To illustrate typical front-end designs for software receivers, consider the four
types of front ends whose key parameters are listed in Table 2.1. The first is used
for an embedded system focusing on mass-market applications. The second is a
research and development USB-based front end for L1 signals. The third has been
partly specified at the University FAF Munich and is designed for a GNSS refer-
ence station receiver. The fourth is a flexible signal analysis system (created as a
laboratory setup at the University FAF Munich) that is able to receive two L-band
frequency bands with arbitrary center frequencies.
The information gathered for the embedded system is based on commercial
information, and it should be noted that no specification of the IF filter could be
found [22]. In fact, the data sheet states that the IF filter order should be tailored
to the specific application and that, for example, a standalone GPS receiver will not
require stop-band attenuation as strong as that required by a GPS receiver for an in-
tegrated wireless handset [19]. Furthermore, the spectral purity [i.e., the maximum
height of spikes within the IF power spectral density (PSD)] has not been verified
experimentally.
The embedded system uses two downconversion stages to bring the GPS L1
signal down to 3.78 MHz. Eventually, this allows for the use of a wider range of ref-
erence oscillator frequencies compared to a single downconversion stage. It includes
a SPI serial interface. The SPI is a de facto standard; many processors for embedded
systems include SPI controllers.
The R&D L1 front end converts the RF to the IF of 96 MHz in a single step
and then uses bandpass sampling to further downconvert it to 4.348 MHz [20]. The
sampled IF signal is buffered before transfer over the USB port.
The reference station front end uses a heterodyne architecture. Each RF fre-
quency band is (separately) downconverted to 53.8 MHz. A discrete bandpass filter
limits the bandwidth before sampling and a real-valued IF signal is digitized. The
front end uses a 2- or 4-bit ADC. It has been demonstrated that the 4-bit ADC provides a significant performance gain when interference is present, because it is less sensitive to saturation effects.
The highest flexibility can be achieved with a laboratory setup consisting of a
commercial ADC card (an ICS-572B is used here) with commercial off-the-shelf
(COTS) mixers and local oscillators. The components are connected through
SMA connectors. COTS mixers and low-noise amplifiers (LNAs) are available up
to frequencies of several GHz. The RF is downconverted to a selectable IF with
a single mixer. The ADC Nyquist bandwidth is 52.5 MHz. COTS lowpass fil-
ters are used to discard higher-frequency components. The design requires that
the antenna LNA already includes an RF filter to suppress the image frequency
at RF - 2·IF. Furthermore, the experimental setup does not give a completely clean
digital IF signal and several spikes appear. The high number of bits results in a
high dynamic range of over 80 dB, making the system well suited for interference/
jamming applications.
When comparing the four front ends, one clearly sees the increase in the num-
ber of frequency bands, bandwidth, bits, and sample rate from the embedded solu-
tion to the signal analysis system. The noise figure is best for the reference station
front end, but it should be emphasized that the total receiver noise figure is mostly
determined by the low-noise amplifier (LNA) that is either integrated into the an-
tenna or located directly behind the antenna. The noise figure of the front end is less
important. All four front ends sample a real signal. This avoids the use of a second
ADC and simplifies the data transfer.
side GPS receiver has a very short TTFF and is capable of tracking the high signal
dynamics.
An asset (e.g., container or car) tracking system could periodically record small
portions of IF samples that are temporarily stored in a nonvolatile memory. At some
point, the data is transferred to a PC that runs the server-side radio. By utilizing
ephemeris data and navigation data bits, the server-side radio can do a data wipe-
off to increase the acquisition sensitivity. Depending on the duration of the sample
snapshot, even high positioning accuracy can be obtained. A related mass-market
application could be to localize images shot by a digital camera. The IF samples
would then be stored with the image [32].
It should also be noted that all receiver performance parameters, such as sen-
sitivity, accuracy, and availability can be perfectly optimized if a GNSS signal is
recorded and processed in a postprocessing manner. For example, in postprocessing
mode, an unlimited number of correlators can be used.
A low number of bits is used to represent the received and the locally generated signal.
The received signal is processed as a stream (e.g., older signal samples are not available for reprocessing).
Development efforts are generally high.
Inclusion of high-rate aiding data such as data wipe-off, (semi-)codeless techniques, or deep GPS/INS integration requires explicit synchronization lines.
The GNSS signal and system design took into account these limitations. For
example, the GPS signals at baseband can be represented by one bit. The cross-
correlation among different signals is minimized so that a tracking channel can
generally ignore the influence of PRN codes broadcast from satellites other than the
one whose signal is being tracked.
However, these restrictions still limit the signal-processing capabilities of a
GNSS receiver and those limitations become relevant if the receiver performance
needs to be optimized. Currently, the receiver core technologies listed in Table 2.3
have been identified as areas where improvements can still be expected.
The presented limitations are not intrinsically hardware-receiver specific, but
could be solved more easily with a software solution. A high-end GNSS SDR will
run on a high-performance computer that generally consumes increased electrical
power. If it is assumed that sufficient electrical power is available (e.g., via a power
line), then a high-performance computer can provide nearly unlimited processing
power for the radio, simply by using multiple processors or cores. Because these
computers are equipped with large storage devices (RAM or hard discs) they over-
come nearly all deficiencies mentioned in Table 2.3. This work proposes algorithms
that exploit this increased processing power and demonstrates how the resulting
GNSS SDR outperforms an ASIC-based solution.
References
[1] Mitola, J., “The Software Radio Architecture,” IEEE Commun. Mag., Vol. 33, No. 5,
1995, pp. 26–38.
[2] Tuttlebee, W., Software Defined Radio: Origins, Drivers and International Perspectives,
New York: Wiley, 2002.
[3] Rhiemeier, A.-R., Modulares Software Defined Radio, University Karlsruhe (TH), Kaiser-
straße 12, D-76131 Karlsruhe, http://digbib.ubka.uni-karlsruhe.de/volltexte/1000001174,
2004.
[4] Walden, R. H., “Analog-to-Digital Converter Survey and Analysis,” IEEE J. Sel. Areas Commun., Vol. 17, No. 4, 1999, pp. 539–550.
[5] Wiesler, A., Parametergesteuertes Software Radio für Mobilfunksysteme, University Karl-
sruhe (TH), Kaiserstraße 12, D-76131 Karlsruhe, http://digbib.ubka.uni-karlsruhe.de/
volltexte/3222001, 2001.
[6] Brown, A., and D. Babich, “Implementing a GPS Waveform Under the Software Com-
munications Architecture,” Proc. 19th Int. Technical Meeting of the Satellite Division of
the Institute of Navigation (ION-GNSS) 2006, Fort Worth, TX, September 26–29, 2006,
pp. 2334–2345.
[7] Danielsen, T., “Creating a GNSS Receiver From Free Software Components,” Proc. 20th
Int. Technical Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS)
2007, Fort Worth, TX, September 25–28, 2007, pp. 2731–2741.
[8] Blossom, E., “Exploring GNU Radio,” Free Software Foundation, http://www.gnu.org/
software/gnuradio/doc/exploring-gnuradio.html, 2004.
[9] Ettus Research, “Homepage of Ettus Research,” Ettus Research LLP, http://www.ettus.
com, 2007.
[10] Peng, S., and B. M. Ledvina, “A Real-Time Software Receiver for the GLONASS L1 Sig-
nal,” Proc. 21st Int. Technical Meeting of the Satellite Division of the Institute of Naviga-
tion (ION-GNSS) 2008, Savannah, GA, September 16–19, 2008, pp. 2268–2279.
[11] Joint Program Executive Office for the Joint Tactical Radio System, “Joint Tactical Radio
System,” U.S. Department of the Navy, http://jpeojtrs.mil/, 2007.
[12] Finnish Defense Forces, “Finnish Software Radio Project,” Finnish Defense Forces Telecom-
munication Laboratory and Centre for Wireless Communications, http://www.mil.fi/laitok-
set/pvtt/fsrpbook.pdf, 2007.
[13] Anderson, S., and S. A. Davis, “The Joint Tactical Radio System—Reloaded,” CHIPS - The Department of the Navy Information Technology Magazine, July–September 2006, pp. 6–9.
[14] Bard, J. D., “Joint Tactical Radio System,” Space Coast Communication Systems, Inc.,
http://www.spacecoastcomm.com/docs/JTRS.pdf, 2003.
[15] Akos, D., A Software Radio Approach to Global Navigation Satellite System Receiver
Design, Athens, OH: Ohio University, 1997.
[16] Akos, D. M., et al., “Real-Time GPS Software Radio Receiver,” Proc. Institute of Navi-
gation National Technical Meeting (ION-NTM) 2001, Long Beach, CA, January 22–24,
2001, pp. 809–816.
[17] Cheng, U., W. J. Hurd, and J. I. Statman, “Spread-Spectrum Code Acquisition in the Pres-
ence of Doppler Shift and Data Modulation,” IEEE Trans. Commun., Vol. 38, No. 2,
1990, pp. 241–250.
[18] Pany, T., J.-H. Won, and G. Hein, “GNSS Software Radio, Real Receivers or Just a Tool
for Experts?” InsideGNSS, Vol. 1, No. 5, 2006, pp. 48–56.
[19] Maxim Integrated Products, “MAX2741, Maxim Integrated L1-Band GPS Receiver
(19-3559; Rev 0),” Maxim Integrated Products, Inc., http://www.maxim-ic.com/products/
wireless/gps/max2741.cfm?CMP=490, 2005.
[20] IFEN GmbH, “NavX®-NSR - GPS/Galileo Navigation Software Receiver, Brochure,”
IFEN GmbH, http://www.ifen.com/content/flyer/NavX-NSR-Flyer.pdf, 2007.
[21] Fraunhofer Institut für Integrierte Schaltungen, “Triband Frontend L1, L2 and L5 with
USB,” Fraunhofer Institut für Integrierte Schaltungen, http://www.iis.fraunhofer.de, 2008.
[22] NXP Software, “swGPSTM Personal, Software GPS solution for PNDs, PMPs and smart-
phones,” NXP Software, http://www.software.nxp.com/assets/Downloadablefile/swGPS-
personal_vs3-13467.pdf, 2007.
[23] Tsui, J. B. Y., Fundamentals of Global Positioning System Receivers: A Software Approach,
2nd ed., New York: Wiley, 2005.
[24] Borre, K., et al., A Software-Defined GPS and Galileo Receiver: A Single Frequency Ap-
proach, Boston, MA: Birkhäuser, 2007.
[25] PR Newswire, “ASUS Selects NXP Software’s swGPS™ for World’s First Mainstream
GPS-Enabled Notebook PC,” http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=
109&STORY=/www/story/09-13-2007/0004662288&EDATE=, 2008.
[26] SiRF Technology, “Company homepage,” SiRF Technology, Inc., http://www.sirf.com,
2008.
[27] Fastrax, “White paper on Fastrax Software GPS receiver: Smart Positioning with Fastrax
Software GPS Receiver,” Fastrax, Ltd., http://www.fastraxgps.com, 2008.
[28] CSR plc, “LC7830FM: CSR’s combined solution for GPS and FM,” CSR plc, http://www.
csr.com, 2008.
[29] Trimble Navigation, Ltd., “Trimble News Release: Trimble and u-Nav Offer High Per-
formance, Low Power GPS Chipset Solutions, TrimCore NEu,” Trimble Navigation, Ltd.,
http://www.trimble.com/news/release.aspx?id=030806a, 2006.
[30] Brown, A., P. Brown, and J. Griesbach, “GeoZigBee: A Wireless GPS Wristwatch Tracking
Solution,” Proc. 19th Int. Technical Meeting of the Satellite Division of the Institute of Navi-
gation (ION-GNSS) 2006, Fort Worth, TX, September 26–29, 2006, pp. 2883–2888.
[31] Won, J. H., S. J. Ko, and J. S. Lee, “Implementation of External Aiding and Novel Pseudo-
range Generation Algorithms for Fast TTFF of GPS Translator System,” Proc. 12th GNSS
Workshop, Jeju Island, Korea, December 1–2, 2005.
[32] NXP Software, “NXP SnapSpot GPS Technology and JOBO photoGPS capture a location
in an instant,” NXP Software, http://www.software.nxp.com/?pageid=139, 2007.
[33] Martin-Neira, M., “A Passive Reflectometry and Interferometry System (PARIS)—Applica-
tion to Ocean Altimetry,” ESA Journal, Vol. 17, No. 4, 1993, pp. 331–355.
[34] Gleason, S., “An Open Source Software Receiver for Bistatic Remote Sensing,” Proc. 20th
Int. Technical Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS)
2007, Fort Worth, TX, September 25–28, 2007, pp. 2742–2748.
[35] Gleason, S., et al., “Detection and Processing of Bistatically Reflected GPS Signals from
Low Earth Orbit for the Purpose of Ocean Remote Sensing,” IEEE Trans. on Geoscience
and Remote Sensing, Vol. 43, No. 6, 2005, pp. 1229–1241.
[36] Pósfay, A., T. Pany, and B. Eissfeller, “First Results of a GNSS Signal Generator Using a
PC and a Digital-to-Analog Converter,” Proc. 18th Int. Technical Meeting of the Satellite
Division of the Institute of Navigation (ION-GNSS) 2005, Long Beach, CA, September
13–16, 2005, pp. 1861–1870.
[37] Averna, The Test Engineering Company, “URT, Universal Receiver Tester,” Averna, Inc.,
http://www.averna.com, 2008.
[38] Humphreys, T. E., et al., “Assessing the Spoofing Threat: Development of a Portable
GPS Civilian Spoofer,” Proc. 21st Int. Technical Meeting of the Satellite Division of the
Institute of Navigation (ION-GNSS) 2008, Savannah, GA, September 16–19, 2008,
pp. 2314–2325.
[39] Humphreys, T. E., L. E. Young, and T. Pany, “Considerations for Future IGS Receivers,”
Proc. American Geophysical Union Fall Meeting, San Francisco, CA, December 15–19,
2008.
Chapter 3
GNSS Receiver Structure and Dataflow
In the following section, the GNSS sample handling shall be described. The focus
will be on the real-time mode. In addition, several possibilities for postprocessing
shall be described.
It should be mentioned that by counting the GNSS signal samples, the receiver’s
internal time base is realized, which shall also be described below.
For GNSS receivers, there exist several methods of how IF signal samples can
be accessed by the signal processing unit. An overview is shown in Figure 3.2. In
the simplest case, an incoming IF sample is processed immediately after its output
by the ADC.
This method is used in most hardware-based receivers that do not buffer the IF samples and have the possibility of processing the samples instantaneously within multiple channels (this implies, for example, that all code generators and correlators run at the ADC rate or faster). If the hardware receiver uses a massively parallel
digitized samples are transferred to the processing platform running the software
receiver. The platform sees the received signal as a continuous stream of samples.
There may exist multiple synchronous streams of samples if multiple frequency
bands are received (e.g., GPS L1 and L2).
3.1.1.1 Timing
The streams are required to be synchronously derived from the ADC output; no
sample must be lost when the data is transferred from the ADC to the processing
platform. The ADC itself is required to be synchronized to the front-end oscillator,
which also controls signal down conversion. As a consequence, the internal time base
can be realized by counting the number of received GNSS signal samples. The inter-
nal receiver time is determined by a startup epoch value plus the number of received
samples divided by the sample rate. The value of the startup epoch can, for example,
be read from the PC clock, but can also be set to zero. If multiple streams are pres-
ent, then every stream can be used for time-based calculations because all streams are
required to be synchronous. After reading the startup epoch, external clocks (like the
PC clock) are no longer used and the internal time base is completely based on count-
ing the received samples. After a PVT solution is available, the receiver clock error
is obtained; it estimates the difference between the GNSS time scale and the internal
receiver time scale. It is composed of a constant part related to the startup epoch reading and a time-variable part related to the front-end oscillator stability.
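A minimal MATLAB sketch of this sample-counting time base (variable names, the packet size, and the startup-epoch source are assumptions for illustration and do not reflect the actual receiver code):

    % Receiver-internal time base realized by counting IF samples (illustrative).
    fs           = 20.48e6;            % front-end sample rate [Hz], assumed
    startupEpoch = now*86400;          % e.g., PC clock reading at startup [s]
    nSamples     = 0;                  % running count of received samples

    % For every IF sample packet received from the front end:
    packetLen = 65536;                 % samples per packet, assumed
    nSamples  = nSamples + packetLen;
    rxTime    = startupEpoch + nSamples/fs;   % internal receiver time [s]

    % After a PVT solution, the receiver clock error maps rxTime to GNSS time:
    % gnssTime = rxTime - clockError;  (clockError estimated in the PVT solution)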
It should be noted that reading and adjusting the PC clock and establishing a relationship to the IF sample stream can be quite imprecise (e.g., on the order of several tens of milliseconds under the Windows operating system due to the 10-ms granularity of the Windows task scheduler). There exist special items of hard-
ware, such as an IEEE1588-enabled PCI network interface card, that allow sample
synchronization down to 100-ns accuracy, provided that a hardware connection
between the front end and the interface card is established [1]. If the IEEE1588
time server is properly configured, the samples can be directly synchronized to a
GNSS time scale without the necessity to compute a PVT solution. This could be
important for indoor positioning, where precise synchronization with approximate
coordinates can shrink the acquisition search space drastically.
3.1.1.2 Batch-Processing
The GNSS sample streams are based on a relatively high sample rate of, at minimum, several megahertz. Because of overhead operations (loop control, data movement), direct processing of each single sample on a sample-by-sample basis is computationally too demanding for real-time operation in a GNSS SDR (in contrast to hardware receivers based either on ASICs or FPGAs). Instead, the samples are grouped into units called batches. Certain processing parameters (e.g., NCO rates) are kept constant for all the samples in one batch. This approach is commonly called batch processing. There exist several possibilities of what to call those batches; in the following, the terms packets and frames will be used. These terms do not have any specific meaning by themselves, but they distinguish several ways of how to group GNSS signal samples. They will be explained later.
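The batch-processing idea can be sketched as follows (a simplified MATLAB illustration; the packet and frame sizes and the processFrame call are hypothetical and do not reflect the ipexSR implementation):

    % Batch processing: NCO rates etc. are held constant over each batch (illustrative).
    fs       = 20.48e6;                % sample rate [Hz], assumed
    packet   = randn(1, 65536);        % one packet of IF samples (placeholder data)
    frameLen = 2048;                   % samples per frame, assumed
    nFrames  = floor(numel(packet)/frameLen);

    for k = 1:nFrames
        frame = packet((k-1)*frameLen + (1:frameLen));
        % Correlate 'frame' against the local replica using NCO rates that stay
        % fixed for the whole frame; rates are updated only at frame boundaries.
        processFrame(frame);           % hypothetical channel-processing function
    end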
caused by the processing and the latency introduced when changing the output
state (e.g., changing the pulse-per-second signal state from high to low). The
error introduced by the extrapolation can be exactly evaluated (and consequently
be minimized) because the receiver itself relies on assumptions on the signal dynam-
ics (e.g., tracking loop filter bandwidths or positioning Kalman filter settings). Also,
hardware receivers have latencies of at least 1–20 ms caused by the tracking loop
update only being performed after the coherent integration has finished [2].
The ipexSR is soft real-time capable. There is no hardware synchronization between the receiver output (e.g., the computer's serial ports) and the GNSS front end, and it would not make much sense to change the software to be hard real-time capable, a task that would require porting the source code onto a real-time OS.
orders of magnitude slower because that data has to be read from the hard disk if
the signal duration under consideration is longer than a few seconds.
The ipexSR architecture is described by the module diagram shown in Figure 3.4. The individual modules have self-explanatory names and are described in detail in the following sections. A module is actually realized as a number of C++ classes. The ipexSR is a multithreaded program. Modules that are executed within the same thread have the same style in Figure 3.4.
The data exchanged by the modules is described in Table 3.1. As the software receiver is configurable, the number of modules and the number of threads are configuration-dependent. Specifically, the number of receiver modules in Figure 3.4
depends on the configuration, and only for Receiver 1 in Figure 3.4 is the master
channel/channel structure shown. All other receivers share the same structure, with
the exception of the last receiver, which acts as a spectrum analyzer.
The main data flow in the software receiver is as follows. The USB front-end
driver collects the IF samples from the GNSS front end, groups them into packets
and stores them in the IF sample buffer. The packets also contain data from external
sensors captured within the same time span covered by the respective packet. The
master receiver retrieves a packet from the IF sample buffer and passes it to the
acquisition manager and the receivers. The acquisition manager, with the Level 1
and 2 acquisition units, acquires the GNSS signals and the receivers track the GNSS
signals. A dedicated receiver acts as an IF spectrum analyzer. Each receiver utilizes
several master channels to track different satellites. Each master channel needs one
or two channels that do the correlation for the data and for the pilot signal com-
ponent. The master receiver retrieves from each receiver the pseudorange measure-
ments and passes them to the navigation processor. A PVT solution is calculated,
verified, visualized, and output to other user applications. The most recent PVT is
kept in memory (called receiver status) and is used to aid signal acquisition (e.g.,
Doppler prediction) and tracking (e.g., vector tracking).
In the following subsections, a brief functional description of each module shall
be given.
front-end driver, which includes the sensor data in the IF sample packets. Currently,
different IMUs, a magnetometer, a barometer, Wi-Fi power readings, and external
NMEA strings are supported.
A particularly simple external sensor is the PC clock, which can (optionally) be read on a continuous basis. For each IF sample packet, the corresponding PC clock reading is stored. Reading the PC clock allows the software receiver to also determine the PC clock error with respect to a GNSS time scale. Assuming that the PC clock error varies smoothly, this allows other applications running on the same PC to have access to the GNSS time scale. If the PC is equipped with a special clock, as described by researchers of the Institute of Embedded Systems, the time transfer could potentially reach submicrosecond accuracy [1]. The internal PC clock alone allows only an accuracy of several tens of milliseconds.
3.2.6 Receiver
For each tracked GNSS service (e.g., the GPS C/A-code, or the Galileo E5a Open
Service) there exists one receiver module that controls a configurable number of
master channels to track satellite signals for this service.
The receiver module does service-specific preprocessing (e.g., pulse blanking for
E5/ L5/ E6 services) and then subdivides the sample packet into smaller frames, as
described in Section 3.1.1.2. Within a loop, all frames are processed by one master
channel by calling the respective master channel functions. After the loop ends, the
receiver advances to the next master channel. This has been found to be more ef-
fective than passing one frame of data to all master channels and then advancing
to the next frame. All the channel configuration data including PRN codes are kept
in the CPU’s memory caches. The receiver module also controls the multicorrelator
behavior of the master channels, which are later used for signal quality monitoring.
If the receiver is configured to work in the multiplexing multicorrelator mode, it
sequentially activates and stops the multicorrelator mode of the respective master
channels.
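The following sketch (with hypothetical class and method names, not the actual ipexSR code) illustrates the cache-friendly processing order described above: all frames of the current packet are passed to one master channel before the receiver advances to the next one.

#include <vector>

struct Frame { /* a small slice of IF samples */ };

class MasterChannel {
public:
    void processFrame(const Frame& frame) {
        // correlation and tracking update for this channel (omitted)
        (void)frame;
    }
};

class Receiver {
public:
    // Inner loop over frames, outer loop over master channels: the channel
    // state and its PRN code remain in the CPU caches while all frames of
    // the packet are processed.
    void processPacket(const std::vector<Frame>& frames) {
        for (MasterChannel& mc : masterChannels_) {
            for (const Frame& frame : frames) {
                mc.processFrame(frame);
            }
        }
    }
private:
    std::vector<MasterChannel> masterChannels_;
};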
Normally there exists a special receiver module—called the spectrum analyzer—
that computes and monitors the power spectral density of the received IF sample
streams. This receiver module neither uses master channels nor subdivides packets
into frames.
3.2.8 Channel
The channels run within the same thread as their controlling receiver module. One channel retrieves IF sample frames, together with NCO values, from the controlling master channel. The channel performs reference signal generation, correlation, bit synchronization, navigation data bit extraction, and navigation data message decoding. The channel returns, at maximum, one set of correlation values to the master channel per frame. Decoded navigation data messages are stored in the navigation record data structure.
After an acquisition has been initiated, the acquisition manager first determines
which signals can be expected by the receiver to be above a certain elevation cutoff
angle and which are already tracked. If a PVT solution is available and if ephemeris
or almanac data is available, the code phase and Doppler search range are deter-
mined. If both values can be determined very precisely (e.g., less than one chip and
a few hertz accuracy) then this signal is considered to be acquired. This is called
vector acquisition because, similar to a VDLL, the code phase and the Doppler
are derived from the navigation solution. If vector acquisition is not possible, the
parameters are put into a list. This is repeated for all signals and finally the list is
passed to the level-1 (cold-start) acquisition module. This module tries to acquire
the signals contained in that list by FFT methods. Signals that cannot be acquired by
the level-1 module are then passed to the level-2 (warm-start) acquisition module.
Finally, the results from all three acquisition methods (vector, level 1, and level 2)
are combined and returned to the master receiver.
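A compact sketch of this decision logic is given below. The data structure, the function names, and the numerical thresholds (one chip, 5 Hz) are illustrative assumptions only; the text merely states "less than one chip and a few hertz."

struct SignalCandidate {
    int    prn;
    bool   alreadyTracked;
    double elevationDeg;
    double codePhaseUncertaintyChips;   // derived from PVT plus ephemeris/almanac, if available
    double dopplerUncertaintyHz;
};

enum class AcqDecision { Skip, VectorAcquired, SearchList };

// Classify one signal: skip it, declare it acquired via vector acquisition,
// or put it on the list handed to the level-1 (FFT, cold-start) module;
// level-1 failures are afterwards passed to the level-2 (warm-start) module.
AcqDecision classify(const SignalCandidate& c, double elevationCutoffDeg)
{
    if (c.alreadyTracked || c.elevationDeg < elevationCutoffDeg)
        return AcqDecision::Skip;
    if (c.codePhaseUncertaintyChips < 1.0 && c.dopplerUncertaintyHz < 5.0)
        return AcqDecision::VectorAcquired;   // code phase and Doppler follow from the PVT
    return AcqDecision::SearchList;
}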
includes RAIM; it also informs the master receiver to stop tracking satellite signals that are identified to have gross errors. If no position at all can be determined for a certain number of epochs, the master receiver is completely reset to acquire all signals from scratch. All signals whose code pseudorange and Doppler can be successfully verified to match the PVT estimates are flagged as acceptable.
as the server and the other to act as the client. Multiple clients can connect to one
server.
versions of Figures 3.9, 3.10, and 3.11. A critical path can flow across different
threads at different times during a program’s execution. A serial application has a
single flow, which corresponds to the default critical path. A multithreaded applica-
tion has multiple flows. The critical path determines the total execution time of the
software; code along the critical path should be optimized. Optimizing code outside
the critical path does not reduce the software execution time.
It should be mentioned that thread profiling causes a computational overhead of several percent. The software execution flow thus differs slightly from that of a program run without profiling.
Two real-time configurations shall be investigated in the following section: the
first runs a multifrequency configuration on a multicore computer; the second in-
vestigates a single-frequency configuration on a single-core computer.
computer is perfectly loaded. If the concurrency level exceeds this number, the sys-
tem is overutilized, and if the number is less than the number of cores the system is
underutilized. During pure tracking (Figure 3.6), most of the time only one thread
is executed. Alternatively, the system may wait for IF samples, which is indicated as
a concurrency level of zero.
If signal acquisition is performed, the average concurrency level increases by
one, as shown in Figure 3.7. Both figures also demonstrate that the computer is
required to run four or more tasks simultaneously only in rare cases.
packet (if available) and start its preprocessing while the Receivers work on the old
packet. This is, for example, visible during time spans of 55.00 to 55.01 seconds
in Figure 3.9. Normally, however, there is no need for overlapping because the re-
ceivers finish before the next packet arrives. In Figures 3.9 and 3.10, one sees that
different receivers need different execution times. The C/A-code receiver (receiver 0)
needs the most time because it tracks the most signals. The spectrum analyzer (re-
ceiver 5) is activated only for every second packet.
When an acquisition is running, the computer spends fewer resources on tracking and signal processing; in some receivers, tracking is slightly delayed (see the time span 41.61–41.74 seconds of Figure 3.10).
Figure 3.10 Threading time line for the multifrequency GNSS receiver configuration running on four CPU cores (acquisition running).
Figure 3.11 Threading time line for the single-frequency GNSS receiver configuration running on one CPU core.
interface compared to Section 3.3.1.2. The time span covered by one packet of IF
samples is 91 ms.
The concurrency level is shown in Figure 3.8 for the threading time line of Figure 3.11. Most of the time the software needs to run two threads simultaneously, but only one can be executed. The system is overutilized. The threading time line in Figure 3.11 is also quite different from the ones of Section 3.3.1. It appears to be more chaotic because the computer switches between the threads. The computer itself has a processing load of nearly 100% when running the GNSS SDR with the thread profiler.
The thread time line in Figure 3.11 shows no visible structure in the IF sample
reception thread. The respective thread “threadFunc” is more or less continuously
active while receiving data from the USB port. Interestingly, IF sample packets are
usually processed in groups of two, as can be seen from the active time line of the
Master Receiver or the Receiver 0. One packet contains data for 91 ms and the Mas-
ter Receiver is active around five times within a period of 1 second. Signal tracking
almost completely pauses while an acquisition is running (time span 189.3–191.5
seconds of Figure 3.11). Immediately after acquisition, the signal processing runs
faster than in real time to recover the lost time.
The ipex software receiver can be configured to work as a GNSS reference station
and the algorithms and parameters required for this operation mode shall be out-
lined in the following sections.
A GNSS reference station is typically located at a fixed site and the antenna usu-
ally has a good field of view to track all GNSS satellites above the horizon. GNSS
measurements (code and carrier pseudorange) are produced with a rate of 1 Hz and
are used as correction data for roving GNSS receivers, to precisely determine the
position of the reference station itself or for other professional and scientific ap-
plications. A reference station is required to track all signals in view and to provide
measurements of highest accuracy and reliability.
the C/A-code on L1 is the acquisition signal for GPS. After it is acquired, code phase
and Doppler for L2 (or L5) can be calculated and a handover from L1 to the other
frequencies is performed.
The single coherent integration method is less memory-consuming than using
multiple coherent integrations and is usually sufficient to acquire rising GPS signals
as soon as the satellite has an elevation higher than 2–3°. Usually, the signal is
tracked continuously until the satellite disappears behind the horizon.
We found it useful to have a backup solution for GIOVE-A E5a that is used
when the handover from E1 is not working. The backup solution is realized as
warm-start parameters for the Q-channel of E5a (see Table 3.2). During the early
GIOVE-A tests, the code phase relationship between E1 and E5a was not totally
clear from open literature. It is not possible to do a handover if an unknown code
phase offset between E1 and E5a larger than 30m is present. The lack of ephemeris
and almanac data for GIOVE-A (remember, this was a test satellite) does not allow
for a warm start on E1. Only when E1 is already tracked can the Doppler search
range for E5a be limited.
chose the first option for simplicity. The BOC(1,1) signal on GIOVE-A E1 is tracked
using an S-curve shaping correlator (see Section 8.2.4) and the E5a signal is tracked
with an early-minus-late d = 0.66 correlator.
Tracking loop filters are summarized in Table 3.3, together with the coherent inte-
gration time. The GPS C/A carrier phase is tracked with an atan (one-quadrant) phase
discriminator, and the full carrier phase is derived by decoding the sign of the GPS
C/A NAV message preamble. For all other signals, an atan2 (four-quadrant) phase
discriminator is used that works only on the pilot component. At this writing, the
GPS L2 CM signal is broadcast as a pilot signal without any navigation message.
Sub-Nyquist sample rates are used to reduce the computational load (see Section
6.5). The front-end samples the IF signals with a rate of 40.96 MHz. All frequency
bands have a 3-dB bandwidth of around 13 MHz. For tracking, the sample rate is
reduced to 20.48 MHz. For GPS C/A tracking, the sample rate is further reduced to
10.24 MHz, if the C/N0 lies above 40 dBHz.
Figure 3.14 Code-minus-carrier and estimated C/N0 for GIOVE-A, January 1, 2009.
The code and carrier tracking performance is shown in Figure 3.13 and Figure
3.14 for one pass of GPS PRN7 and GIOVE-A. The code error is computed by first
subtracting the code pseudorange from the carrier pseudorange (both expressed in
meters). Then a fifth-order polynomial is fitted to the difference. The polynomial
models the ionospheric delays. The ionospheric delay is removed from the differ-
ence and the result is plotted as “code error” in both figures. The carrier phase is
totally cycle-slip free if data with an elevation larger than 7° is considered. The
figures also show the estimated C/N0.
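The code-error evaluation described above can be reproduced with a short routine such as the following sketch (illustrative only; the function name and interface are assumptions). It removes a fitted polynomial, representing the slowly varying ionospheric divergence, from the carrier-minus-code combination; the epoch times should be scaled (e.g., centered and normalized) before a fifth-order fit to keep the normal equations well conditioned.

#include <vector>

// Fit a polynomial of the given degree to y(t) by least squares (normal
// equations solved with Gaussian elimination) and return the residuals
// y - fit, i.e., the code error after removing the ionospheric trend.
std::vector<double> detrendPolynomial(const std::vector<double>& t,
                                      const std::vector<double>& y, int degree)
{
    const int n = degree + 1;
    std::vector<std::vector<double>> normalEq(n, std::vector<double>(n + 1, 0.0));
    for (std::size_t k = 0; k < t.size(); ++k) {
        std::vector<double> p(n, 1.0);
        for (int i = 1; i < n; ++i) p[i] = p[i - 1] * t[k];   // 1, t, t^2, ...
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) normalEq[i][j] += p[i] * p[j];
            normalEq[i][n] += p[i] * y[k];
        }
    }
    for (int i = 0; i < n; ++i) {                 // forward elimination
        for (int r = i + 1; r < n; ++r) {
            const double f = normalEq[r][i] / normalEq[i][i];
            for (int c = i; c <= n; ++c) normalEq[r][c] -= f * normalEq[i][c];
        }
    }
    std::vector<double> coeff(n, 0.0);            // back substitution
    for (int i = n - 1; i >= 0; --i) {
        double s = normalEq[i][n];
        for (int j = i + 1; j < n; ++j) s -= normalEq[i][j] * coeff[j];
        coeff[i] = s / normalEq[i][i];
    }
    std::vector<double> residual(t.size());
    for (std::size_t k = 0; k < t.size(); ++k) {
        double fit = 0.0, pk = 1.0;
        for (int i = 0; i < n; ++i) { fit += coeff[i] * pk; pk *= t[k]; }
        residual[k] = y[k] - fit;
    }
    return residual;
}

// Usage: codeError = detrendPolynomial(epochTime, carrierMinusCode, 5);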
Apart from L2, Figures 3.13 and 3.14 confirm the expected signal performance. GPS C/A code pseudoranges are of decimeter accuracy, and E5a has the highest accuracy due to its large bandwidth. For low elevations, all signals show strong C/N0 fluctuations caused by multipath effects. The antenna was mounted on top of a metal roof. For L2, code tracking errors of up to 2m occur and the
C /N0 value is reduced by the interference. The rather large code tracking errors
result from the reference signals not being recomputed frequently enough. The cor-
relation point wanders outside the linearity region and this degrades the code phase
estimation accuracy (see Section 4.3.2.10).
3.5 Discussion
Several possibilities for the data flow in a GNSS SDR have been discussed in this
chapter. For real-time processing (and in practice for most postprocessing configu-
rations), the immense data volume processed by a GNSS SDR constrains the signal
processing to work on only a limited portion of the IF sample data at a time (batch
processing). The most striking difference between a hardware and a software re-
ceiver is the capability and necessity for a software receiver to buffer IF samples.
A specific receiver architecture has been presented that was used to develop
the ipexSR as a multifrequency GNSS SDR. The architecture was carefully chosen
and not only influences the resulting software itself, but also relates to the software
development process. It is important that the architecture fits the development en-
vironment (here C++), the involved developing team members and their specific
capabilities. To some extent, the team requirements may have an even larger impact
on the architecture than the technical requirements.
Different options exist to run a GNSS SDR in real time. The ipexSR is implemented in a manner that allows its operation in real time (in a loose sense) under Windows (a non-real-time operating system). The software itself utilizes multiple processors or cores and is separated into several threads. A threading analysis shows that the receiver behaves very much like a real-time GNSS SDR in a strict sense. It reacts immediately after receiving the IF samples if (and most likely only if) there is a sufficient number of cores available and if the processing load stays clearly below 100% (e.g., around 50–60% at a maximum). In that case, acquisition and tracking can run in parallel. If the software gets heavily loaded to more than 70–80%, there may be, for example, tracking pauses of several seconds or other phenomena that should not be present in a real-time GNSS SDR.
References
[1] Institute of Embedded Systems InES, “IEEE1588 Enabled PCI Network Interface Card,”
http://ines.zhaw.ch/institut/products/ieee-1588-hardware/page.html, 2008.
[2] Trimble Navigation Ltd., “MS750, Dual Frequency RTK Receiver for Precise Dynamic
Positioning,” http://trl.trimble.com/docushare/dsweb/Get/Document-12640/ms750pp.pdf,
2007.
[3] van Graas, F., et al., "Comparison of Two Approaches for GNSS Receiver Algorithms: Batch Processing and Sequential Processing Considerations," Proc. 18th Int. Technical Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS) 2005, Long Beach, CA, September 13–16, 2005, pp. 200–211.
[4] Closas, P., et al., "Bayesian Direct Position Estimation," Proc. 21st Int. Technical Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS) 2008, Savannah, GA, September 16–19, 2008, pp. 183–190.
[5] Brown, A., P. Brown, and J. Griesbach, "GeoZigBee: A Wireless GPS Wristwatch Tracking Solution," Proc. 19th Int. Technical Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS) 2006, Fort Worth, TX, September 26–29, 2006, pp. 2883–2888.
[6] Anghileri, M., et al., "Performance Evaluation of a Multi-Frequency GPS/Galileo/SBAS Software Receiver," Proc. 20th Int. Technical Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS) 2007, Fort Worth, TX, September 25–28, 2007, pp. 2749–2761.
[7] Stöber, C., et al., "Implementing Real-Time Signal Monitoring within a GNSS Software Receiver," Proc. European Navigation Conference (ENC-GNSS) 2008, Toulouse, France, April 22–25, 2008.
[8] Intel Corp., “Thread Profiler 3.1 Build:0.25466,” http://www.intel.com, 2007.
Chapter 4
Signal Estimation
This chapter will introduce signal-estimation techniques that are used by a naviga-
tion receiver. The basic goal of the receiver is to obtain an estimate of its position;
this will be optimized with respect to application-specific criteria. One option is to
look for unbiased estimators having minimum variances. Another option could be
to minimize the mean-squared error if a priori stochastic information on the dis-
tribution of the true position (e.g., a user-motion model) or of any related quantity
is available. The first approach is called nonrandom parameter estimation because
the position is regarded as an unknown but otherwise deterministic quantity. In a
GNSS receiver, for example, pseudorange estimation (an intermediate step before
position estimation) is usually treated as a nonrandom parameter-estimation prob-
lem. The second approach—Bayesian parameter estimation—treats the position as
a random variable. Kalman filtering, being a Bayesian technique, is commonly used
to estimate user positions in a navigation receiver.
The chapter starts with nonrandom parameter-estimation techniques, which are regarded in this context to be more fundamental, simply because no stochastic models are needed. Furthermore, the nonrandom parameter-estimation approach allows the use of powerful analytical tools, like the Cramér–Rao lower bound (CRLB). The described estimators try to achieve this lower bound as closely as possible. Nonrandom parameter estimation is especially useful if GNSS signal parameters such as code phase, Doppler frequency, or amplitude are considered, because
no (or only very crude) stochastic models are available for them. Bayesian estima-
tion (or more specifically, a Kalman filter) is a suitable approach if the estimated
parameters are the user position or trajectory. In that case, better stochastic models
are available. In other words, it is possible to provide useful stochastic information
for the movement of a vehicle, but range-measurement errors from the vehicle to
a satellite, including multipath, atmospheric delays or satellite orbit/clock errors,
may require sophisticated modeling that is too complex for practical use within the
signal-processing stage.
Reviewing the algorithmic design of a navigation receiver is of special importance for an SDR because the SDR concept potentially provides the possibility to redesign well-known navigation signal-processing techniques and to implement more complex algorithms than in hardware receivers.
Before starting with the description of the different estimation techniques, it is use-
ful to look at the parameters to be estimated in a navigation receiver. Three classes
of parameters will be identified: position, low-rate pseudorange data, and high-rate
pseudorange data. They are summarized in Table 4.1 and will be explained below.
Nuisance parameters are introduced for formal reasons and extend the useful pa-
rameters to have a complete model for the deterministic part of the received signal.
All parameters can either be constant during the signal-estimation process or
be time-dependent. In the latter case, we assume that the time dependence can be
described through a finite number of parameters. For example, if a moving user is
considered, we assume that his or her trajectory can be modeled sufficiently well by
a spline or similar curve, which is determined by a finite number of parameters.
estimated values. Typically, the nuisance parameters are a subset of the high-rate
pseudorange data. Formally, the nuisance parameters can be estimated along with
the other parameters.
For certain applications, the carrier phase and the signal amplitude can be
treated as nuisance parameters. Another example of nuisance parameters includes
signal parameters of possibly present multipath signals. However, for both exam-
ples it may be reasonable to consider them as useful parameters for a different set
of applications, especially if correlations between them and the position estimates
are important.
The nuisance parameters ξ are treated as random variables, described by a
probability density function p(ξ). This implies that any parameter for which we are
not able to give a probability density function must be treated as a useful parameter,
even if we are not interested in its estimated value.
In each of these three cases, the same signal model is meant; it is just expressed
as a function of different parameters. The nuisance parameters ξ are identical for
all three cases.
For the useful parameters, we assume that they represent different ways of
expressing the user’s position. Therefore, high- or low-rate pseudorange data is
uniquely defined by the position:
q = q(p(x)) (4.2)
Equation (4.1), together with (4.2), states that the signal does not depend on the position directly, but only through the pseudorange data q (respectively, through p and q). The vector q is typically of a higher dimension than p, which is itself of a
higher dimension than x. Fulfilling (4.2) in a real-world situation can be compli-
cated; many modeling efforts are necessary to account for user dynamics, multipath,
atmospheric effects, transmitter position/clock errors, and other effects. Neverthe-
less, for the theoretical discussion we assume that the condition (4.2)—called suf-
ficient modeling—is fulfilled.
in the sense that there should be no estimator having a smaller variance matrix. For
a definition on how to compare two matrices, see (4.15). Consequently, the position
estimator shall be a minimum variance unbiased estimator (MVUE). It is known
that an MVUE is, in general, not optimal if the MSE (or RMS) value

$$\mathrm{MSE}\big(\hat{x}(S), x\big) = \big\langle (\hat{x}(S) - x)^{*}(\hat{x}(S) - x) \big\rangle_{S,x} \qquad (4.6)$$
is considered; that is, biased estimators may exist whose MSE is below the MSE of
an MVUE (e.g., noise-variance estimation as described in IV.D.2 of [1] illustrates
this case). Minimum MSE estimators are Bayesian estimators; they require a priori
knowledge on the distribution of x and are discussed in Section 4.5.
Additionally, it should be noted that unbiased estimators are important in high-precision navigation problems (see Table 4.2). There, the variance can, in general, be reduced by collecting more data, or by using a longer time span to determine the position. Another option is to increase the bandwidth of the received navigation signal (e.g., change from a low-bandwidth front end to a high-bandwidth front end) and thus obtain more independent samples within the same observation period.
The signal samples comprise a deterministic part, $r_\mu$, and an additive stochastic component $N_\mu$,

$$S_\mu\big|_{x,\,\xi=\mathrm{const.}} = r_\mu(x,\xi) + N_\mu, \qquad N_\mu \sim N(0,1) \qquad (4.8)$$

$$S_\mu\big|_{x=\mathrm{const.}} = r_\mu(x,\xi) + N_\mu, \qquad N_\mu \sim N(0,1),\ \ \xi \sim p(\xi) \qquad (4.9)$$

where N(0,1) denotes a Gaussian distribution with zero mean and unit variance, such that (1.16) holds.
A Gaussian noise model is considered because only in that case can comparably simple analytical expressions for the CRLB be derived. The evaluation of the CRLB
in non-Gaussian noise is cumbersome and, in most cases, analytically impossible
[2]. Furthermore, thermal noise is well-described by a Gaussian model. A detailed
discussion of non-Gaussian noise occurring during signal quantization is given in
Section 6.1.
$$p_x(s) = \frac{1}{(2\pi)^L} \exp\left\{ -\frac{1}{2}\sum_{\mu=1}^{L} \big| s_\mu - r_\mu(x) \big|^2 \right\} \qquad (4.11)$$
$$\frac{\partial}{\partial x_i} \log p_x(S) = \frac{1}{2}\sum_{\mu}\left[ (S_\mu - r_\mu(x))^{*}\, \frac{\partial r_\mu(x)}{\partial x_i} + (S_\mu - r_\mu(x))\, \frac{\partial r_\mu^{*}(x)}{\partial x_i} \right]$$
$$= \sum_{\mu} \mathrm{Re}\left\{ (S_\mu - r_\mu(x))^{*}\, \frac{\partial r_\mu(x)}{\partial x_i} \right\} = \sum_{\mu} \mathrm{Re}\left\{ N_\mu^{*}\, \frac{\partial r_\mu(x)}{\partial x_i} \right\} \qquad (4.13)$$
$$I_{x;i,j} = \left\langle \sum_{\mu} \mathrm{Re}\left\{ N_\mu^{*}\, \frac{\partial r_\mu(x)}{\partial x_i}\right\} \sum_{\nu} \mathrm{Re}\left\{ N_\nu^{*}\, \frac{\partial r_\nu(x)}{\partial x_j}\right\} \right\rangle_{N} = \sum_{\mu} \mathrm{Re}\left\{ \frac{\partial r_\mu^{*}(x)}{\partial x_i}\, \frac{\partial r_\mu(x)}{\partial x_j} \right\} \qquad (4.14)$$

$$\mathrm{var}\big\langle \hat{x}(S)\big\rangle_N = \big\langle (\hat{x}(S)-x)(\hat{x}(S)-x)^{*} \big\rangle_N \ \geq\ I_x^{-1} = \mathrm{CRLB}(x) \qquad (4.15)$$

where the inequality is to be understood in the sense that the matrix

$$D = \mathrm{var}\big\langle \hat{x}(S)\big\rangle_N - I_x^{-1} \qquad (4.16)$$
is nonnegative definite (i.e., all the eigenvalues of D are greater than or equal to
zero).
Any unbiased estimator that achieves the CRLB is called an efficient estimator.
An MVUE is not necessarily efficient, because "minimum" does not mean that equality holds in the Cramér–Rao inequality (4.15).
The equality to the CRLB of an unbiased estimate $\hat{x}(S)$ is achieved if and only if

$$\hat{x}(s) - x = I_x^{-1}\, \frac{\partial \log p_x(s)}{\partial x} \qquad (4.17)$$

at least somewhere around x; see Theorem 3.4 from the book by Porat [4].
$$p_{x,\xi}(s) = \frac{1}{(2\pi)^L}\exp\left\{-\frac{1}{2}\sum_{\mu=1}^{L}\big|s_\mu - r_\mu(x,\xi)\big|^2\right\} \qquad (4.18)$$

$$p_x(s) = \left\langle \frac{1}{(2\pi)^L}\exp\left\{-\frac{1}{2}\sum_{\mu=1}^{L}\big|s_\mu - r_\mu(x,\xi)\big|^2\right\}\right\rangle_{\xi} = \int \frac{1}{(2\pi)^L}\exp\left\{-\frac{1}{2}\sum_{\mu=1}^{L}\big|s_\mu - r_\mu(x,\xi)\big|^2\right\}\, p(\xi)\, d\xi \qquad (4.19)$$
where the conventional Fisher information matrix $I_{x;i,j}(\xi)$ is given by (4.12) using
fixed values for the nuisance parameters ξ. A particularly simple case occurs if the
conventional Fisher information matrix does not depend on ξ, in which case the
MCRLB equals the CRLB.
$$\mathrm{var}\big\langle \hat{x}(S)\big\rangle_{N,\xi} = \big\langle (\hat{x}(S)-x)(\hat{x}(S)-x)^{*}\big\rangle_{N,\xi} \ \geq\ \tilde{I}_x^{-1} = \mathrm{MCRLB}(x) \qquad (4.21)$$
It should be noted that the CRLB of Sections 4.2.1 and 4.2.2 are not identical;
only in the second case are nuisance parameters considered at all.
The modified Fisher information matrix satisfies Fisher’s five properties and
thus qualifies as an information quantity (see page 60 in the book by Porat [4]).
Fisher’s idea was to express the information carried by the density px(S) about the
parameters x in quantitative terms. Most important, the larger the sensitivity of
px(S) to changes in x, the larger should be the information.
A necessary and sufficient condition for equality in (4.21) to hold for an unbiased estimate $\hat{x}(S)$ is
Simplified expressions for the calculation of the first element of the inverse com-
bined Fisher information matrix are given by Moeneclaey [7].
The ACRLB is generally stricter than the MCRLB:
The JCRLB is larger than the ACRLB (ACRLB ≤ JCRLB). When the combined Fisher information matrix is not a function of ξ, the ACRLB equals the JCRLB (and also the CRLB). When the combined Fisher information matrix does depend on ξ, the ACRLB is below the average (over ξ) of the JCRLB.
The JCRLB assumes a specific estimation strategy (the joint estimation of x and ξ), which is indicated by the subscript J in (4.27). The JCRLB is less general than the true,
asymptotic, or modified CRLB. The joint estimation does not make use of the a priori
probability density function p(ξ). In the examples of Moeneclaey [7], not using this
information yields suboptimal estimators, especially for low signal-to-noise ratios.
4.2.2.4 Discussion
With the presence of nuisance parameters, the computation of bounds for the use-
ful parameters becomes analytically difficult because of the complex evaluation
of the true CRLB based on (4.19). However, it is important to recognize that the
provided probability density function for the nuisance parameters represents use-
ful information that should enter the computation of any bound or the estimator
design. Ignoring this information and treating the nuisance parameters as nonrandom parameters may incorrectly weight the influence of the nuisance parameters and produce bounds or estimators that diverge from the true CRLB.
The presented bounds are related to each other in the sense
and
In the limit of infinitely high signal-to-noise ratio, the ACRLB equals the CRLB.
The JCRLB equals the CRLB if only a subset of all possible estimators are consid-
ered (the jointly estimating ones).
In this section, we again assume that either x completely describes the received
signal sample distribution or that the nuisance parameters have fixed known values.
Both assumptions imply that (4.8) is valid.
For sufficiently smooth probability density functions, a necessary condition for
the ML estimate is the likelihood equation:
In general, the MLE (4.31) does not have the desired optimal properties (4.4)
and (4.5). Thus, it is a priori not clear if the MLE is an MVUE. However, the fol-
lowing two facts make it a good candidate: First, finding the positions that make
the observations most likely is a legitimate criterion on its own [1]. Second, within
the limit of an infinite number of samples, it can be shown, under very general as-
sumptions, that the MLE is consistent (i.e., that the estimated position converges
in probability to the true value and thus becomes unbiased, see proposition IV.E.1
in the book by Poor [1]). Furthermore, under some regularity conditions for the derivatives of $r_\mu(x)$ with respect to x, the variance of the estimated parameters approaches the inverse of Fisher's information matrix (see proposition IV.E.2 of [1]) and the estimated parameter's probability density function approaches a Gaussian density; that is, $\hat{x} \sim N(x, I_x^{-1})$. This property is called asymptotic normality of the
MLE. A high number of samples has a similar effect to a high signal-to-noise ratio,
thus the statement can be rephrased in the sense that, for signal-in-noise problems
like (4.7), the ML estimates are Gaussian and achieve the CRLB for high signal-to-
noise ratios (see Example 7.6 in the book by Lehmann [3]).
For a finite number of samples and an arbitrary (not high) signal-to-noise ratio,
no strict mathematical theorem is available that would state that the MLE is unbi-
ased or that its variance is optimal.
Consistency and asymptotic normality also hold if the admissible values for x are limited to a subset of $\mathbb{R}^n$. It is, however, required that the solutions of the likelihood equation (4.32) converge to an isolated root; cases where the likelihood equation has multiple roots may require a more careful analysis.
In the case of Gaussian noise (4.11), the likelihood equation (4.32) is written as

$$\sum_{\mu} \mathrm{Re}\left\{ \big(s_\mu - r_\mu(\hat{x})\big)^{*} \left.\frac{\partial r_\mu(x)}{\partial x_j}\right|_{x=\hat{x}} \right\} = 0 \qquad (4.33)$$
which is equal to the least-squares (LSQ) estimate of the position x based on the
measured samples s. It minimizes the squared differences of the signal model minus
the received samples in the observation space. It should be noted that the solution
of the least-squares equations (4.33) retains, in the limit of an infinite number of
samples or high signal-to-noise ratio, the properties of consistency and asymptotic
normality (in the same sense as for the MLE), even if the noise is non-Gaussian
distributed (see proposition IV.E.3 in the book by Poor [1]). However, in the non-
Gaussian case, the CRLB is not given by an expression like (4.14) and better esti-
mators than the LSQ may exist. In other words, for non-Gaussian noise, the LSQ
estimated parameter distribution approaches an unbiased Gaussian distribution
whose covariance matrix is given by (4.14); for the non-Gaussian case, (4.14) does,
in general, not define the Fisher information matrix. It should also be pointed out
that, for the general case of a finite number of samples and a finite signal-to-noise
ratio, (4.33) may have no solution at all.
Equation (4.31) can, in principle, be used to realize a special type of GNSS
receiver that directly solves this equation by, for example, a brute-force maximiza-
tion routine. This could be done in a special operation mode, called snapshot mode.
A number of samples are collected (e.g., several tens of milliseconds) and, starting
from an approximate position value, (4.31) is iteratively solved. However, to the
author’s knowledge, no such receiver has ever been realized, but results with simu-
lated data are presented in works by Closas [8]. Snapshot receivers exist [9], but
they determine the pseudorange for single signals first and do not directly relate the
received signal samples to the position [10].
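The idea can be illustrated by the following sketch of a brute-force snapshot search (all names and interfaces are hypothetical). For each candidate position/clock state, the sum of all replica signals is generated and matched against the stored samples; the metric used here is a simplified coherent match rather than the full likelihood of (4.31), which would additionally account for the replica energy and the unknown amplitudes.

#include <cmath>
#include <complex>
#include <functional>
#include <vector>

struct State { double x, y, z, clockBias; };   // candidate position and receiver clock bias

// Brute-force direct-position search over a grid of candidate states.
// replica(state) is assumed to return the summed deterministic signal
// r_mu predicted for all visible satellites at that candidate state.
State snapshotSearch(const std::vector<std::complex<double>>& samples,
                     const std::function<std::vector<std::complex<double>>(const State&)>& replica,
                     const std::vector<State>& candidates)
{
    State best = candidates.front();
    double bestMetric = -1.0;
    for (const State& cand : candidates) {
        const std::vector<std::complex<double>> r = replica(cand);
        std::complex<double> match(0.0, 0.0);
        for (std::size_t mu = 0; mu < samples.size(); ++mu)
            match += samples[mu] * std::conj(r[mu]);   // coherent match with the full replica sum
        if (std::abs(match) > bestMetric) {
            bestMetric = std::abs(match);
            best = cand;                               // starting point for iterative refinement
        }
    }
    return best;
}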
Directly estimating the position from the samples provides a number of theo-
retical advantages because the number of estimated parameters is minimal since
no intermediate pseudorange parameters are introduced. Optimum handling of
signal interference caused by multiple transmitted signals is ensured because the
ML principle tries to match the sum of all replica signals to the received signal
samples. In contrast, a typical GNSS receiver ignores the influence of other signals
on parameter estimates of a specific signal. It may even happen that the likelihood
function has a clear and unique peak only if the likelihood function is expressed
as a function of x, but not if each signal is considered separately. In this sense, the
single-step estimation optimizes the signal-to-noise ratio.
$$\sum_{\mu \in L_{a,i}} \mathrm{Re}\left\{ \big(s_\mu - r_\mu(\hat{q})\big)^{*} \left.\frac{\partial r_{a,\mu}(q_{a,i})}{\partial q_{a,i}}\right|_{q_{a,i}=\hat{q}_{a,i}} \right\} = 0 \qquad (4.35)$$
With a similar argument as given for the LSQ position estimator (4.33), it can
be argued that, in the limit of an infinite number of samples or high signal-to-
noise ratio, the LSQ high-rate pseudorange data estimate is optimal [i.e., it is an MVUE (Gaussian noise assumed)]. In this case, the probability density function of the high-rate pseudorange data estimate approaches the Gaussian multivariate density $N(q_{a,i}, I_q^{-1})$ [1], where $I_q$ denotes the Fisher information matrix for the high-rate pseudorange data q.
To obtain the position estimate from the high-rate pseudorange data via an LSQ
adjustment, we start from the following observation equation

$$\hat{q} = q(x_0) + \left.\frac{\partial q}{\partial x}\right|_{x=x_0} \Delta x + v_q \qquad (4.36)$$

with the following stochastic model for the high-rate pseudorange measurement errors $v_q$:

$$\langle v_q \rangle_N = 0, \qquad \mathrm{var}\langle v_q\rangle_N = \Sigma_q \qquad (4.37)$$
It should be noted that the LSQ estimate does not require that the distribution
for vq is Gaussian. The LSQ is also the best linear unbiased estimator for non-
Gaussian distributions.
If the high-rate pseudorange estimates are optimal, then $\Sigma_q = I_q^{-1}$.
The symbol

$$\left(\frac{\partial q}{\partial x}\right)_{i,j} = \left.\frac{\partial q_i}{\partial x_j}\right|_{x=x_0} \qquad (4.38)$$
denotes the design matrix. Note that the linear approximation in (4.36) needs to be sufficient and the linearization point $x_0$ needs to be sufficiently near the true value x. The least-squares adjustment of the observations q with respect to the improvements in the parameters $\Delta x$ yields the best linear unbiased estimator (see [11]),
(see [11]),
q -1
Dxˆ = J -1 Sq (qˆ - q(x 0 )) (4.39)
x
T
q -1 q (4.40)
J= ÷ Sq
x x
4.2 Nonrandom Parameter Estimation 71
By employing this two-step procedure (first estimating q̂ and then Dxˆ ), we fi-
nally obtain an estimate for the position as
xˆ = x 0 + Dxˆ (4.41)
xˆ N
= x, cov xˆ N
= J -1 (4.42)
qk q
Ji , j = (Sq )-1k,l l (4.43)
k,l xi xj
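A compact transcription of this single weighted LSQ step, (4.39) through (4.41), is sketched below. For brevity, a diagonal weight vector (the inverse pseudorange variances) stands in for the full $\Sigma_q^{-1}$; as discussed later, the full covariance matrix should be used when it is available. All names are illustrative.

#include <vector>

// One linearized LSQ step: solve J * dx = A^T W (q_obs - q_pred), where
// A = dq/dx is the design matrix at x0, W a diagonal weight vector, and
// J = A^T W A the normal matrix; the new estimate is x0 + dx, cf. (4.39)-(4.41).
std::vector<double> lsqStep(const std::vector<std::vector<double>>& A,
                            const std::vector<double>& w,
                            const std::vector<double>& residual)   // q_obs - q(x0)
{
    const std::size_t m = A.size(), n = A.front().size();
    std::vector<std::vector<double>> normalEq(n, std::vector<double>(n + 1, 0.0));
    for (std::size_t k = 0; k < m; ++k)
        for (std::size_t i = 0; i < n; ++i) {
            for (std::size_t j = 0; j < n; ++j) normalEq[i][j] += A[k][i] * w[k] * A[k][j];
            normalEq[i][n] += A[k][i] * w[k] * residual[k];
        }
    for (std::size_t i = 0; i < n; ++i)                       // Gaussian elimination
        for (std::size_t r = i + 1; r < n; ++r) {
            const double f = normalEq[r][i] / normalEq[i][i];
            for (std::size_t c = i; c <= n; ++c) normalEq[r][c] -= f * normalEq[i][c];
        }
    std::vector<double> dx(n, 0.0);
    for (int i = static_cast<int>(n) - 1; i >= 0; --i) {      // back substitution
        double s = normalEq[i][n];
        for (std::size_t j = i + 1; j < n; ++j) s -= normalEq[i][j] * dx[j];
        dx[i] = s / normalEq[i][i];
    }
    return dx;   // improvement Delta x
}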
4.2.4.3 Discussion
This section has discussed different strategies to obtain position estimates and has
introduced several bounds on them. It has been shown that, under certain assump-
tions, the single-step estimation of the position and the cascaded procedure potentially achieve the same "performance" (i.e., the CRLB is achieved for all approaches). If
one of the assumptions is not valid, the single-step estimation might result in a bet-
ter positioning solution compared to the cascaded procedure. The question arises:
which practical cases may cause a violation of those assumptions?
To answer the question, we recall that for both cases (i.e., single-step and cas-
caded procedure) the signal model (4.1) must be correct. It relates the signal samples
to the position parameters r = r(x). For example, if multiple propagation paths are
present, they must be included in the model to obtain an unbiased solution. In that
sense, the direct ML approach is not superior to the cascaded approach.
More interesting is the case when the assumption of sufficient modeling (4.2)
cannot be achieved. This is the case that occurs if the signal samples are not a
function of delay, Doppler frequency, and phase, but show a more complex de-
pendence on the position. This is, for example, the case when the propagation
of electromagnetic waves emitted by the GNSS transmitters (satellites) cannot be
described by geometric optics and when complex wave-phenomena occur. A prac-
tical example could be the creeping of GNSS signals along an aircraft body
[12]. But generally, in that case, the direct solution of (4.33) seems to be extremely
demanding.
To achieve the optimum positioning result with the cascaded procedure, it is
necessary that the full covariance matrices are passed from the high-rate pseudorange estimation to the position estimation and that the matrices are used in the LSQ
adjustments. This condition, however, can often not be fulfilled in practice.
It has been assumed that the high-rate pseudorange estimators are MVUEs,
which is achieved by ML estimation under the assumption of a high signal-to-noise
ratio or a large number of samples. Additionally, in Section 4.7.1 we will show that,
for uncorrelated and uniformly distributed carrier phases, a modified MLE scheme
(a single LSQ step under the assumption that the linearity conditions of Section
4.3.2.10 are fulfilled) achieves the CRLB for arbitrary signal-to-noise ratio values.
If the signal model considered here can be linearized, the MLE becomes an MVUE (even for low signal-to-noise ratio). Furthermore, Bayesian techniques
(e.g., a Kalman filter) can be used easily after linearization. The prerequisite
for linearization is that good approximate values for the parameters are avail-
able. If a Kalman filter is used, the approximate values are the predicted values.
Then the prerequisite for linearization is that the predicted parameter values—
especially the carrier phase if a coherent Kalman filter is used—are precise (see
Section 4.5.4).
The discussion of what happens if the MLE of (4.31) or of (4.35) is not an MVUE is difficult. Looking for an alternative to an MLE is not easy; in fact, it is not always guaranteed that an MVUE (or any better estimator than the MLE) exists. For example, the possible strategy to construct an MVUE via sufficient statistics and the Rao–Blackwell–Lehmann–Scheffé theorem is of little practical value [3]. In fact,
the vast majority of practical estimators are MLEs [3].
If no optimum estimation scheme can be provided, we may speculate that, in
this case, the direct solution of the position via (4.33) is superior to the cascaded
procedure; this is due to the fewer degrees of freedom involved, which thereby in-
creases the effective signal-to-noise ratio.
This section contains a further analysis of the LSQ estimators for high-rate pseudo-
range data that have been defined by (4.35) in Section 4.2.4.1. Estimators for code
phase, Doppler, carrier phase, and signal power will be derived. The effect of the
unknown carrier phase—causing the squaring loss—will be discussed. The impor-
tance of limiting the admissible range of the estimated values will be emphasized,
and various solutions (single-channel tracking, vector tracking, and reiteration) will
be compared. After a general introduction, the section concentrates on the case
where only one line-of-sight signal is received; in Section 4.3.5, the results will be
discussed in further detail for one line-of-sight signal and multiple propagation
paths.
It has already been demonstrated that the ML solution can be treated un-
der the assumption of Gaussian noise as a complex LSQ adjustment problem
whose solution is described in Appendix A.1. As discussed in Section 4.2.3, we
expect the LSQ estimate to be optimal if the signal-to-noise ratio is high or if
a sufficiently large number of samples is considered. If neither condition is ful-
filled or if the noise is non-Gaussian, the LSQ estimate is still defined via (4.35)
but has to be treated as an engineering solution, which cannot be expected
to be optimal without further analysis. This is especially true because the sig-
nal model defined in Chapter 1 is highly nonlinear. Only if the linearization is
carried out sufficiently near or at the true parameter values will the LSQ so-
lution achieve the CRLB due to (4.17). The linearization will be discussed in
Section 4.3.2.10.
$$r_\mu = \sum_{m=1}^{M} r_{m;\mu} = \sum_{m=1}^{M} a_m\, c_m(t_\mu - \tau_m)\exp\{i(\omega_m t_\mu - \varphi_m)\} \qquad (4.44)$$

therefore

$$r_\mu = \sum_{m=1}^{M} a_m\, c_m(t_\mu-\tau_m)\exp\{i\omega_m(t_\mu - t_{\mathrm{mid}})\} = \sum_{m=1}^{M} a_m\, c_m(t_\mu-\tau_m)\exp\{i\omega_m t'_\mu\} \qquad (4.46)$$

with

$$t'_\mu = t_\mu - t_{\mathrm{mid}}, \qquad t_{\mathrm{mid}} = \frac{t_1 + t_L}{2} \qquad (4.47)$$

$$q = \big( a_1\ \ \tau_1\ \ \omega_1\ \ \dots\ \ a_M\ \ \tau_M\ \ \omega_M \big)^T \qquad (4.48)$$
It should be noted that, for the moment, we assume that the number of received signals M is known a priori; that is, however, not true in general because the number of multipath signals, in particular, is difficult to determine. Estimation of M will be touched upon later in Section 8.3.7. Furthermore, it should be mentioned that, for the moment, we treat all parameters as useful parameters (see Section 4.1).
Analyzing (4.35), we obtain for the derivatives the following set of equations:

$$\frac{\partial r_\mu}{\partial a_m} = \frac{\partial r_{m;\mu}}{\partial a_m} = c_m(t_\mu - \tau_m)\exp\{i\omega_m t'_\mu\}$$
$$\frac{\partial r_\mu}{\partial \tau_m} = \frac{\partial r_{m;\mu}}{\partial \tau_m} = -a_m\, c'_m(t_\mu - \tau_m)\exp\{i\omega_m t'_\mu\} \qquad (4.49)$$
$$\frac{\partial r_\mu}{\partial \omega_m} = \frac{\partial r_{m;\mu}}{\partial \omega_m} = i\, t'_\mu\, a_m\, c_m(t_\mu - \tau_m)\exp\{i\omega_m t'_\mu\}$$
These derivatives are used to set up the design matrix. Please note that the apostrophe denotes the derivative of a function, as in $c'_m(\ldots)$, and it may also denote the separate symbol $t'_\mu$, as in (4.58). The first index μ (the row index) of the design matrix is used to index the samples; the second index m (the column index) is used to enumerate the vector of unknowns.
To perform the LSQ adjustment, it should be noted that the vector of unknowns
is composed of the high-rate pseudorange parameters q. In the following, only the
symbol q is used for the unknowns (and not x, as in Appendix A.1).
The design matrix takes the form
The observations are given by the received signal samples S of (1.15), which are
modeled as complex-valued random variables, each (the real and imaginary compo-
nent) having unit variance as defined in (1.16). The samples are composed of the de-
terministic part $r_\mu$ and the thermal noise N, which is expressed in vector notation as

$$S = r + N, \qquad S_\mu = r_\mu + N_\mu \qquad (4.52)$$
The covariance matrix for the composite complex-valued observations is (see Section 1.7)

$$Q_{V;\mu,\nu} = \big\langle (S_\mu - \langle S_\mu\rangle_N)(S_\nu - \langle S_\nu\rangle_N)^{*} \big\rangle_N = \big\langle N_\mu N_\nu^{*} \big\rangle_N = 2\delta_{\mu,\nu} \qquad (4.53)$$

and the normal matrix I takes the form

$$I = A^{*} Q_V^{-1} A = \frac{1}{2}\begin{pmatrix} A_1^{*}A_1 & A_1^{*}A_2 & \cdots \\ A_2^{*}A_1 & A_2^{*}A_2 & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \qquad (4.54)$$
The normal matrix has the same form as the Fisher information matrix because
Gaussian noise is assumed. Therefore, we choose the symbol I for the normal ma-
trix (and not N); this also distinguishes it from the noise symbol.
Because of the nonlinear signal model (4.46), the LSQ problem has to be solved iteratively. The iterative solution starts with calculating the derivatives of (4.51) at a first linearization point $q_0$ that shall be denoted as

$$q_0 = \big( a_{1,0}\ \ \tau_{1,0}\ \ \omega_{1,0}\ \ \dots\ \ a_{M,0}\ \ \tau_{M,0}\ \ \omega_{M,0} \big)^T \qquad (4.55)$$
The observation equations are linear in the amplitude, such that no iteration is
required for amplitude determination alone. However, iterations may be required
in Doppler and code-phase dimensions.
For a single iteration, the estimated corrections of the high-rate pseudorange data read as

$$\Delta\hat{q} = I^{-1}A^{*} Q_V^{-1}\big(S - r(q_0)\big) = \frac{1}{2}\, I^{-1} A^{*} \big(S - r(q_0)\big) \qquad (4.56)$$
As mentioned in Appendix A.1, the estimates are complex-valued random variables; after the computation, code phase and Doppler are constrained to be real-valued. The complex-valued estimated amplitudes shall be separated into the real-valued amplitude and the estimated carrier phase after the adjustment. A real-valued signal model might seem to be better because it avoids the use of complex-valued signal parameters, but Appendix A.1.4 shows that the separation can be performed very easily if the parameters are uncorrelated.
Equation (4.56) represents a signal-processing technique that fully accounts for
signal correlations. Replica signals given by the term r(q0) are subtracted from the
received samples before correlating them with the reference signals given by (4.49).
This is equal to a parallel interference cancellation scheme [13]. Via the nondiago-
nal form of I, (4.56) accounts for correlations between the estimated parameters for
different transmitted signals.
For a further evaluation of this LSQ problem, we need to make specific assump-
tions on the received signals. In Section 4.3.2, a detailed analysis for the case of
M = 1 will be presented.
$$2I_{1,2} = 2I_{2,1} = \sum_{\mu=1}^{L}\left(\frac{\partial r_\mu}{\partial a_1}\right)^{\!*}\frac{\partial r_\mu}{\partial \tau_1} = -a_{1,0}\sum_{\mu=1}^{L} c_1(t_\mu-\tau_{1,0})\,c'_1(t_\mu-\tau_{1,0}) \approx 0$$
$$2I_{1,3} = 2I_{3,1} = \sum_{\mu=1}^{L}\left(\frac{\partial r_\mu}{\partial a_1}\right)^{\!*}\frac{\partial r_\mu}{\partial \omega_1} = i\,a_{1,0}\sum_{\mu=1}^{L} t'_\mu\, c_1^2(t_\mu-\tau_{1,0}) \approx 0$$
$$2I_{2,2} = \sum_{\mu=1}^{L}\left(\frac{\partial r_\mu}{\partial \tau_1}\right)^{\!*}\frac{\partial r_\mu}{\partial \tau_1} = |a_{1,0}|^2\sum_{\mu=1}^{L} c'^{\,2}_1(t_\mu-\tau_{1,0}) \approx |a_{1,0}|^2\,L\,R_{c'_1,c'_1}(0) = -|a_{1,0}|^2\,L\,\ddot{R}_{c_1,c_1}(0) \qquad (4.59)$$
$$2I_{3,2} = 2I_{2,3} = \sum_{\mu=1}^{L}\left(\frac{\partial r_\mu}{\partial \omega_1}\right)^{\!*}\frac{\partial r_\mu}{\partial \tau_1} = i\,|a_{1,0}|^2\sum_{\mu=1}^{L} t'_\mu\, c_1(t_\mu-\tau_{1,0})\,c'_1(t_\mu-\tau_{1,0}) \approx 0$$
$$2I_{3,3} = \sum_{\mu=1}^{L}\left(\frac{\partial r_\mu}{\partial \omega_1}\right)^{\!*}\frac{\partial r_\mu}{\partial \omega_1} = |a_{1,0}|^2\sum_{\mu=1}^{L} (t'_\mu)^2\, c_1^2(t_\mu-\tau_{1,0}) \approx |a_{1,0}|^2\,\chi_{\mathrm{freq}}\,\frac{L^3 - L}{12 f_s^2} \approx \frac{\chi_{\mathrm{freq}}\,L^3\,|a_{1,0}|^2}{12 f_s^2}$$
or in matrix notation as

$$2I \approx L\begin{pmatrix} 1 & 0 & 0 \\ 0 & -\ddot{R}_{c_1,c_1}(0)\,|a_{1,0}|^2 & 0 \\ 0 & 0 & \chi_{\mathrm{freq}}\,|a_{1,0}|^2\,L^2/(12 f_s^2)\end{pmatrix} \qquad (4.60)$$

$$I^{-1} \approx \begin{pmatrix} \dfrac{2}{L} & 0 & 0 \\[1ex] 0 & \dfrac{2}{-L\,\ddot{R}_{c_1,c_1}(0)\,|a_{1,0}|^2} & 0 \\[1ex] 0 & 0 & \dfrac{24 f_s^2}{\chi_{\mathrm{freq}}\,|a_{1,0}|^2\,L^3}\end{pmatrix} \qquad (4.61)$$
Because I is a diagonal matrix, amplitude, code phase, and frequency estimates are
independent of each other. This is a consequence of the assumptions of Section 1.8.
From this point forward, all approximate equal signs will be replaced by
equal signs ‘=’ for the sake of simplicity.
$$\Delta\hat{a}_{1,0} = \frac{1}{L}\sum_{\mu=1}^{L}\left(\frac{\partial r_\mu}{\partial a_1}\right)^{\!*}\big(S_\mu - r_\mu(q_0)\big) = \frac{1}{L}\sum_{\mu=1}^{L} c_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\big(S_\mu - a_{1,0}c_1(t_\mu-\tau_{1,0})\exp\{i\omega_{1,0}t'_\mu\}\big)$$
$$= \frac{1}{L}\sum_{\mu=1}^{L} c_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\,S_\mu \; -\; a_{1,0} \qquad (4.62)$$

$$\hat{a}_{1,0} = a_{1,0} + \Delta\hat{a}_{1,0} = \frac{1}{L}\sum_{\mu=1}^{L} c_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\,S_\mu \qquad (4.63)$$
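Equation (4.63) is just the familiar prompt correlator. A direct transcription is sketched below; the code replica, the shifted time grid $t'_\mu$, and the assumed Doppler are passed in as precomputed arrays (illustrative interface, not the book's software).

#include <cmath>
#include <complex>
#include <vector>

// Complex amplitude estimate according to (4.63): average of the received
// samples multiplied by the code replica and the conjugate carrier replica
// evaluated at the assumed code phase and Doppler.
std::complex<double> estimateAmplitude(const std::vector<std::complex<double>>& S,
                                       const std::vector<double>& codeReplica,  // c1(t_mu - tau_1,0)
                                       const std::vector<double>& tPrime,       // t'_mu = t_mu - t_mid
                                       double omega)                            // assumed Doppler [rad/s]
{
    std::complex<double> acc(0.0, 0.0);
    for (std::size_t mu = 0; mu < S.size(); ++mu) {
        const std::complex<double> carrierConj(std::cos(omega * tPrime[mu]),
                                               -std::sin(omega * tPrime[mu]));
        acc += codeReplica[mu] * carrierConj * S[mu];
    }
    return acc / static_cast<double>(S.size());   // division by L
}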
$$\Delta\hat\tau_{1,0} = \frac{-1}{L\,\ddot{R}_{c_1,c_1}(0)\,|a_{1,0}|^2}\sum_{\mu=1}^{L}\left(\frac{\partial r_\mu}{\partial\tau_1}\right)^{\!*}\big(S_\mu - r_\mu(q_0)\big)$$
$$= \frac{a_{1,0}^{*}}{L\,\ddot{R}_{c_1,c_1}(0)\,|a_{1,0}|^2}\sum_{\mu=1}^{L} c'_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\big(S_\mu - a_{1,0}c_1(t_\mu-\tau_{1,0})\exp\{i\omega_{1,0}t'_\mu\}\big) \qquad (4.64)$$
$$= \frac{1}{L\,\ddot{R}_{c_1,c_1}(0)\,a_{1,0}}\sum_{\mu=1}^{L} c'_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\,S_\mu \; -\; \frac{R_{c'_1,c_1}(0)}{\ddot{R}_{c_1,c_1}(0)}$$
$$= \frac{1}{L\,\ddot{R}_{c_1,c_1}(0)\,a_{1,0}}\sum_{\mu=1}^{L} c'_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\,S_\mu$$

$$\Delta\hat\omega_{1,0} = \frac{12 f_s^2}{\chi_{\mathrm{freq}}\,|a_{1,0}|^2\,L^3}\sum_{\mu=1}^{L}\left(\frac{\partial r_\mu}{\partial\omega_1}\right)^{\!*}\big(S_\mu - r_\mu(q_0)\big)$$
$$= \frac{-i\,12 f_s^2}{\chi_{\mathrm{freq}}\,a_{1,0}\,L^3}\sum_{\mu=1}^{L} t'_\mu\,c_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\big(S_\mu - a_{1,0}c_1(t_\mu-\tau_{1,0})\exp\{i\omega_{1,0}t'_\mu\}\big) \qquad (4.65)$$
$$= -\frac{i\,12 f_s^2}{\chi_{\mathrm{freq}}\,a_{1,0}\,L^3}\sum_{\mu=1}^{L} t'_\mu\,c_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\,S_\mu + \frac{i\,12 f_s^2}{\chi_{\mathrm{freq}}\,L^3}\sum_{\mu=1}^{L} t'_\mu\,c_1^2(t_\mu-\tau_{1,0})$$
$$= -\frac{i\,12 f_s^2}{\chi_{\mathrm{freq}}\,a_{1,0}\,L^3}\sum_{\mu=1}^{L} t'_\mu\,c_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\,S_\mu$$
The estimated frequency and code phase after the first iteration are given by
4.3.2.2 Iteration
Due to the nonlinear signal model (4.46), it is, in general, necessary to repeat the LSQ procedure until the estimated parameters converge. The iteration can be written for k > 0 as
At this point it should be noted that, after the first iteration, the a priori values
become themselves random variables because they depend on the signal samples.
This will finally result in what is referred to as “squaring loss” and will be discussed
in detail below.
Formally, the parameter iteration is written as
With the complex signal amplitude related to the carrier-to-noise density ratio by

$$|a_1|^2 = \frac{2\,(C/N_0)_1}{f_s} \qquad (4.69)$$

the variances of the estimated corrections evaluate to

$$\mathrm{var}\big\langle\Delta\hat{a}_{1,0}\big\rangle_N = \frac{2}{L} \qquad (4.70)$$

$$\mathrm{var}\big\langle\Delta\hat\tau_{1,0}\big\rangle_N = \frac{2}{-L\,\ddot{R}_{c_1,c_1}(0)\,|a_{1,0}|^2} = \frac{f_s}{-L\,\ddot{R}_{c_1,c_1}(0)\,(C/N_0)_1} = \frac{1}{-T_{\mathrm{coh}}\,\ddot{R}_{c_1,c_1}(0)\,(C/N_0)_1} \qquad (4.71)$$
where the coherent integration time $T_{\mathrm{coh}}$ is the ratio of the number of samples L divided by the sample rate $f_s$. Note that this variance applies to the complex-valued estimate; for its real part we obtain

$$\mathrm{var}\big\langle\mathrm{Re}\{\Delta\hat\tau_{1,0}\}\big\rangle_N = \frac{1}{-2\,T_{\mathrm{coh}}\,\ddot{R}_{c_1,c_1}(0)\,(C/N_0)_1} \qquad (4.72)$$
Similarly, we obtain for the Doppler variance (in (rad/s)$^2$)

$$\mathrm{var}\big\langle\mathrm{Re}\{\Delta\hat\omega_{1,0}\}\big\rangle_N = \frac{6}{T_{\mathrm{coh}}^3\,(C/N_0)_1\,\chi_{\mathrm{freq}}} \qquad (4.73)$$
$$\mathrm{JCRLB}\big(\mathrm{Re}\{\Delta\hat\tau_1\}\big) = \frac{1}{-2\,T_{\mathrm{coh}}\,\ddot{R}_{c_1,c_1}(0)}\int \frac{1}{(C/N_0)_1}\, p\big((C/N_0)_1\big)\, d(C/N_0)_1 \qquad (4.74)$$

where $p((C/N_0)_1)$ is the a priori probability distribution for the signal-to-noise ratio. In contrast to the MCRLB or the ACRLB, the JCRLB depends not only on the mean signal-to-noise ratio, but also on its distribution. The broader the distribution, the larger the difference between the JCRLB and the MCRLB. If $(C/N_0)_1$ is from $N(C, \sigma^2)$, then for small values of $\sigma^2$,

$$\mathrm{JCRLB}\big(\mathrm{Re}\{\Delta\hat\tau_1\}\big) \approx \frac{1}{-2\,T_{\mathrm{coh}}\,\ddot{R}_{c_1,c_1}(0)\,C}\left(1 + \frac{\sigma^2}{C^2}\right) \qquad (4.75)$$
LSQ adjustment is iterated and in each step the design matrix is recalculated using
the signal amplitude estimate from the last step. The influence of code-phase errors
and Doppler errors on the design matrix is not considered, and the initial values t1,0
and w1,0 are retained. Their influence will be discussed later in Section 4.3.2.10. The
iteration starts at any nonzero value of the amplitude.
The simplified iteration procedure converges after the first iteration because the
observation equation for the complex signal amplitude is linear.
Let us examine (4.64), which describes the code-phase improvement. After the
iteration has converged, this equation will take the form
$$\Delta\hat\tau_{1,*} = \frac{1}{L\,\ddot{R}_{c_1,c_1}(0)\,\hat{a}_{1,*}}\sum_{\mu=1}^{L} c'_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\,S_\mu \qquad (4.76)$$
where the subscript “*” denotes the converged values. The initial value for the
complex amplitude a1,0 has been replaced by the converged estimated amplitude
value â1,*.
To calculate the variance of the code-phase estimate obtained with the con-
verged amplitude estimate, we rewrite (4.76) as
$$\Delta\hat\tau_{1,*} = \frac{a_1}{\hat{a}_{1,*}}\cdot\frac{1}{L\,\ddot{R}_{c_1,c_1}(0)\,a_1}\sum_{\mu=1}^{L} c'_1(t_\mu-\tau_{1,0})\exp\{-i\omega_{1,0}t'_\mu\}\,S_\mu = \frac{a_1}{\hat{a}_{1,*}}\,\Delta\hat\tau_{1,0} \qquad (4.77)$$
1 1
The estimated signal amplitude and the code-phase improvement are uncor-
related, as can be seen from (4.60). Because the expected value of the code-phase
improvement vanishes, the following equation holds
$$\big\langle\Delta\hat\tau_{1,*}\big\rangle = \left\langle\frac{a_1}{\hat{a}_{1,*}}\,\Delta\hat\tau_{1,0}\right\rangle = \left\langle\frac{a_1}{\hat{a}_{1,*}}\right\rangle\big\langle\Delta\hat\tau_{1,0}\big\rangle = 0 \qquad (4.78)$$
Note that now and for the rest of the squaring loss discussion, the subscript N
is omitted from the expectation values and all expectation values are understood
with respect to the thermal noise.
For the variance we obtain

$$\left\langle\big|\Delta\hat\tau_{1,*}\big|^2\right\rangle = \left\langle\left|\frac{a_1}{\hat{a}_{1,*}}\right|^2\big|\Delta\hat\tau_{1,0}\big|^2\right\rangle = \left\langle\left|\frac{a_1}{\hat{a}_{1,*}}\right|^2\right\rangle\left\langle\big|\Delta\hat\tau_{1,0}\big|^2\right\rangle \qquad (4.79)$$
The first term of the product in the above equation is the squaring loss. If
the estimated value of the signal amplitude is precise (i.e., â1,*= a1), the squaring
loss will be 1 (= 0 dB). This stands in analogy to the JCRLB of Section 4.3.2.5.
For the general case (i.e., $\hat{a}_{1,*} \neq a_1$), we perform a Taylor series expansion as

$$\left\langle\left|\frac{a_1}{\hat{a}_{1,*}}\right|^2\right\rangle = \left\langle\left|\frac{a_1}{(\hat{a}_{1,*}-a_1)+a_1}\right|^2\right\rangle = \left\langle\frac{1}{\left|\frac{\hat{a}_{1,*}-a_1}{a_1}+1\right|^2}\right\rangle \approx \left\langle\left|1-\frac{\hat{a}_{1,*}-a_1}{a_1}\right|^2\right\rangle$$
$$= \left\langle 1 - 2\,\mathrm{Re}\left\{\frac{\hat{a}_{1,*}-a_1}{a_1}\right\} + \left|\frac{\hat{a}_{1,*}-a_1}{a_1}\right|^2\right\rangle = 1 + \frac{\mathrm{var}\big\langle\hat{a}_{1,*}\big\rangle}{|a_1|^2} \qquad (4.80)$$

$$\left\langle\left|\frac{a_1}{\hat{a}_{1,*}}\right|^2\right\rangle = 1 + \frac{f_s}{L\,(C/N_0)_1} = 1 + \frac{1}{T_{\mathrm{coh}}\,(C/N_0)_1} \qquad (4.81)$$
The squaring loss decreases with increasing C/N0 and with an increasing coher-
ent integration time Tcoh.
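As a numerical illustration (own example, not from the text): for a coherent integration time of $T_{\mathrm{coh}} = 1$ ms and $(C/N_0)_1 = 30$ dBHz $= 1000\,\mathrm{s^{-1}}$,

$$1 + \frac{1}{T_{\mathrm{coh}}\,(C/N_0)_1} = 1 + \frac{1}{0.001\cdot 1000} = 2 \;\;(\approx 3\ \mathrm{dB}),$$

whereas extending the coherent integration to $T_{\mathrm{coh}} = 20$ ms reduces the squaring loss to $1 + 1/20 = 1.05$, or about $0.2$ dB.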
Finally, the variance of the LSQ code-phase estimate (i.e., of the real part of the complex-valued random variable) evaluates to

$$\left\langle\mathrm{Re}\{\Delta\hat\tau_{1,*}\}^2\right\rangle = \frac{1}{2}\left\langle\big|\Delta\hat\tau_{1,0}\big|^2\right\rangle\left\langle\left|\frac{a_1}{\hat{a}_{1,*}}\right|^2\right\rangle = \frac{1}{-2\,T_{\mathrm{coh}}\,\ddot{R}_{c_1,c_1}(0)\,(C/N_0)_1}\left(1+\frac{1}{T_{\mathrm{coh}}\,(C/N_0)_1}\right) \qquad (4.82)$$

The first part of this equation is equal to the code-phase CRLB (without nuisance parameters); the second is the squaring loss. Similar equations are derived elsewhere [16, 17].
$$\left\langle\big|\Delta\hat\omega_{1,*}\big|^2\right\rangle = \left\langle\left|\frac{a_1}{\hat{a}_{1,*}}\right|^2\right\rangle\left\langle\big|\Delta\hat\omega_{1,0}\big|^2\right\rangle \qquad (4.83)$$

$$\left\langle\mathrm{Re}\{\Delta\hat\omega_{1,*}\}^2\right\rangle = \frac{12 f_s^2}{L^3\,|a_{1,0}|^2\,\chi_{\mathrm{freq}}}\left(1+\frac{1}{T_{\mathrm{coh}}\,(C/N_0)_1}\right) = \frac{6 f_s^3}{L^3\,(C/N_0)_1\,\chi_{\mathrm{freq}}}\left(1+\frac{1}{T_{\mathrm{coh}}\,(C/N_0)_1}\right) = \frac{6}{T_{\mathrm{coh}}^3\,(C/N_0)_1\,\chi_{\mathrm{freq}}}\left(1+\frac{1}{T_{\mathrm{coh}}\,(C/N_0)_1}\right) \qquad (4.84)$$
Again, the Doppler variance is given by the product of the CRLB and the squaring loss. The factor $\chi_{\mathrm{freq}}$ measures the nonuniformity of the signal-power
distribution in time and is defined in Section 1.8.3. The CRLB obtained here cor-
responds to equation (16) of the article by Rife and Boorstyn under the assumption
that the phase is unknown [18].
$$\hat\varphi_{1,*} = \tan^{-1}\left(\frac{\hat{a}_{1,*,\mathrm{im}}}{\hat{a}_{1,*,\mathrm{re}}}\right) = \tan^{-1}\left(\frac{\varepsilon\,\hat{a}_{1,*,\mathrm{im}}}{\varepsilon(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})+a_{1,\mathrm{re}}}\right) \approx \varepsilon\,\frac{\hat{a}_{1,*,\mathrm{im}}}{a_{1,\mathrm{re}}} - \varepsilon^2\,\frac{\hat{a}_{1,*,\mathrm{im}}\,(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})}{a_{1,\mathrm{re}}^2} \qquad (4.85)$$
The parameter ε is a pure auxiliary parameter for the Taylor series expansion. It tags the signal-amplitude estimation errors and is used to collect terms that are proportional to the estimation errors, terms that are proportional to the squared estimation errors, and so on. After performing the series expansion, the parameter ε is ignored (set to 1) and we obtain
Real and imaginary signal amplitudes are unbiased and uncorrelated random
variables, thus
$$\big\langle\hat\varphi_{1,*}\big\rangle \approx 0 \qquad (4.87)$$
$$\left\langle\hat\varphi_{1,*}^2\right\rangle = \frac{\big\langle\hat{a}_{1,*,\mathrm{im}}^2\big\rangle}{a_{1,\mathrm{re}}^2} - \frac{2\big\langle\hat{a}_{1,*,\mathrm{im}}^2(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})\big\rangle}{a_{1,\mathrm{re}}^3} + \frac{\big\langle\hat{a}_{1,*,\mathrm{im}}^2(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2\big\rangle}{a_{1,\mathrm{re}}^4}$$
$$= \frac{\mathrm{var}\big\langle\hat{a}_{1,*}\big\rangle}{2\,|a_1|^2} + \frac{\mathrm{var}\big\langle\hat{a}_{1,*}\big\rangle^2}{4\,|a_1|^4} = \frac{\mathrm{var}\big\langle\hat{a}_{1,*}\big\rangle}{2\,|a_1|^2}\left(1+\frac{\mathrm{var}\big\langle\hat{a}_{1,*}\big\rangle}{2\,|a_1|^2}\right) \qquad (4.88)$$
$$= \frac{f_s}{2L\,(C/N_0)_1}\left(1+\frac{f_s}{2L\,(C/N_0)_1}\right) = \frac{1}{2\,T_{\mathrm{coh}}\,(C/N_0)_1}\left(1+\frac{1}{2\,T_{\mathrm{coh}}\,(C/N_0)_1}\right)$$
Note that the following identity holds for a complex-valued random variable

$$\big\langle\hat{a}_{1,*,\mathrm{im}}^2\big\rangle = \big\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2\big\rangle = \frac{\mathrm{var}\big\langle\hat{a}_{1,*}\big\rangle}{2} \qquad (4.89)$$
whose expected imaginary part vanishes. Furthermore, all random variables in-
volved in this expression are Gaussian due to the large number of involved samples.
Uncorrelated Gaussian random variables are stochastically independent.
Again, a term similar to the squaring loss enters because, in general, $\hat{a}_{1,*,\mathrm{re}} \neq a_{1,\mathrm{re}}$. Otherwise, the second-order term in (4.85) would vanish.
Equation (4.88) gives the variance of a carrier-phase estimate, being the prod-
uct of the carrier-phase CRLB and the squaring loss. This equation also shows that
the carrier-phase variance expressed in square radians is independent of the carrier
frequency or the waveform c(t). An identical expression to (4.88) was derived in
[16] using a different methodology and using a Taylor series expansion up to the
third order.
$$\hat{b}_{1,*}^2 = \big|\hat{a}_{1,*}\big|^2 = \hat{a}_{1,*,\mathrm{im}}^2 + \big((\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}}) + a_{1,\mathrm{re}}\big)^2 = \hat{a}_{1,*,\mathrm{im}}^2 + 2a_{1,\mathrm{re}}(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}}) + a_{1,\mathrm{re}}^2 + (\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2 \qquad (4.90)$$
We assume (without loss of generality) that the imaginary true value a1,im of â1,*
vanishes. The squared signal-magnitude estimate evaluates to
$$\big\langle\hat{b}_{1,*}^2\big\rangle = |a_1|^2 + \mathrm{var}\big\langle\hat{a}_{1,*}\big\rangle = |a_1|^2\left(1+\frac{\mathrm{var}\big\langle\hat{a}_{1,*}\big\rangle}{|a_1|^2}\right) = |a_1|^2\left(1+\frac{1}{T_{\mathrm{coh}}\,(C/N_0)_1}\right) \qquad (4.91)$$
$$\big\langle\hat{b}_{1,*}^2\big\rangle = \frac{2\,(C/N_0)_1}{f_s}\left(1+\frac{1}{T_{\mathrm{coh}}\,(C/N_0)_1}\right) = \frac{2\,(C/N_0)_1}{f_s} + \frac{2}{f_s\,T_{\mathrm{coh}}}, \qquad \frac{f_s}{2}\left(\big\langle\hat{b}_{1,*}^2\big\rangle - \frac{2}{f_s\,T_{\mathrm{coh}}}\right) = (C/N_0)_1 \qquad (4.92)$$

$$\big(\widetilde{C/N_0}\big)_1 = \frac{f_s}{2}\left(\big|\hat{a}_{1,*}\big|^2 - \frac{2}{f_s\,T_{\mathrm{coh}}}\right) \qquad (4.93)$$

$$\mathrm{var}\big\langle\big(\widetilde{C/N_0}\big)_1\big\rangle = \frac{f_s^2}{4}\,\mathrm{var}\big\langle\big|\hat{a}_{1,*}\big|^2\big\rangle = \frac{f_s^2}{4}\,\mathrm{var}\big\langle\hat{b}_{1,*}^2\big\rangle \qquad (4.94)$$
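A direct transcription of the estimator (4.93) is sketched below; the converged complex amplitude is assumed to come from a prompt correlator such as (4.63), and the function name is illustrative.

#include <cmath>
#include <complex>

// Carrier-to-noise density estimate according to (4.93): the squared
// magnitude of the converged amplitude estimate, corrected for the noise
// contribution 2/(fs*Tcoh) and scaled by fs/2. The result is in 1/s (Hz).
double estimateCN0(const std::complex<double>& aHat, double fs, double Tcoh)
{
    const double bSquared = std::norm(aHat);            // |a_hat|^2
    return 0.5 * fs * (bSquared - 2.0 / (fs * Tcoh));
}

// Usage: double cn0_dBHz = 10.0 * std::log10(estimateCN0(aHat, fs, Tcoh));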
The variance of b̂1,*2 shall be evaluated in the following by expanding the com-
plex-valued random variable into its real- and imaginary-valued components
$$\mathrm{var}\big\langle\hat{b}_{1,*}^2\big\rangle = \left\langle\big(\hat{a}_{1,*,\mathrm{re}}^2+\hat{a}_{1,*,\mathrm{im}}^2\big)^2\right\rangle-\left\langle\hat{a}_{1,*,\mathrm{re}}^2+\hat{a}_{1,*,\mathrm{im}}^2\right\rangle^2$$
$$=\left\langle\big((\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2+2a_{1,\mathrm{re}}(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})+\hat{a}_{1,*,\mathrm{im}}^2\big)^2\right\rangle-\left\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2+\hat{a}_{1,*,\mathrm{im}}^2\right\rangle^2 \qquad (4.95)$$
Because for any zero-mean Gaussian random variable any moment of odd order vanishes, and because the real and imaginary parts of the complex-amplitude estimate $\hat{a}_{1,*}$ are uncorrelated, we can rewrite the equation as
$$\mathrm{var}\big\langle\hat{b}_{1,*}^2\big\rangle = \left\langle\big((\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2+2a_{1,\mathrm{re}}(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})+\hat{a}_{1,*,\mathrm{im}}^2-\big\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2\big\rangle-\big\langle\hat{a}_{1,*,\mathrm{im}}^2\big\rangle\big)^2\right\rangle$$
$$=\left\langle\big((\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2-\big\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2\big\rangle+2a_{1,\mathrm{re}}(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})+\hat{a}_{1,*,\mathrm{im}}^2-\big\langle\hat{a}_{1,*,\mathrm{im}}^2\big\rangle\big)^2\right\rangle \qquad (4.96)$$
$$=\big\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^4\big\rangle-\big\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2\big\rangle^2+4a_{1,\mathrm{re}}^2\big\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2\big\rangle+\big\langle\hat{a}_{1,*,\mathrm{im}}^4\big\rangle-\big\langle\hat{a}_{1,*,\mathrm{im}}^2\big\rangle^2$$
Using the relation

$$\langle x^n\rangle = \frac{2^{\frac{n-2}{2}}\,\big(1+(-1)^n\big)\,\langle x^2\rangle^{n/2}\,\Gamma\!\left(\frac{1+n}{2}\right)}{\sqrt{\pi}} \qquad (4.98)$$

which holds for any real-valued zero-mean Gaussian random variable, yields
$$\mathrm{var}\big\langle\hat{b}_{1,*}^2\big\rangle = 2\big\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2\big\rangle^2 + 4a_{1,\mathrm{re}}^2\big\langle(\hat{a}_{1,*,\mathrm{re}}-a_{1,\mathrm{re}})^2\big\rangle + 2\big\langle\hat{a}_{1,*,\mathrm{im}}^2\big\rangle^2 \qquad (4.99)$$

$$\mathrm{var}\big\langle\hat{b}_{1,*}^2\big\rangle = \frac{2}{L^2} + 4a_{1,\mathrm{re}}^2\,\frac{1}{L} + \frac{2}{L^2} = \frac{4}{L^2} + \frac{8\,(C/N_0)_1}{L\,f_s} \qquad (4.100)$$
f 2 4 8(C / N0 )1
var (�
C / N0 )1 = s + ÷
4 L2 Lfs
(4.101)
2
fs 2L(C / N0 )1 1
= 1+ ÷= (1 + 2Tcoh (C / N0 )1)
L2 fs Tcoh2
88 Signal Estimation
(�
C / N0 )1 2 1
var = 1+ (4.102)
(C / N0 )1 (C / N0 )1Tcoh 2Tcoh (C / N0 )1 ÷
is derived from (4.101). It shows the same structure as for the code-phase, Doppler,
and carrier-phase estimate; the term within the parenthesis could be interpreted as
a squaring loss.
and achieves the CRLB. However, this estimator cannot be used in practice and the
normal matrix and the signal model is set up with different parameters that shall be
denoted now as q0. The assumed parameter values differ from the true values by
δ q0 = q0 - q (4.104)
1 -1 *
qˆ 0 = q0 + Dqˆ 0 = q0 + I Aq0 (S - r(q0 )) (4.105)
2
4.3 LSQ Correlators/Discriminators 89
Aq*r(q) = 0 (4.106)
1 -1
0 = qˆ 0 - qˆ = q0 + Dqˆ 0 - q - Dqˆ = δ q0 + I (Aq0 *S - Aq*S) (4.107)
2
1 -1 *
δ q0 = I (Aq (r(q) + N) - Aq0 * (r(q) + N))
2
(4.108)
1
= I -1(Aq* N - Aq0 * (r(q) + N))
2
Taking the expected value with respect to the noise, we obtain the linearity
condition
1
δ q0 = - I -1Aq0 *r(q) (4.109)
2
Now we will prove that (4.111) is also fulfilled if (4.110) is fulfilled. We show
that the variance of (4.111) vanishes because the derivative of (4.110) with respect
to q yields
Therefore,
Altogether, this shows that, as long as the linearity condition (4.110) is fulfilled,
code-phase and Doppler errors in the setup of the design matrix do not degrade
the estimation results and (4.107) is fulfilled. The discussion can also be seen as an
interpretation of (4.17) in the context of LSQ adjustment.
An explicit expression for the linearity condition is derived by an analysis of
the product of the design matrix with the deterministic signal model. Because the
complex amplitude is assumed to be known, the product reads as
L
rµ (q)
q = q0 rµ (q)÷
µ =1
τ1 ÷
Aq0 * r(q) = ÷
L
rµ (q) ÷
q = q0 rµ (q)
µ =1
ω1 ÷
L
2
- a1 c1 (t µ - τ 1,0 )c1(t µ - τ 1)exp{itµ (ω1 - ω1,0 )} ÷
µ =1 ÷ (4.114)
= ÷
L
t µ c1(t µ - τ 1,0 )c1(tµ - τ1)exp{it µ (ω1 - ω1,0 )}÷
2
-i a1
µ =1
÷
2
a1 LRc1 ,c1 (τ 1,0 - τ 1)κ (ω1 - ω1,0 )
÷
= 2
L
÷
-i a1 t µ c1(t µ - τ 1,0 )c1(t µ - τ 1)exp{it µ (ω1 - ω1,0 )}÷
µ =1
and we assume that the normal matrix I is set up using the true complex signal-
amplitude value. Setting both equations equal yields two equations: the code-phase
linearity condition and the Doppler linearity condition.
The code-phase linearity condition is a linearity requirement for the first deri-
vate of the correlation function, written as
Rc1 ,c1 (0)(τ 1,0 - τ 1) = Rc1 ,c1 (τ 1,0 - τ1)κ (ω 1 - ω 1,0 ) (4.116)
L
χ freq L3 (12 fs2 )(ω 1,0 - ω1) = i t µ c1(t µ - τ1,0 )c1(t µ - τ1)exp{it µ (ω 1 - ω 1,0 )} (4.117)
µ =1
The mathematical model for the “Single LSQ step” is based on the consider-
ations of Section 4.3.2, which can be summarized for a single estimation step as
where eDq̂ is the single-step high-rate pseudorange estimation error. The high-rate
pseudorange estimates are used to define the correlation point for the next interval
q0 = qˆ (4.121)
The estimation is performed exactly once per integration interval. Applying the
z-transform to analyze the temporal evolution of the tracking loop over multiple
intervals yields
zq0 (z) = H NCO (z)H Loop (z)Dqˆ (z) + q0 (z) (4.122)
H Smooth (z)H NCO (z)HLoop (z)q(z) H Smooth (z)H NCO (z)HLoop (z)ε Dqˆ (z)
pˆ(z) = + (4.124)
(z - 1 + H NCO (z)HLoop (z)) (z - 1 + H NCO (z)HLoop (z))
For many applications, the loop filter can be chosen such that no smoothing
filter is required [e.g., HSmooth(z) = 1] because the requirement of obtaining smooth
estimates is often much more stringent than the linearity condition (4.116) and
(4.117). In other words, the accuracy of the measurements is much higher than the
extension of the linearity region. An important exception is the case of occasional
high line-of-sight dynamics. In that case, it is beneficial to design the loop filter with
a wide bandwidth and the smoothing filter with a narrow bandwidth. This avoids
loss-of-lock events during periods of high dynamics while maintaining accurate
estimates during normal tracking. High line-of-sight dynamics may occur during
acquisition-to-tracking handover, short blockage of the signals, or during specific
user events. A smoothing filter is used in a patent by Thomas to improve the accu-
racy of the low-rate pseudorange estimates in case the low-rate pseudorange output
rate is much slower (e.g., 30 seconds) than the inverse of the involved tracking-loop
bandwidths (e.g., 1 Hz) [21].
The NCO-model HNCO(z) is introduced in Figure 4.2 because, in many practi-
cal implementations, the correlation-point update capabilities are limited. For ex-
ample, in some GNSS receivers, only the rate of change of q0 can be controlled,
but not q0 directly. Typically HNCO(z) equals the coherent integration time Tcoh,
indicating that the rate of change of q0 muliplied with Tcoh is added to the previous
value of q0 [16].
94 Signal Estimation
A number of different possibilities exist to realize the loop filter. For the ex-
ample of a GNSS receiver, Jaffe–Rechtin filters are often used [22] and are described
in many textbooks [15, 20, 23, 24]. Jaffe–Rechtin filters optimize the sum of ther-
mal noise plus transient errors and they require only the rate of change of q0 to be
controlled.
Kaplan describes tracking-loop stability conditions that are formulated based
on similar linearity requirements such as (4.116) and (4.117) [20]. Those conditions
transfer the linearity requirement into allowable signal parameters (e.g., maximal
dynamics, signal-to-noise ratio) that must be met to keep the correlation point near
the true values. Because of the feedback, a tracking loop may become unstable for
an unfortunate choice of the loop filter. A detailed investigation on this topic has
been done by Eissfeller and Kazemi [25, 26]. A more recent approach is to realize
the loop filter via a Kalman filter. The effective transfer function stands in very
close relationship with a Jaffe–Rechtin filter, but the Kalman filter is optimal under
certain assumptions. The Kalman filter approach requires direct control of q0, not
only of the rate of change of q0 [27].
The variance of a scalar low-rate pseudorange parameter p is given as [16]
2π 2
1 H Smooth (eiϑ )H NCO (eiϑ )HLoop (eiϑ )
var pˆ N = var ε Dqˆ N dϑ
2π ϑ = 0 (eiϑ - 1 + H NCO (eiϑ )HLoop (eiϑ )) (4.125)
2π 2
1 H Smooth (eiϑ )H NCO (eiϑ )H Loop (eiϑ )
BL = dϑ (4.126)
4π Tcoh ϑ =0 (eiϑ - 1 + H NCO (eiϑ )H Loop (eiϑ ))
Note that [16] additionally considered rate-of-change variation within the cor-
relation process.
for a given a priori range L of admissible values for q. The algorithm may consist of
two main steps, which are executed for each integration interval:
am
Dτ m = am Dτ m (4.130)
Dωm = am Dωm
then
L L
k,l rµ rµ
2I1,1 = = ck (t µ - τ k,0 )cl (t µ - τ l ,0 )exp{it µ (ω l ,0 - ωk,0 )}
µ =1
ak al µ =1
L L
k,l rµ r µ
2I1,3 = = ial ,0 t µ ck (t µ - τ k,0 )cl (t µ - τ l ,0 )exp{it µ (ω l ,0 - ωk,0 )}
µ =1
ak ω l µ =1
and
L
k,l rµ rµ
2I2,2 = =
µ =1
τ k τl
L
= ak,0 al ,0 ck (t µ - τ k,0 )cl (t µ - τ l ,0 )exp{it µ (ω l ,0 - ω k,0 )}
µ =1
L
= iak,0 al ,0 t µ ck (t µ - τ k,0 )cl (t µ - τ l ,0 )exp{it µ (ω l ,0 - ω k,0 )} (4.133)
µ =1
L
= ak,0 al ,0 (t µ )2 ck (t µ - τ k,0 )cl (t µ - τ l ,0 )exp{it µ (ωl ,0 - ωk,0 )}
µ =1
In this case, the nondiagonal submatrices Ik,l of the Fisher information matrix
evaluate to
2I k,l = L
Rck ,cl (Dτ )κ (Dω) -al ,0Rck ,cl (Dτ )κ (Dω) γ a,ω
÷
ak,0Rck ,cl (Dτ )κ (Dω) -ak,0 al ,0Rck ,cl (Dτ )κ (Dω) γ ω ,τ ÷
÷
γ a,ω γω,τ -ak,0 al ,0Rck ,cl (Dτ )κ (Dω) ÷÷
(4.134)
with
Dω = ω l ,0 - ω k,0
(4.135)
Dτ = τ k,0 - τ l ,0
The matrix Ik,l can be simplified if Doppler estimates decouple from code phase
and amplitude estimates. This is the case if we assume
which implies that the matrix Ik,l is block diagonal. The Doppler block is de-
coupled from the code-phase and amplitude block. As a consequence, the Dop-
pler parameter estimates are independent of the code-phase and amplitude
estimates, because the submatrices (4.60) of the Fisher information matrix are
diagonal.
The complete Fisher information matrix can be written in block diagonal
form as
I a,τ 0
I= ÷ (4.137)
0 Iω
The order of estimated parameters is ak, tk, al, tl, wk, wl if two propagation
paths are considered.
For many applications, (4.136) is a reasonable assumption because the Dop-
pler differences between the direct and the reflected signals are much smaller than
the inverse of the coherent integration time [this implies k′(Dw) = 0]. For example,
Doppler differences of signals received by a static GNSS antenna are caused by the
satellite motion and are of the order of (5–10 min)–1. For a moving receiver, the
Doppler difference is bounded by 2 fRF vUSER / c, which evaluates to 10.5 Hz on
GPS L1 for a pedestrian moving at 1 m/s. On the other hand, typical integration
times range from 1 to 20 ms.
(4.138)
4.3 LSQ Correlators/Discriminators 101
ϑ = κ (ω l ,0 - ω k,0 ) (4.139)
which assumes a value of 1 if both signals have the identical Doppler frequency.
The determinant D of Ia,τ divided by the squared magnitude of both signals is
obtained via a symbolic mathematics program as
2
2ϑ 2 Rc ,c (D τ) Rc ,c (0) ÷
÷
÷
2 2
÷ (4.140)
D = +ϑ 4 Rc ,c (D τ) Rc ,c (Dτ ) + 2Rc ,c (Dτ )Rc ,c (Dτ )÷ ÷
÷
2
÷
+θ Rc ,c (0)2 - ϑ 2 Rc ,c (D τ) ÷ ÷
÷
and the submatrix of the inverse Fisher information matrix Ia,τ corresponding to the
parameters (ak, τk) of the signal k is given as
2
(I a,τ -1)k =
LD
( )
2 ÷
ϑ 2 Rc ,c (Dτ ) Rc ,c (Dτ ) - Rc ,c (0)Rc ,c (Dτ ) 2
Rc ,c (0)η + ϑ Rc ,c (Dτ ) ÷
- ÷
ak,0 2 ÷
ak,0
(4.141)
where the constant
2
η = 1 - ϑ 2 Rc ,c (Dτ ) (4.142)
has been introduced to simplify the notation. The submatrix corresponding to the
signal parameters l is identical (apart from exchanging k and l). Couplings between
parameters for the signal path k and the path l shall not be considered here.
The CRLB (without nuisance parameters) for the code-phase estimate tk of the
first signal k is being determined by the second diagonal term of (Ia,τ –1)k. The CRLB
reads as
2
2
1 Rc ,c (0)η + ϑ Rc ,c (Dτ )
CRLB(Re{τ k }) = - 2
L ak,0 D
(4.143)
2
Rc ,c (0)η + ϑ 2 Rc ,c (Dτ)
=-
2(C / N0 )k Tcoh D
102 Signal Estimation
It should be noted that the CRLB for the signal k is independent of the signal
strength or phase of the signal l. However, it depends on the code phase difference.
For a further understanding of this statement, the reader is invited to compare the
multipath mitigating and the multipath estimating discriminators in Chapter 8.
Only the latter one has a multipath power independent performance.
r Dτ 2
Rc ,c (Dτ ) = 1 + Dτ � 1 (4.144)
2
Then, (4.143) reads as
1 ϑ 2 (4 + r 2 Dτ 4 ) - 4
CRLB(Re{τ k }) (4.145)
2
L ak,0 r(ϑ 2 - 1)(ϑ 2 (rϑ 2 - 2)2 - 4)
For identical Doppler (J = 1) this expression is singular for all code-phase dif-
ferences Dt as long as (4.144) holds. Thus, no unbiased estimator exists that jointly
estimates all parameters of both signals if (4.144) holds and if both signals have the
identical Doppler frequency. It should be mentioned that this does not exclude the
existence of any other (not jointly estimating) unbiased delay estimator that directly
estimates tk and treats the l-parameters as nuisance parameters.
2 2
Rc ,c (0)(1 - Rc ,c (Dτ ) ) + Rc ,c (Dτ )
4 2 2 2
Rc ,c (Dτ ) + 2 Rc ,c (Dτ ) (Rc ,c (0) - Rc ,c (Dτ )Rc ,c (Dτ )) + 1 - Rc ,c (D τ) )(Rc ,c (0)2 - Rc ,c (Dτ ) ÷ ÷
(4.146)
This equation was also derived in publication by Ávila Rodríguez and it was
analyzed for different GNSS signals [37].
For further analysis, a parametric model for any correlation function in the
vicinity of its origin shall be defined by
r Dτ 2 r D τ4
1+ - 2
… Dτ < m
2 12 m
Rc ,c (Dτ ) = (4.147)
m2r 2rm Dτ 3m 3
1- + … m Dτ -
4 3 8 2mr
4.3 LSQ Correlators/Discriminators 103
The correlation function is linear for |Dτ | being larger than the parameter
m. The peak region is modeled by an even fourth-order polynomial. The pa-
rameter r represents the second derivative of the autocorrelation function at the
origin.
For |Dτ | < m, the code-phase CRLB of the signal k is given for the correlation
function model (4.147) as
1 9m 4
CRLB(Re{τ k }) - (4.148)
L ak,0 2 r Dτ 4
The CRLB is nonsingular due to the fourth-order term, but strongly diverges as
Dt approaches zero.
1 Rc ,c (0)
CRLB(Re{τ k }) = - 2
(4.149)
L ak,0 (Rc ,c (0)2 - Rc ,c (Dτ )2 )
m4
… Dτ < m
CRLB(Re{τ k }) = -
1 r Dτ (2m2 - D τ 2 )
2
(4.150)
Lak,02 1 3m 3
… m Dτ -
r 8 2mr
We see that, if both signals are separated by a delay large enough such that
the correlation function becomes linear, then the delay estimates achieve the same
CRLB as in the case of a single signal. If the complex amplitude is known, the
singularity is less severe (second order instead of fourth order) than for joint es-
timation of the code-phase and complex amplitude. This suggests that the cou-
pling between the code-phase and the complex-amplitude estimates affect (4.146)
significantly.
104 Signal Estimation
The Doppler CRLB for the signal k for joint estimation of both Doppler pa-
rameters reads as
24fs 2L 1
CRLB(ωk ) = (4.152)
χ freq ak,0
2
( 2 4 4
χ freq L - 144fs κ (D ω) Rc ,c (Dτ )
2 2
)
where the Doppler is still considered as a complex random variable. Constraining
it to be real-valued gives
12 fs 2L 1
CRLB(Re{ωk }) =
χ freq ak
2
(χ freq
2 4 4 2
L - 144fs κ (D ω) Rc ,c (D τ )
2
)
12L 1
=
χ freq ak fs
2 2
( 42
χ freq Tcoh
2
- 144 κ (Dω) Rc ,c (Dτ )
2
) (4.153)
6Tcoh 1
=
(
χ freq (C / N0 )k χ 2T 4 - 144 κ (Dω) 2 R , (Dτ ) 2
freq coh cc )
For identical Doppler values (i.e., J = 1) and identical code-phase values, this
2
expression diverges since κ (Dω) - χ freqTcoh 12.
Dω = 0
L
I�a,τ = I a, τ ϕk,ϕl
=
2
1 0 ϑ Rc ,c (Dτ ) 0
2 ÷
0 -Rc ,c (0) ak,0 0 0 ÷ (4.154)
÷
ϑ Rc ,c (Dτ ) 0 1 0 ÷
2÷
0 0 0 -Rc ,c (0) al ,0 ÷
Both use the a priori probability density function p(Dt) of the multipath delay.
The probability density function often has the form of
1 Dτ
p(Dτ ) ∼ exp - (4.157)
d d
for delays Dτ > 0. Negative delays do not occur.
This delay distribution is, however, insufficient to compensate for the singular-
ity of (4.148) at Dt = 0. Therefore, the JCRLB diverges. By contrast, the ACRLB
will assume a high but finite value.
106 Signal Estimation
The LSQ estimator discussed in Section 4.3 is based on correlating the received
signal samples with derivatives of the signal model. Provided that the correlation
point is near the true values, only a few values (the correlation values) are sufficient
to determine the parameter estimates. The correlation values represent compressed
information and act as data reduction from the large amount of samples to a few
values.
This particular data-reduction method generally applies to many estimation
techniques (MLE and Bayesian techniques) and is based on the definition of a suffi-
cient statistic. The goal of a sufficient statistics is to obtain identical estimates either
from working directly with the signal samples or from working with the sufficient
statistic values.
In the following section, this method will be discussed on the example of high-
rate pseudorange estimation from a given set of samples s, described by the prob-
ability density function pq(s). In contrast to the preceding sections, the case of
nonwhite noise is considered.
1 1 L 2
pq (s) = L
exp - sµ - rµ(q) (4.159)
(2π ) 2 µ =1
Nµ N υ = 0, N µ N υ = 0, N µ N υ = 2Qµ, υ (4.161)
N N N
1 1 1
- (s* - r(q)* )Q-1(s - r(q)) = - s*Q-1s - r(q)* Q-1r(q) + Re{r(q)* Q-1s} (4.162)
2 2 2
Only the last term of the above expression is relevant for the factorization theo-
rem, as the first term is independent of the parameters q (and thus goes into h) and
the second term (being independent of s) can be trivially part of any function of gq.
In the following section, two approximations are discussed that are commonly used
to express the last term with the help of properly defined statistics.
where
Tb (s) = r(qb )* Q-1s (4.164)
is the multidimensional test statistics evaluated via correlation of the signal model
with the signal samples at selected control points qb (plus accounting for the noise
covariance). The functions αb(q) are the interpolation coefficients.
The signal model (4.46) is linear in the complex amplitude. Therefore, the in-
terpolation in the complex-amplitude dimension is trivial and the grid spanning all
admissible values is effectively two-dimensional. It extends in the code-phase and
Doppler dimension.
For the sake of simplicity, this expansion shall be carried out only in the first order,
but the generalization to higher-order terms is obvious.
The last term of (4.162) is approximated as
r(q)*
r(q)* Q-1s r(q0 )* Q-1s + (q - q0 )* q = q0 Q-1s
q
(4.165)
*
r(q)
= r(q0 )* Q-1s - q0* q = q0 Q-1s + q*T (s)
q
r(q)*
T (s) = q =q0 Q-1s (4.166)
q
The first two terms of (4.165) are independent of q and thus go into h.
The test statistics of the first-derivative approach relies on the correlation of the
received samples with the first derivates of the signal model (and accounts for the
noise covariance). The derivative of the signal model (4.46) with respect to the com-
plex amplitude yields a correlation with the signal model itself, which will define the
P-correlator. The derivatives with respect to the code phase and with respect to the
Doppler define the D- and the F-correlators. All three correlators will be discussed
in Section 7.3.
In the Bayesian approach, the estimated parameters (for example, the high-rate
pseudorange parameters q) are assumed to be random quantities related statisti-
4.5 Bayesian Approach 109
cally to the observations. The parameters are endowed with an a priori probability
density function w(q), which is defined, even with the lack of actual observations.
By contrast, the nonrandom-parameter estimation approach introduced in Section
4.2 does not require any a priori distribution of the parameters.
The second important difference between nonrandom-parameter estimation
and the Bayesian approach is the use of a cost function in the latter case. The cost
function measures the cost of estimating the true value q as q̂; the more incorrect
the estimate, the higher the cost. A Bayesian estimate minimizes the cost function,
averaged over all admissible values for q. Bayesian estimates are not necessarily
unbiased. ML estimation can be seen as a special case of Bayesian estimation for
the case of a uniform distribution w(q) and a cost function that assumes 0 for qˆ = q
and otherwise one (see chapter IV.D in the book by Poor [1]).
The Bayesian approach is generally less popular than the nonrandom param-
eter-estimation approach for the design of navigation signal-processing algorithms,
probably because of two facts. First, the a priori probability density function w(q)
is often hard to determine. Second, the coupling between w(q) and the estimated
parameters complicates the determination of the statistical distribution of the esti-
mated parameters. An important exception to this is the case of linear observations
and a linear dynamical model with Gaussian noise. The resulting Bayesian estima-
tor—the Kalman–Bucy filter—yields explicit expressions for the estimates and their
probability density functions.
In the following, minimum mean-squared error estimation will be introduced;
the estimator will be expressed as a function of the sufficient statistics of Section
4.4, regardless of the form of the underlying probability density functions or the
relationship between parameters and observations. The necessary assumptions
to apply the Kalman–Bucy filter theory, as well as a block diagram for a Kal-
man filter based on the sufficient statistics of Section 4.4, will be discussed in
Section 6.5.2.
2 2
R(qˆ , s) = qˆ - q qs= qˆ - q p(q s)dq (4.169)
q L
and evaluates according to Case IV.B.4 in the book by Poor to the conditional ex-
pected value of the parameters itself [1]; that is,
110 Signal Estimation
qpq (s)w(q)
qˆ(s) = qp(q s)dq = dq (4.171)
q L q L
pq (s)w(q)dq
q L
In the last step, the conditional probability p(q | s) is expressed using the sample
probability density function pq(s) and the a priori probability density function w(q)
using the Bayes theorem.
From Section 4.4, it is clear that for fixed observed samples s, the probability
density function pq(s) can be obtained via a sufficient statistics T(s). Therefore, af-
ter the data reduction from the signal samples to the statistics has been performed,
the evaluation of (4.171) requires averaging over all parameter values without,
however, performing further correlations. The MMSE parameter estimates can, for
example, be obtained from the P-, D-, and F-correlator values introduced in Chap-
ter 7, and the integral
qgq (T (s))w(q)
qˆ(s) = dq (4.172)
q L
gq (T (s))w(q)dq
q L
1. Gaussian noise;
2. Gaussian a priori parameter distribution;
3. Linearity conditions fulfilled or linearized signal model.
has to cope with. In the case of GNSS signals, a linearized signal model of Section
4.3.4 is more difficult to obtain than fulfilling the linearity conditions of Section
4.3.2.10. If full linearization (including products with the complex-valued ampli-
tude) of the signal model is not possible, but the linearity conditions are fulfilled,
the LSQ estimation scheme or a noncoherent Kalman filter (see next Section 4.5.4)
are still optimal. If the linearity conditions are also not fulfilled, nonlinear filters
might be used.
Nonlinear filtering for navigation signal processing is a topic of ongoing re-
search. A tutorial on nonlinear and non-Gaussian filters was written by Arulam-
palan [40]. Nonlinear filters are Bayesian filters and use Monte Carlo methods to
evaluate Bayesian estimation integrals like (4.171). Particle filters have been used
by Closas for static multipath scenarios [41]. Dynamic multipath scenarios have
been presented by Lentmaier [42]. In the latter work, it is evident how the particle
filter combines the non-Gaussian parameter probability density function of the dif-
ferent epochs and circumvents the linearization problem. A demonstration of the
practical use of nonlinear filters or a comparison with linear filters is not known to
the author.
A related problem is the transition between different discrete states in an esti-
mation filter. Most importantly, the number of multipath signals has to be estimated
correctly to set up the right signal model. Bayesian methods can be used for this
purpose [42].
Section 4.4 showed that categories 1 and 2 are equivalent if the correlator
values represent a sufficient statistics. Category 1 and 2 are coherent Kalman fil-
114 Signal Estimation
ters because the input data depends on the carrier phase, whereas a category-3
(noncoherent) Kalman filter may operate with code-phase and Doppler (and even-
tually with amplitude) discriminator values.
A coherent Kalman filters needs to generate predicted complex-valued ampli-
tudes, which are used to set up the Kalman filter design matrix of the current epoch.
By contrast, a noncoherent Kalman filter based on LSQ discriminators uses the com-
plex-valued amplitude estimate based on the current epoch’s samples to set up the
LSQ design matrix. From the theoretical point of view, this is the main difference
among category 1/2 and 3. The extent to which the predicted complex-valued ampli-
tude is more accurate than the estimated complex-valued amplitude depends primar-
ily on the signal characteristics (data/pilot) and dynamics (including clock jitter), the
duration of the coherent integration and on the availability of aiding (IMU) data.
A coherent Kalman filter is optimal (e.g., squaring-loss free) but difficult to
realize for low signal power [45]. Note that the squaring loss is irrelevant for high
signal power. For GNSS receivers, predicting the complex-valued amplitude—in
situations where carrier phase estimation is not possible—requires at least a cm-
accurate pseudorange prediction that needs a stable receiver oscillator (OCXO),
a navigation-grade IMU, and a stable propagation channel (e.g., no atmospheric
scintillations). Furthermore, predicting the carrier phase requires pilot signals or a
data-bit wipe-off functionality. A more practical approach to incorporate an equiv-
alent amount of aiding information in a noncoherent Kalman filter is to increase
the coherent integration time to macroscopic lengths (a few seconds) and to use
the aiding information within the discriminators only. Effectively, the aiding infor-
mation compensates for nonlinear line-of-sight dynamics and allows for using the
short-period signal model over longer time spans. Macroscopic long-integration
times virtually eliminate the squaring loss.
The use of the aiding data is simpler within the discriminators than within a
coherent Kalman filter [46]. Only the second-order line-of-sight dynamics (e.g., the
acceleration) need to be compensated over the integration interval, and each inter-
val is independent. By contrast, the aiding information for the Kalman filter needs
to be stable at zeroth order over the whole period of operation. Both methods—a
coherent Kalman filter or a long-coherent integration—predict the carrier phase of
the line-of-sight signal and thereby mitigate contributions from multipath signals,
which have a different carrier-phase behavior if the antenna is moving. This may
involve predetection multipath suppression or a synthetic antenna array [47].
Category-A Kalman filters are used for independent channel tracking and cat-
egory-B filters are used for vector tracking. A category B/1 Kalman filter would be
well-suited to realize an intrasystem interference-cancellation receiver. Finally, it
should be noted that Kalman filters of all categories allow carrier-phase estimation
(see Section 4.3.3.4).
The squaring loss introduced in Section 4.3.2 decreases the accuracy of high-rate
pseudorange LSQ estimates compared to the CRLB (considering the carrier phase as
a useful parameter). The squaring loss is caused by estimation errors of the complex-
4.6 Squaring Loss Revisited 115
valued signal amplitude, which affect the setup of the design matrix. The squaring
loss is intrinsically coupled to the underlying LSQ estimation scheme, which jointly
estimates the complex-valued signal amplitude together with the code phase and
Doppler. The complex-valued amplitude is treated as a nonrandom parameter.
By contrast, if the carrier phase is treated as a nuisance parameter, none of the
relevant bounds (e.g., the MCRLB or the ACRLB) is affected by a term that would
correspond to the squaring loss, as has been shown in Sections 4.3.2.5 and 4.3.8.
Therefore, further investigations shall be carried out in this section to clarify the
question of whether the squaring loss is a specific artifact of LSQ estimation or
if it is more fundamental. This is done in the following section by a mathemati-
cally exact treatment of the carrier phase as a nuisance parameter, which gives the
true CRLB (TCRLB). A numerical evaluation of the TCRLB later shows that it is
affected by the squaring loss, proving that the LSQ scheme is an optimal estimator
(i.e., a MVUE).
To evaluate the TCRLB under the assumption that the carrier phase is a nui-
sance parameter, a simplified signal model shall be used that assumes zero Doppler.
Based on (4.44), the following signal model shall be assumed
a
rµ (q, ξ ) = ac(t µ - τ )exp{-iϕ} q= , ξ = (ϕ) (4.175)
τ÷
where the code phase t and the real-valued amplitude a are considered as useful
parameters q to be estimated, and the carrier phase j is treated as a nuisance pa-
rameter ξ. For the carrier phase, we assume that it is uniformly distributed between
[0, 2π].
The probability density function of the signal samples is first expanded into
1 1 L 2
pq,ξ (s) = L
exp - sµ - rµ (q, ξ)
(2π ) 2 µ =1
(4.176)
L L
1 1 2 1 2
= L
exp - sµ - rµ(q, ξ ) + Re{aeiϕ + i δ B}
(2π ) 2 µ =1
2 µ =1
represent the magnitude and phase of the correlation value of the received samples
with the signal at baseband.
According to Section 4.2.2, the probability density function of the signal sam-
ples, parameterized only by the useful parameters, is given by averaging (4.176)
over all carrier phase values; that is,
116 Signal Estimation
2π
1
pq (s) = pq, ξ (s) = pq, ξ (s)dϕ
ξ 2π ϕ = 0
(4.178)
1 1 L 2 La2
= exp - s µ - + log(I0 (aB))
(2π )L 2 µ =1 2
Here, Im denotes the mth-order modified Bessel function of first kind, defined
as
2π
exp{imx + x cos(ϕ)}dϕ = 2π Im (x) (4.179)
ϕ =0
aI1(aB) B
log pq (s) = log(I0 (aB)) = (4.181)
τ τ I0 (aB) τ
B 1 L
=- (Sµ c(t µ - τ )Sν c (t ν - τ ) + S µ c (t µ - τ )Sν c(t ν - τ ))
τ 2B µ ,ν =1
(4.182)
L L
1
=- Re Sµ c(t µ - τ ) Sν c (t ν - τ )
B µ =1 ν =1
Overall, the TCRLB for the code phase under the assumption that the real-
valued amplitude is constant and that the carrier phase is a uniformly distributed
nuisance parameter is given as
-1
2
aI1(a P ) Re{PD}
TCRLB(τ ) = ÷ (4.185)
I0 (a P ) P ÷
N
In case the real-valued signal amplitude is jointly estimated with the code phase,
the two-dimensional Fisher information matrix has to be set up. Therefore, the
derivative with respect to the amplitude of the logarithm of (4.178) has to be com-
puted as
BI (aB)
log pq (S) = -aL + log(I0 (aB)) = -aL + 1 (4.186)
a a I0 (aB)
Then the Fisher information matrix is given as
2
aI1(a P ) Re{PD} P I1(a P ) aI1(a P ) Re{PD} ÷
÷ - aL ÷
I0 (a P ) P ÷ I0 (a P ) ÷ I0 (a P ) P ÷
Iτ , a = ÷
P I1(a P ) aI1(a P ) Re{PD} P I1(a P )
2 ÷
- aL ÷ - aL ÷ ÷
I0 (a P ) ÷ I0 (a P ) P I0 (a P ) ÷ ÷
N
(4.187)
The off-diagonal terms tend to average out due to the presence of the D-correlator,
whose distribution is symmetric around zero-code delay, whereas the P-correlator
values are centered around a. For large values of a, the approximation (4.180) can
be used, yielding
2
Re{PD} Re{PD} ÷
a ÷ ( P - aL)a
P ÷ P ÷
Iτ , a = ÷ (4.188)
a >>
Re{PD} 2 ÷
( P - aL)a ( P - aL) ÷
P
N
The Fisher information matrix is best evaluated using a Monte Carlo simula-
tion at either the signal level or the correlator level. A specific example will be per-
formed in the next section. The code-phase TCRLB can then be computed from the
first diagonal element of the inverse Fisher information matrix.
starting correlation point on the obtained estimates and single-iteration results are
compared to a converging iterative solution.
The simulation is based on the Gaussian double-pulse signal of Section 1.9.4.
CRLB: The code-phase CRLB (4.72), which is obtained by treating the car-
rier phase, code phase, Doppler, and amplitude as useful parameters to be
estimated.
TCRLB: The true code-phase CRLB—the first diagonal element of the in-
verse matrix of (4.187)—obtained by treating the carrier phase as a nuisance
parameter distributed uniformly in [0, 2π].
TCRLB, a = const.: The true code-phase CRLB (4.185) obtained by treating
the carrier phase as a nuisance parameter distributed uniformly in [0, 2π] and
assuming a known amplitude.
LSQ: The code-phase variance from an LSQ estimation (4.82). The correla-
tion points coincide with the true values.
All four procedures are based on an unbiased estimation scheme. The differ-
ences in the CRLBs originate from the different treatment of the individual param-
eters, which is summarized in Table 4.3. The TCRLB is the most realistic bound,
and the CRLB is of importance if the carrier phase is known from a priori or if the
carrier phase is estimated with high accuracy.
Figure 4.7 shows the different code-phase variances (actually the square root of
the variances) as a function of the signal power in a double logarithmic plot. The
LSQ variance and the two TCRLBs coincide very well and exhibit the squaring
loss below around 32 dBHz. The CRLB does not show the squaring loss and ef-
fectively underestimates the influence of the unknown carrier phase on code-phase
estimation.
Two important conclusions can be drawn from Figure 4.7 for the assumed
exemplary navigation signal. First, the CRLB is too weak for low signal power
levels. One of the TCRLBs should be used. The TCRLBs provide more realistic
values (although they are significantly more difficult to calculate). Second, the LSQ
estimate achieves the TCRLB and is thus an optimal estimator, even for low signal
4.8 Discussion
This chapter presented and contrasted different theoretical principles to obtain po-
sition estimates from a received navigation signal: direct position estimation of
Section 4.2.3, MLE or LSQ pseudorange estimation of Section 4.3, minimum mean-
squared error estimation of Section 4.5.1, or a Kalman filter operating on a suffi-
cient statistic of Section 4.5.2.
When comparing those theoretical principles to what is actually done in many
GNSS navigation receivers for high-rate pseudorange estimation, it is interesting
to note that normally none of these theoretical principles is directly applied. In-
stead, the receiver design is mostly based on an approximated single-iteration LSQ
approach and the first derivative of the baseband signal is approximated by an
early-minus-late replica difference. It is also common practice to determine mul-
tiple correlation values centered near the true values (e.g., a sufficient statistic) and
then to construct code-phase, Doppler, carrier-phase, and amplitude estimators that
are insensitive to certain effects, e.g., multipath, false-locks, carrier-phase loss, and
many more. All those engineering approximations considerably improve the overall
receiver performance, at the cost of an increased variance and the possible introduc-
tion of biases.
If, by contrast, truly minimum variance and unbiased estimates are desired, the
receiver should estimate high-rate pseudorange parameters as follows. First, the
correlation point should be within the linearity region. Second, the LSQ estimation
scheme of Section 4.3 should be used with a single iteration. The LSQ estimation
scheme itself is well-implementable for a GNSS SDR because the LSQ computational
processing demands are negligible compared to the signal correlation (at least for
the CPUs considered in chapter 3). The numerical evaluation of the TCRLB showed
that the LSQ discriminator is optimal. Ensuring that the correlation point follows
the true values requires a sophisticated receiver design, especially if the targeted
operation scenario of the receiver does not allow independent channel tracking (see
Section 4.3.3.1). Vector tracking, along with aiding information (e.g., IMU, stable
clock, map-matching) is useful. Furthermore, the number and the approximate de-
lays of possibly present multipath signals is valuable to overcome the singularity for
near-range multipath mentioned in Sections 4.3.6 and 4.3.7. Section 8.3 introduces
a multipath-estimating discriminator and gives practical implementation aspects.
If the relation between position parameters and pseudorange parameters is suf-
ficiently linear (as is the case for GNSS signals), the optimality property of the LSQ
pseudorange estimates transfers to the position estimates if all parameter correla-
tions are properly accounted for.
As long as no navigation problem with high dynamics and high signal power
is considered, the LSQ equations should not be iterated and the LSQ estimates
are based on one iteration for the chosen correlation point. The code phase and
Doppler of the correlation point typically derive from a predicted value based on
the previous estimates. By contrast, the complex-valued amplitude is irrelevant for
the correlation point, but is needed to set up the design matrix. A predicted com-
plex-valued amplitude from previous estimates gives a coherent estimation scheme;
estimating the complex-valued amplitude from the current epoch’s samples gives a
noncoherent scheme.
4.8 Discussion 125
If the correlation point cannot be expected to be within the linear region, the
proposed LSQ scheme produces suboptimal estimates. Suboptimal estimates will
also be produced if the number of multipath signals is incorrectly estimated. Both
situations should not occur for high signal-to-noise ratios. If they occur, Bayesian
techniques, such as (sequential) Monte Carlo methods, may still be optimal because
they do not require fulfilled linearity conditions. An alternative to Bayesian tech-
niques is the increase of the coherent integration time. The increased integration
time results in a synthetic antenna aperture, thereby already mitigating multipath
signals. The long integration increases the energy of the received signals and the
LSQ scheme might again be optimal. Pilot signals, short-term stable oscillators, a
MEMS IMU, or difference correlators may help to increase the integration time.
References
[1] Poor, H. V., An Introduction to Signal Detection and Estimation, New York: Springer,
1988.
[2] Zhou, G., and G. B. Giannakis, “Harmonics in Gaussian Multiplicative and Additive Noise:
Cramér–Rao Bounds,” IEEE Trans. Signal Processing, Vol. 43, 1995, pp. 1217–1231.
[3] Lehmann, E. L., Testing Statistical Hypothesis, 2nd ed., New York: Wiley, 1986.
[4] Porat, B., Digital Processing of Random Signals, Theory & Methods, Englewood Cliffs,
NJ: Prentice-Hall, 1994.
[5] D’Andrea, A. N., U. Mengali, and R. Reggiannini, “The Modified Cramér–Rao Bound and
Its Application to Synchronization Problems,” IEEE Trans. Commun., Vol. 42, 1994, pp.
1391–1399.
[6] Gini, F., R. Reggiannini, and U. Mengali, “The Modified Cramér–Rao Bound in Vector
Parameter Estimation,” IEEE Trans. Commun., Vol. 46, 1998, pp. 52–60.
[7] Moeneclaey, M., “On the True and the Modified Cramer-Rao Bounds for the Estimation of
a Scalar Parameter in the Presence of Nuisance Parameters,” IEEE Trans. Commun., Vol.
46, 1998, pp. 1536–1544.
[8] Closas, P., et al., “Bayesian Direct Position Estimation,” Proc. 21st International Techni-
cal Meeting of the Satellite Division of the Institute of Navigation (ION-GNNS) 2008,
Savannah, GA, September 16–19, 2008, pp. 183–190.
[9] NXP Software, “NXP SnapSpot GPS Technology and JOBO photoGPS Capture a Location
in an Instant,” http://www.software.nxp.com/?pageid=139, 2007.
[10] Brown, A., P. Brown, and J. Griesbach, “GeoZigBee: A Wireless GPS Wristwatch Track-
ing Solution,” Proc. 19th Int. Technical Meeting of the Satellite Division of the In-
stitute of Navigation (ION-GNNS) 2006, Fort Worth, TX, September 26–29, 2006,
pp. 2883–2888.
[11] Blewitt, G., “GPS Data Processing Methodology: From Theory to Applications,” in GPS for
Geodesy, pp. 233–270, Teunissen, P. J. G., and A. Kleusberg, (eds.), New York: Springer,
1998.
[12] Biberger, R., Error Modelling of Pseudolite Signal Reception on Conducting Aircraft Sur-
faces, University FAF Munich, Werner-Heisenberg-Weg 39, D-85577 Neubiberg, http://
www.unibw.de/unibib/digibib/ediss/bauv, 2006.
[13] Varanasi, M. K., and B. Aazhang, “Multistage Detection in Asynchronous Code-Division
Multiple-Access Communications,” IEEE Trans. Commun., Vol. 38, 1990, pp. 509–519.
[14] van Dierendonck, A. J., P. Fenton, and T. Ford, “Theory and Performance of Narrow Cor-
relator Spacing in a GPS Receiver,” NAVIGATION, Journal of The Institute of Naviga-
tion, Vol. 39, No. 3, 1992, pp. 265–283.
126 Signal Estimation
[15] Misra, P., and P. Enge, Global Positioning System: Signals, Measurements, and Perfor-
mance, 2nd ed., Lincoln: Ganga-Jamuna Press, 2006.
[16] Pany, T., and B. Eissfeller, “Code and Phase Tracking of Generic PRN Signals with Sub-
Nyquist Sample Rates,” NAVIGATION, Journal of The Institute of Navigation, Vol. 51,
No. 2, 2004, pp. 143–159.
[17] Spilker, J. J., Jr., “GPS Signal Structure and Theoretical Performance,” in Global Position-
ing System: Theory and Applications, Vol. I, pp. 57–120, Parkinson, B. W., and J. J. Spilker,
(eds.), Washington, D.C.: American Institute of Aeronautics and Astronautics Inc., 1996.
[18] Rife, D., and R. Boorstyn, “Single Tone Parameter Estimation from Discrete-Time Obser-
vations,” IEEE Trans. Information Theory, Vol. 20, No. 5, 1974, pp. 591–598.
[19] Pany, T., and B. Eissfeller, “Use of a Vector Delay Lock Loop Receiver for GNSS Signal
Power Analysis in Bad Signal Conditions,” PLANS 2006, IEEE/ION Position, Location
and Navigation Symposium, San Diego, CA, April 25–27, 2006, pp. 893–903.
[20] Kaplan, E. D., and C. J. Hegarty, (eds.), Understanding GPS: Principles and Applications,
2nd ed., Norwood, MA: Artech House, 2006.
[21] Thomas, J. B. Jr., inventor. California Institute of Technology, assignee, “Digital Signal
Processor and Processing Method for GPS Receivers,” U.S. Patent No. 4821294, 1989.
[22] Jaffe, R., and E. Rechtin, “Design and Performance of Phase-lock Circuits Capable of Near-
Optimum Performance over a Wide Range of Input Signal and Noise Levels,” IEEE Trans.
Information Theory, Vol. 1, 1955, pp. 66–76.
[23] van Dierendonck, A. J., “GPS Receivers,” in Global Positioning System: Theory and Ap-
plications, volume I, pp. 329–407, Parkinson, B. W., and J. J. Spilker, (eds.), Washington,
D.C.: American Institute of Aeronautics and Astronautics Inc., 1996.
[24] Tsui, J. B. Y., Fundamentals of Global Positioning System Receivers: A Software Approach,
2nd ed., New York: Wiley, 2005.
[25] Eissfeller, B., Schriftenreihe der Universität der Bundeswehr (55): Ein dynamisches Fehler-
modell für GPS Autokorrelationsempfänger. University of Federal Armed Forces Munich,
Werner-Heisenberg-Weg 39, D-85577 Neubiberg, 1997.
[26] Kazemi, P. L., “Optimum Digital Filters for GNSS Tracking Loops,” Proc. 21st Int. Tech-
nical Meeting of the Satellite Division of the Institute of Navigation (ION-GNNS) 2008,
Savannah, GA, September 16–19, 2008, pp. 2304–2313.
[27] Won, J. H., T. Pany, and B. Eissfeller, “Design of a unified MLE tracking for GPS/Gali-
leo software receivers,” Proc. 19th Int. Technical Meeting of the Satellite Division of the
Institute of Navigation (ION-GNNS) 2006, Fort Worth, TX, September 26–29, 2006,
pp. 2396–2406.
[28] Nunes, F. D., F. M. G. Sousa, and J. M. N. Leitao, “BOC/MBOC Multicorrelator Receiver
with Least-Squares Multipath Mitigation Technique,” Proc. 21st Int. Technical Meeting of
the Satellite Division of the Institute of Navigation (ION-GNNS) 2008, Savannah, GA,
September 16–19, 2008, pp. 652–662.
[29] Won, J. H., T. Pany, and B. Eissfeller, “Implementation, Verification and Test Results of a
MLE-Based F-Correlator Method for Multi-Frequency GNSS Signal Tracking,” Proc. 20th
Int. Technical Meeting of the Satellite Division of the Institute of Navigation (ION-GNNS)
2007, Fort Worth, TX, September 25–28, 2007, pp. 2237–2249.
[30] Spilker, J. J. Jr., “Fundamentals of Signal Tracking Theory,” in Global Positioning System:
Theory and Applications, Vol. I, pp. 245–328, Parkinson, B. W., and J. J. Spilker, (eds.),
Washington, D.C.: American Institute of Aeronautics and Astronautics Inc., 1996.
[31] Alban, S., D. M. Akos, and S. M. Rock, “Performance Analysis and Architectures for INS-
Aided GPS Tracking Loops,” Proc. Institute of Navigation National Technical Meeting
(ION-NTM) 2003, San Diego, CA, January 22–24, 2003, pp. 611–622.
[32] Landis, D., et al., “A Deep Integration Estimator for Urban Ground Navigation,” PLANS
2006, IEEE/ION Position, Location and Navigation Symposium, San Diego, CA, April
25–27, 2006, pp. 927–932.
4.8 Discussion 127
Signal Detection
pd = F(S) S
S : signal parameters comply to H1 (5.1)
pfa = F(S) S
S : signal parameters comply to H0 (5.2)
Different detection schemes can be chosen that are either optimized according
to certain criteria described in Section 5.1.1 or follow an engineering approach
based on the ML estimation of Section 5.7. The latter approach also gives esti-
mated signal parameter values that can be used as starting values for an estimation
algorithm.
The integral is performed over all admissible values for x and the a priori dis-
tribution p(x) of the parameters x has to be known. The parameters can either be
position parameters or high-rate pseudorange parameters, as discussed in Section
5.2. In the simple hypothesis-testing approach, we test against a distribution that is
valid if no signal is present (hypothesis H0). For both hypotheses, we assume that
the noise parameters (e.g., its variance) are known; then, the sample distribution
under H0 does not depend on any parameter. The simple hypothesis approach is
summarized as
H1 pH1 (S)
(5.4)
H0 pH0 (S)
pH1 (s)
L(s) = >γ (5.5)
pH0 (s)
5.1 Detection Principles 131
( )
pH1 S x px (S) (5.6)
L(s) =
(
pH1 s xˆ H1 , ξˆ H1 )>γ
( )
pH0 s ξˆ H0
(5.8)
Here, the parameters have been divided into x and ξ, where x refers to param-
eters of the signal to be detected and ξ refers to parameters related to additional sig-
nals or to noise. The symbol ^ indicates that the ML estimates are used to evaluate
the sample distribution (see Section 4.2.3). The ML estimates for ξ obtained under
the assumption H0 or H1 are, in general, different. This detection approach also
5.3 Preprocessing 133
provides the estimated parameter values x̂ that can, in turn, be used for starting a
signal estimation algorithm.
Similar to Sections 4.2.3 and 4.2.4, the detection problem can be formulated in
terms of generalized position parameters or in terms of generalized pseudorange
parameters. In the first case, the navigation signal is defined on the position do
main, whereas, in the latter case, the navigation signal is defined on the pseudo
range domain.
5.3 Preprocessing
The first two points are more technical but may introduce losses in the effective
signal power due to implementation constraints. Those effects are not discussed
here. By contrast, filtering is of special importance because the immense computa-
tional load can be drastically reduced if the signal bandwidth can be reduced. This
allows for using a lower sample rate. For example, a GNSS signal acquisition algo-
rithm exists that makes use of only a fraction of the full signal bandwidth. Typically
this occurs when the spectral main lobe is used to acquire a BPSK signal. The BOC
side-lobe acquisition algorithm effectively uses only one side lobe of a BOC signal
and discards the second lobe [5]. For a BOC(n,n) signal, the sample rate is reduced
by a factor of two, the signal power loss is around 3 dB, and the working correla-
tion function has a simple triangular shape compared to the full BOC autocorrela-
tion function. For direct GPS P(Y) code acquisition, it might be convenient to work
with only a fraction m of the bandwidth of the main lobe (the bandwidth being
centered around the carrier frequency). The work sample rate can then be reduced
by a factor of m and the signal power is also reduced by a factor of m. This signal
power loss can, however, be compensated by increasing the coherent integration
time by m. The same detection sensitivity is achieved, but during a signal acquisition
step a m-times-larger portion of all code-phase values is tested [6–8].
In Section 5.4, these preprocessing steps will not be mentioned explicitly, but
it should be noted that the considered navigation signal might have been subject to
one or more of the above-mentioned preprocessing steps.
The term clairvoyant detector was introduced in Chapter 6 of Kay’s book and refers
to a true composite hypothesis-testing problem [2]. The detector makes use of the
otherwise unknown signal parameter values. Generally, the clairvoyant detector is
of a hypothetical nature and does not exist in practice.
The clairvoyant detector has theoretical importance and may serve as a refer-
ence detector in the same sense as the Cramér–Rao lower bound defines a reference
for several estimation strategies. In addition, the clairvoyant detector can be used
if the unknown signal parameters are available from an external source. This is of
particular importance if vector tracking, as described in Section 4.3.3.3, is used.
To give an example of a clairvoyant detector, we assume that a single navigation
signal is present (or not) and the signal parameters (see Section 1.8) as well as the
Gaussian noise distribution parameters (see Section 1.7) are assumed to be known.
The sample distribution under H1 is
1 1 L
( )
pH1 s a , τ , ω ,ϕ =
(2π )L
exp -
2
s µ - a c(t µ - τ )exp{iω t µ - iϕ}
2
(5.9)
µ =1
5.4 Clairvoyant Detector for Uniformly Distributed Phase 135
2π
1 1 1 L
( )
pH1 s a , τ , ω =
(2π )L 2π
exp -
2 µ =1
2
sµ - a c(tµ - τ )exp{ iω t µ - iϕ} d ϕ
ϕ =0
2π
1 1 L 2 a2 L
2
= exp - s µ - c(t µ - τ )
(2π )L +1 ϕ = 0 2 µ =1 2 µ =1
L
+a Re{ sµ c(tµ - τ )exp{iω t µ - iϕ}} dϕ
µ =1
(5.10)
1 1 L 2 La 2
= exp - s µ -
(2π )L +1 2 µ =1 2
2π L
exp a Re{ sµ c(tµ - τ )exp{iω t µ - iϕ}} dϕ
ϕ =0 µ =1
1 L La 2
=
1
(2π )L
exp -
2 µ =1
2
sµ -
2
I0 a P(τ ,ω ) L ( )
with the definition for the correlator value P(τ,ω) as
L
1
P(τ , ω) = sµ c(tµ - τ )exp{iω tµ } (5.11)
L µ =1
1 L La 2
p (s)
exp -
2 µ =1
2
sµ -
2
I0 a P(τ , ω) L( )
L(s) = H1 =
pH0 (s) 1 L 2
exp - sµ (5.12)
2 µ =1
La 2
= exp -
2
(
I0 a P(τ , ω) L )
Because a > 0 and I0 (the modified-Bessel function of first kind and order zero)
is a monotone increasing function, the clairvoyant detector decides H1 if
2
P(τ , ω ) > γ (5.13)
136 Signal Detection
1 L (5.14)
= N µ Nυ c(tµ - τ )c (tυ - τ )exp{iω (t µ - t υ )}
L µ ,υ =1
N
2 L 2
= c(t µ - τ ) = 2
L µ =1
Here, we assume that the assumption of Section 1.8.1 holds true. According
to Appendix A.4.6, |P(τ,ω)|2 follows a central chi-squared distribution with two
degrees of freedom and the false alarm rate is given as
( )
-γ
2
pfa = P P(τ , ω ) > γ | H0 = Qχ 2 ;2 (γ ) = e 2
(5.15)
Under hypothesis H1, P(τ,ω) retains the same variance but the squared magni-
tude of its mean value equals La 2 because
2
L
2 1
H1 : P(τ , ω) = (a c (t µ - τ )exp{-iω tµ - iϕ} + Nµ )c(tµ - τ)exp{iω tµ }
L µ =1
2
L
1 (5.16)
= (a c (t µ - τ )c(t µ - τ)exp{-iϕ } + N µ c(tµ - τ )exp{iω tµ })
L µ =1
2
L
1
= La exp{-iϕ } + N µ c(t µ - τ )exp{iω tµ }
L µ =1
( 2
)
pd = P P(τ , ω ) > γ | H1 = Qχ 2 ;2,La 2 (γ ) = Q
χ 2 ;2,
2LC / N0 (γ )
fs
(5.17)
= Qχ 2 ;2,2TcohC / N0 (γ ) = Qχ 2 ;2,2TcohC / N0 (-2 log pfa )
5.5 Energy Detector 137
Here, we used the relationship between the signal amplitude a and the signal-
to-noise ratio C / N0 from Section 1.8.1.
The energy detector uses the sum of squared signal samples to determine if a signal
is present. The averaged sum of the squared signal samples is an estimate of the
received signal energy. The energy detector is eventually the simplest detector that
can be used to detect the presence of a navigation signal. The energy detector’s
performance is comparably low. The design of the energy detector is not related
to any optimality criterion and it belongs to the class of simple hypothesis-testing
detectors.
The energy detector decides for H1 if
L
2
E= sµ >γ (5.18)
µ =1
for a suitable chosen threshold γ. Here, the Neyman–Pearson principle will be em-
ployed to determine the threshold.
Under the hypothesis H0, the energy E is the sum of L complex Gaussian
samples, whose real and imaginary components are each of variance one and zero
mean,
L
2
H0 : E = nµ (5.19)
µ=1
γ - 2L
pfa = P(E > γ | H0 ) = Qχ 2 ;2L (γ ) Q ÷ (5.20)
4L
Equation (5.20) indicates that the chi-squared probability density function can
be well approximated by the right-tail probability function of the normal distribu-
tion because of the usually large number of involved samples.
Under the hypothesis H1, the energy E can be written as
L
2
H1 : E = a c(t µ - τ )exp{iω tµ - iϕ} + nµ
µ =1
L
= (Re{a c(t µ - τ )exp{iω tµ - iϕ}} + Re{nµ })2 + (5.21)
µ =1
L
+ (Im{a c(t µ - τ )exp{iω tµ - iϕ}} + Im{nµ })2
µ =1
138 Signal Detection
The detection probability is given by (5.23) and can be well approximated by the
right-tail probability function of the normal distribution due to the usually large
number of involved samples
γ - 2L - 2TcohC / N0
pd = P(E > γ | H1) = Qχ 2 ;2L,2TcohC / N0 (γ ) Q (5.23)
4L + 8TcohC / N0 ÷
In this section, a Bayesian detector will be derived; this requires the a priori distribu-
tions of the signal parameters. Particular assumptions will be made that allow for
the derivation of a partly closed expression for the detector. The Bayesian detector
integrates all parameters out and can be considered a simple hypothesis-testing
detector. It will be shown that, for a uniform code-phase and Doppler a priori dis-
tribution, the Bayesian detector equals the generalized likelihood-ratio detector of
Section 5.7.
We assume that the carrier phase is uniformly distributed between 0 and 2π.
The sample probability density function under H1 has already been derived for this
case in Section 5.4 and is given by (5.10). The Bayesian detector is based on the
likelihood ratio (5.12), but in addition to the carrier phase, the real-valued ampli-
tude a , the code phase τ, and the Doppler ω also have to be integrated out. This
integration can be directly performed with the likelihood ratio because the sample
distribution under H0 is independent of the signal parameters. The likelihood ratio
of the Bayesian detector is given as
pH1 (s)
L(s) =
pH0 (s)
(5.24)
La 2
= p(a , τ , ω)exp -
2
(
I0 a P(τ , ω) L da dτ dω )
a ,τ , ω
The detector is based on the same correlator value P(τ,ω) as in Section 5.4
L
1
P(τ , ω) = sµ c(tµ - τ )exp{iω tµ} (5.25)
L µ =1
5.6 Bayesian Detector 139
The real-valued amplitude a , the code phase τ, and the Doppler ω are assumed
to be statistically independent,
To further evaluate (5.24), specific expressions for the a priori distributions will
be given. The distributions aim at a detector for a multipath signal that is expected
to appear within a certain code phase and Doppler range. The a priori distribution
for code phase p(τ) and the Doppler p(ω) within the admissible range shall be left
unspecified. The multipath signal amplitude shall be Rayleigh distributed. The aver-
age power of the multipath signal is assumed to be known.
For an average received multipath signal power of 2s 2a the probability density
function of a Rayleigh fading signal is, according to Lee [9], given as
a a2
p(a ) = 2
exp - 2 (5.27)
2σ a 2σ a
Using a tool for analytical mathematics, integration of (5.24) can be carried out
explicitly, yielding
2
La 2 P(τ , ω) Lσ a2
p(a )exp - (
I0 a P(τ , ω) L da =
1
) exp (5.28)
a =0
2 1 + Lσ a2 2 1 + Lσ a2 ( )
The integrations with respect to the code phase or Doppler can only be done
numerically. The obtained likelihood ratio is given as
2
1 P(τ , ω) Lσ a2
L(s) = p(τ )p(ω)exp dτ dω (5.29)
1 + Lσ a2 (
2 1 + Lσ a2 )
The Bayesian detector shall be denoted by B and decides for H1 if B exceeds a
certain threshold
2
P(τ ,ω ) Lσ a2
B = p(τ )p(ω )exp dτ dω > γ (5.30)
(
2 1+ Lσ a2 )
According to Section 3.7 of Kay’s book, the value for the threshold is given by [2]
(
γ = 2π 1 + Lσ a2 ) ((CC10 - C00 )p(H0 )
01 - C11)p(H1)
(5.31)
where Cij is the cost if Hi is chosen but Hj is true. The symbol p(H0) denotes the a
priori probability that H0 is true and p(H1) denotes the a priori probability that H1
is true. Overall, the Bayesian detector minimizes the cost function
1
R= ( )
Cij p Hi H j p(Hj ) (5.32)
i, j =0
140 Signal Detection
where p(Hi | Hj) is the probability that Hi is detected and Hj is true. The Bayesian
detector is an optimum detector with respect to this criterion. Note that the
Neyman-Pearson expression (5.15) was based on pfa and not on any cost function.
If we assume that C10 = C01 = 1 and C00 = C11 = 0, the corresponding Bayesian
detector minimizes the sum of missed detections plus false detections
( ) ( )
R = p H0 H1 p(H1) + p H1 H0 p(H0 ) = (1 - pd )p(H1) + pfa p(H0 ) (5.33)
2
P(τ , ω)
B = p(τ )p(ω)exp dτ dω (5.34)
2
Lσ a �1 2
that is independent of the mean received multipath signal power. The Bayesian
detector uses contributions from all admissible correlation values, combines them
properly, and compares the result against a threshold. By contrast, generalized
likelihood-ratio detectors compare only the maximum correlation value against a
threshold. As a consequence, the Bayesian scheme does not provide any code-phase
or Doppler estimates.
For a uniform distribution of code phase and Doppler, the Bayesian detector
and a generalized likelihood detector are equivalent because the functional depen-
dency of P as a function of Doppler and code phase is fixed. Only the value of the
peak and its position in the code-phase/Doppler plane varies. Therefore, being given
the value of the peak of P allows the calculation of B, as long as the whole support
of P is located within the admissible code-phase and Doppler values. This depen-
dency can be expressed as
2
B = B max P(τ , ω) ÷ (5.35)
τ ,ω
1 1 L
(
pH1 s a1,τ ,ω = ) (2π )L
exp -
2 µ =1
sµ - ac(tµ - τ )exp{iω t µ}
2
(5.36)
The complex signal amplitude maximum (5.36) has already been obtained in
(4.63) and is given as
1 L
aˆ = c(t µ - τ )exp{-iω tµ }sµ (5.37)
L µ =1
2
1 1 L L aˆ L
(
pH1 s aˆ ,τ , ω =) (2π )L
exp -
2 µ =1
s µ
2
-
2
+ Re ˆ
a sµc(tµ - τ )exp{iω t µ }
µ =1
2
1 1 L 2 L aˆ
= L
exp - sµ - + Re{Laa
ˆ ˆ} (5.39)
(2π ) 2 µ =1 2
2 2
1 1 L 2 L aˆ 1 1 L 2 P(τ , ω )
= L
exp - sµ + = L
exp - sµ +
(2π ) 2 µ =1 2 (2π) 2 µ =1 2
2
τˆ ,ωˆ = arg max P(τ , ω) (5.40)
τ ,ω
(
pH1 s aˆ, τˆ , ωˆ ) = exp P(τˆ , ωˆ )
2
(5.41)
pH0 (s) 2
142 Signal Detection
2
P(τˆ , ωˆ ) > γ (5.42)
is fulfilled. The detector is basically identical to the clairvoyant detector, apart from the
difference that the clairvoyant detector uses known code-phase and Doppler values.
1 1
( )
pH1 s a , τ , ω =
(2π )2L (2π)2
L
2
sµ - a c( tµ - τ )exp{iω t µ - iϕ 1} + ÷
2π
1 µ =1 ÷
exp - ÷ dϕ 1dϕ 2
2 (5.43)
2÷
2L
ϕ1,ϕ 2 = 0
+ sµ - a c( tµ - τ )exp{iω t µ - iϕ 2 } ÷
µ = L +1
1 2L
=
1
(2π )2L
exp -
2 µ =1
2
sµ - La 2
( ) (
I0 a P1(τ , ω) L I0 a P2(τ , ω) L )
where a procedure identical to that in Section 5.4 has been used to integrate out the
carrier phases.
5.7 Generalized Likelihood-Ratio Detector 143
The likelihood ratio is evaluated using the ML estimates and the detector de-
cides for H1 if
pH1 (s aˆ , τˆ , ωˆ )
pH0 (s)
{ } ( ) (
= exp -Laˆ (τ ,ω )2 I0 aˆ P1(τˆ ,ωˆ ) L I0 aˆ P2(τˆ , ωˆ ) L > γ) (5.47)
is fulfilled. The detector (5.47) compares to the single coherent integrator of Section
5.7.1 or to the noncoherent detector of the next section. It is based on the same cor-
relator values but involves nonlinear operations to combine both correlator values
to form a test statistic.
where C(a ) and Q(a ) are functions of the amplitude and T(s) and h(s) are func-
tions of the signal samples [4]. The function Q must be strictly monotone. The
function T(s) acts as a sufficient statistic, as described in Section 4.4.1. The uniform
most-powerful test is then given by testing T(s) against a threshold γ. Note that the
144 Signal Detection
1 1 2L 2 2 1 2 2 2 (5.49)
2L
exp - sµ - La + La P1(τ , ω) + P2 (τ, ω)
(2π ) 2 µ =1 4
The sum of the squared correlator values is formed similar to Section 5.7.2.3
and is compared against a threshold γ.
For an infinitely large signal amplitude, the asymptotic form of the modified Bessel
function
1
I0 (x) exp{x} (5.51)
x� 2π x
1
exp -
1 2L 2
sµ - La 2
exp { La ( P (τ ,ω ) + P (τ ,ω) )}
1 2
(5.52)
2L 2 µ =1
(2π ) 2π a L P1(τ , ω) P2 (τ , ω )
C(a ) =
1
(2π )2L
{
exp -La 2
} 2π1a
Q(a ) = La
(5.53)
T (s) = P1(τ ,ω ) + P2 (τ ,ω )
1 2L 2 1
h(s) = exp - sµ
2 µ =1 L P1(τ ,ω ) P2 (τ , ω )
and a uniform most-powerful test given by T(s) tests the sum of the magnitudes of
the correlator values against a threshold γ. The test is written as
2 1
La P1,2 (τ , ω) = La 1+
N 4Tcoh (C / N0 )1,2 ÷
(5.55)
1
= 2Tcoh (C / N0 )1,2 1+
4Tcoh (C / N0 )1,2 ÷
whose minimum value equals ½ for a vanishing signal power. For this value,
the Bessel function evaluates to log I0(0.5) = 0.0615 and the approximation used
to obtain (5.49) evaluates to ¼ 0.52 = 0.0625. Thus, for low signal power values,
(5.49) seems to be a reasonable approximation.
The calculation can be loosely summarized as “if small signal amplitudes are
expected, the squared correlators shall be added.” If large signal amplitudes are ex-
pected, the absolute values of the correlators are added to obtain (for the respective
range of amplitude values) a uniform most-powerful test.
A consequence of this investigation is that, for an unknown signal amplitude
(plus the other assumptions made in Section 5.7.2.1), no uniform most-powerful
detector exists, because without a priori knowledge of the signal amplitude we do
not know how to add the correlator values for threshold comparison. It should be
mentioned that, for the case of a single signal segment, a uniform most-powerful
detector exists: the clairvoyant detector of Section 5.4.
1 1 L 2
pH1 (s a1, a2 , τ , ω) = 2L
exp - sµ - a1c(tµ - τ )exp{iω tµ }
(2π ) 2 µ =1
(5.56)
1 2L 2
exp - sµ - a2c(t µ - τ )exp{iω t µ }
2 µ = L +1
The complex signal amplitudes maximizing (5.56) are for a fixed code-phase
and Doppler value completely independent of each other and are obtained in anal-
ogy to (4.63) as
1 L
aˆ1 = c(t µ - τ )exp{-iω tµ }sµ
L µ =1
(5.57)
1 2L
aˆ 2 = c(t µ - τ )exp{-iω tµ }sµ
L µ = L +1
146 Signal Detection
The amplitude estimates are related to the correlator definitions of (5.44) via
2
2 Pk (τ , ω)
aˆ k = k = 1, 2 (5.58)
L
Similar to Section 5.7.1, the probability density function using the ML signal
amplitude estimates is obtained as
1 1 L 2 1 2 1 2
pH1 (s aˆ1, aˆ 2 , τ , ω) = 2L
exp - sµ - P1(τ , ω) - P2 (τ , ω) (5.59)
(2π ) 2 µ =1 2 2
and the ML code-phase and Doppler estimates are derived by maximizing the fol-
lowing expression
2 2
τˆ , ωˆ = arg max P1(τ , ω) + P2 (τ , ω) (5.60)
τ ,ω
pH1 (s aˆ1 , aˆ 2 , τˆ , ωˆ ) 1 2 1 2
= exp - P1(τˆ , ωˆ ) - P2 (τˆ , ωˆ ) (5.61)
pH0 (s) 2 2
and the detector decides for H1 if the sum of the squared correlator values exceeds
a properly chosen threshold; that is
2 2
P1(τˆ , ωˆ ) + P2 (τˆ , ωˆ ) > γ (5.62)
For completeness, the false alarm probability and the detection probability
shall be evaluated under the assumption that the true code phase and Doppler are
known. The evaluation is in line with the derivation for the clairvoyant detector of
Section 5.4 but the number of involved random variables is increased from 2 to 2ν.
The false alarm probability is given as
Under hypothesis H1, and using the definitions in Section 2.2.3 of Kay’s book
summarized in Appendix A.4.6, S belong to the noncentral chi-squared distribution
with 2ν degrees of freedom and a noncentrality parameter νLa′ 2. Therefore, the
probability of detection is
Both formulas can be found in many textbooks on GNSS receivers such as that of
van Dierendonck [10].
It should also be mentioned that the assumption of a constant code phase for all
signal segments needs to be modified if a very long overall signal time span is con-
sidered. In fact, the code phase may drift among the different segments and this has
to be accounted for in the noncoherent summation of (5.63). However, the drift is
uniquely determined by the Doppler and therefore, during Doppler estimation, the
drift is also estimated (only if code and carrier are generated coherently). Inclusion
of the code-phase drift is thus only of a technical nature and does not change the
theoretical performance of the detector.
L(s) =
(
pH1 s a1,τ 1, ω 1, a2, τ 2, ω 2 )
(5.66)
(
pH0 s a2 , τ 2 ,ω 2 )
where the sample probability density function under H1 is evaluated as
(
pH1 s a1,τ 1,ω 1, a2 , τ 2 ,ω 2 )
1 1 L 2
= L
exp - sµ - a1c1(tµ - τ 1)exp{iω 1t µ } - a2c2 ( tµ - τ 2 )exp{iω 2t µ }
(2π ) 2 µ =1
=
1
(2π )L
exp -
1 L
2 µ =1
2
sµ -
L
2
2
a1 + a2
2
( ) (5.67)
L L
exp Re{a1 sµ c1(tµ - τ 1)exp{iω 1t µ}} + Re{a2 sµc2 (t µ - τ 2 )exp{iω 2t µ}}
µ =1 µ =1
L
exp Re{a1a2c1(t µ - τ 1)c2 ( tµ - τ 2 )exp{i(ω 1 - ω 2 )t µ }}
µ =1
2 L
pH1 (s a1,τ 1,ω 1, a2 ,τ 2 ,ω 2 ) L a1
= exp - + Re{a1 sµ c1( tµ - τ 1)exp{iω1t µ}}
pH0 (s a2 ,τ 2 ,ω 2 ) 2 µ =1
(5.68)
L
exp {
Re a1a2c1(tµ - τ 1)c2 ( tµ - τ 2 )exp{i(ω 1 - ω 2 )t µ} }
µ =1
pH1 (s a1, τ 1 , ω1 , a2 ,τ 2 ,ω 2 )
pH0 (s a2 , τ 2 ,ω 2 )
2
(5.69)
L
L a1
= exp - + Re{a1c1(t µ - τ 1)exp{iω 1t µ }(s µ - a2c2 (t µ - τ 2 )exp{iω 2t µ })}
2 µ =1
A replica signal “2” is subtracted from the received signal samples and the dif-
ference signal is used to determine the possible presence of signal “1.” Following
the discussion of a single coherent integration of Section 5.7.1, an interference-
corrected correlator is defined as
L
1
P(τ 1,ω 1; τ 2 ,ω 2 ) =
L
(sµ - a2c2 (tµ - τ 2 )exp{iω 2t µ })c1(tµ - τ1)exp{iω1t µ } (5.70)
µ =1
The ML estimates for the code phase and Doppler of signal “1” are obtained
via maximizing
2
τˆ1, ωˆ 1 = arg max P(τ1, ω 1; τ 2 , ω 2 ) (5.71)
τ1,ω1
The presence of signal “1” is declared if the correlator exceeds a threshold like
2
P(τ 1, ω1;τ 2 , ω 2 ) > γ (5.72)
Provided that the signal “2” parameter estimates are accurate, this detector
achieves the same performance as the single coherent detector of Section 5.7.1.
where the c1(t) is used to transmit the pilot signal and c2(t) is used to transmit the
data signal. Here, a complex signal amplitude representation is used. The relative
power of the data component is given by α2 and the relative power of the pilot com-
ponent is given by β 2. The sum of the relative signal powers is unity
α2 + β2 = 1 (5.74)
which can be easily fulfilled by the system designers as the relative code phase and
Doppler between data and pilot is constant. The data content is coded into the phase
function ψ (d) that assigns a certain phase to the data signal as a function of the
broadcast data bit d, which is either +1 or 0. For a QPSK signal, a phase function
i d =1
ψ (d) = (5.76)
-i d = 0
is typically used. For the interplex modulation (as, for example, used for the Galileo OS
on E1) or for time-multiplexed signals (like the L2 civil signal), the phase function
1 d =1
ψ (d) = (5.77)
-1 d = 0
can be used.
150 Signal Detection
pH1 (s a, τ , ω)
1
1 1 L 2
= exp - sµ - a(β c1 (tµ - τ ) + αψ (d)c2 (tµ - τ ))exp{iω t µ}
2(2π )L d =0
2 µ =1
1
1 1 L 2 1 2
= exp - sµ - L a
2(2π )L d =0
2 µ =1 2
L
+ Re {asµ (βc1 (tµ - τ ) + α ψ (d )c2 (tµ - τ ))exp{iω t µ }}
µ =1
1 1 L 2 1 2
L
= exp - s µ - L a + Re{aβ sµ c1 (tµ - τ )exp{iω t µ}}
2(2π )L 2 µ =1 2 µ =1 (5.78)
1 L
exp Re{α aψ (d)sµ c2 ( tµ - τ )exp{iω t µ}}
d =0 µ =1
1 1 L 2 1 2
L
= exp - s µ - L a + Re{aβ sµ c1 (tµ - τ )exp{iω t µ }}
2(2π )L 2 µ =1 2 µ =1
L
exp Re{α aψ (0)sµ c2 (tµ - τ )exp{iω t µ }}
µ =1
L
+ exp Re{aαψ (1)sµ c2 (tµ - τ )exp{iω t µ }} ÷
µ =1
÷
For the QPSK phase function, this expression can be further simplified to
5.7 Generalized Likelihood-Ratio Detector 151
pH1 (s a, τ , ω ) =
1 1 L 1 L
Re {aβ sµ c1 (tµ - τ )exp{iω t µ }}
2 2
= L
exp - sµ - L a +
2(2π ) 2 µ =1 2 µ=1
L
2 cosh Re {iaα sµ c2 (t µ - τ )exp{iω t µ }}÷ (5.79)
µ =1
1 L
=
1
(2π )L
exp -
2 µ =1
2 1 2
sµ - L a + Re a Lβ P1(τ , ω)
2
{ } cosh(Re{ia }
Lα P2 (τ , ω ) )
and it is convenient to define two correlators, each for the data and for the pilot
signal as
L
1
P1(τ , ω) = sµ c1(tµ - τ )exp{iω tµ }
L µ =1
(5.80)
L
1
P2 (τ , ω) = sµ c2 (t µ - τ )exp{iω t µ }
L µ =1
d 1 β L α L
= - Laa + (aP1(τ , ω ) + aP1(τ , ω )) + log cosh (iaP2 (τ , ω ) - iaP2 (τ , ω ))÷ ÷
da 2 2 2
1
= - La +
β L
P1(τ , ω) -
( {
sinh Re ia Lα P2 (τ , ω) })iα L P (τ ,ω) = 0 !
(5.82)
( { }) 2
2
2 2 cosh Re ia Lα P2 (τ , ω )
( ( {
La = L βP1(τ ,ω ) - iα P2 (τ , ω )tanh Re ia L α P2 (τ , ω) })) (5.83)
152 Signal Detection
The ML estimate for the complex signal amplitude is formally a function of the
two correlator values being expressed as
An approximate solution for (5.83) is obtained by iteration. For the first itera-
tion we set P2 = 0 and obtain a first amplitude estimate as
β
a�0 (τ , ω) = P1(τ , ω) (5.85)
L
Inserting the first iteration into the right hand side of (5.83) yields a refined ap-
proximation expressed as
a�1(τ , ω ) =
1
L
( ( {
βP1(τ , ω ) - iαP2 (τ , ω)tanh Re ia�0 (τ , ω ) L α P2 (τ , ω) }))
(5.86)
=
1
L
( ( {
βP1(τ , ω ) - iαP2 (τ , ω )tanh Re αβ iP1(τ , ω )P2 (τ ,ω ) }))
The equation can be further iterated and converges well. In the following ex-
ample, we stop after the second iteration and use this value as a complex amplitude
ML estimate:
aˆ(τ , ω) a�1(τ , ω) (5.87)
pH1 (s a, τ , ω)
L(s a, τ , ω) =
pH0 (s)
(5.88)
1
2
2
= exp - L a + Re a Lβ P1(τ , ω){ } cosh(Re{ia Lα P2 (τ , ω)})
and the log-likelihood ratio using the complex amplitude ML estimate evaluates
to
1 2
{
log L(s τ , ω ) = - L aˆ (τ , ω) + Re aˆ (τ , ω ) Lβ P1(τ , ω ) +
2
}
(5.89)
( {
+ log cosh Re iaˆ (τ , ω ) Lα P2 (τ , ω ) })
The code-phase and Doppler ML estimates are obtained via maximizing
and the detector decides for H1 if the detector (in this case the log-likelihood ratio)
exceeds a suitable chosen threshold γ
Analyzing (5.86), the detector softly decides between the two data-bit possibili-
ties based on the tanh function. For high signal power levels, the tanh evaluates to
either +1 or –1, depending on the true data-bit value. Data and pilot information
are then properly combined and the data bit is removed for the complex ampli-
tude estimate. By contrast, if the signal power is low, the tanh evaluates to a small
number and the complex amplitude estimate is based only on the pilot correlator
value P1.
The presented method of integrating out the data-bit distribution can also be
applied to design signal tracking algorithms. The resulting code and phase discrimi-
nators are optimal and are discussed in an article by Wang [14].
pH1 (s a, τ , ω , d)
1 1 L 2
= L
exp - sµ - a(βc1(tµ - τ ) + αψ (d)c2 (tµ - τ ))exp{iω t µ }
(2π ) 2 µ =1
1 1 L 2 1 (5.92)
= L
exp - sµ - L a 2
(2π ) 2 µ =1 2
L
+ Re sµ a(βc1(tµ - τ) + αψ (d)c2 (tµ - τ))exp{iω t µ }
µ =1
(
L s a, τ ,ω , d = ) (
pH1 s a, τ ,ω , d )
pH0 (s)
L
1
= exp - L a2 + Re sµ a(βc1 (tµ - τ ) + α ψ (d )c2 (t µ - τ ))exp{iω t µ}
2 µ =1
(5.93)
and the log-likelihood ratio to
(
log L s a, τ ,ω , d )
1 L (5.94)
= - L a2 + Re sµ a(βc1( t µ - τ ) + αψ (d)c2 (tµ - τ ))exp{iω t µ}
2 µ =1
L
1
P(τ , ω , d) = sµ (β c1(tµ - τ) + αψ (d)c2 (tµ - τ ))exp{iω tµ } (5.95)
L µ =1
Similar to a single coherent integration of Section 5.7.1, the ML estimate for the
complex signal amplitude is obtained as
1
aˆ(τ , ω , d) = P(τ , ω , d) (5.97)
L
which can be inserted into the log-likelihood function (5.94), resulting in
1 1
P(τ , ω , d) + Re {P(τ , ω , d)P(τ , ω , d)} = P(τ , ω , d)
2 2
log L(s τ , ω , d) = - (5.98)
2 2
Based on this expression, the ML estimates for code phase, Doppler, and data-bit
value are obtained via maximization
2
τˆ ,ωˆ , dˆ = arg max log L(s τ, ω , d) = arg max log P(τ , ω , d) (5.99)
τ ,ω ,d τ , ω,d
and the detector decides for H1 if the squared correlator values exceed a threshold
γ, which is written as
2
P(τˆ ,ωˆ , dˆ ) > γ (5.100)
M
2 - pd k(1 - (1 - pfa ) G ) + 1
TACQ = ÷ NGDW (5.101)
2 pd ÷ MG
156 Signal Detection
Here, k is a penalty factor to characterize the time of the system to detect a false
acquisition. If a signal is falsely detected, the system is assumed to spend kDW sec-
onds to detect its error and then to continue with the normal acquisition procedure.
The mean acquisition time decreases with an increasing pd. However, an increased
pd is normally accompanied by a longer dwell time DW, which compensates for the
decrease.
Here, qd(γ ) is the single-bin probability density function of the detector under
H1, depending on a variable threshold γ . The symbol γ is the detector threshold
used to obtain the single-bin detection probability
pd = qd (γ )dγ (5.104)
γ =γ
log(1 - pfa )
S =N (5.105)
log(1 - π fa )
Here, N denotes the number of grid points per set involved in the above proce-
dure. This equation is directly derived from (5.102).
The effective number of bins used to calculate the mean acquisition time or the
system-detection probabilities are then given as
NG MG
NG = , MG = (5.106)
S S
The higher the correlation between the bins, the lower the number of effective
bins and the smaller the increase in the false-detection probability.
τ 0 + δτ / 2 ω 0 + δω / 2
1
Lτ , ω = 20log10 P(τ , ω) N
dτ dω ÷ (5.107)
δτδω P(τ 0 ,ω 0 ) ÷
N τ = τ0 - δτ / 2ω =ω 0 - δω / 2
where τ0, ω0 denote the true code-phase and Doppler values. The symbols δτ, δ ω
denote the code-phase and Doppler spacing. In the limit of infinitesimal spacing, the
code-phase and Doppler loss is 0 dB.
The code-phase and Doppler loss can be seen as an implementation loss and
effectively reduces the available power when detecting the signal.
The previous discussion clearly demonstrated that by using a long coherent integra-
tion time, nearly arbitrarily weak signals can be detected. The signal information
contained in each received sample is optimally exploited. For most practical detec-
tion implementations, the use of a coherent integration implies that the short-period
signal model of Section 1.8 is valid, which corresponds to a linear carrier-phase
dependency on time (e.g., a constant Doppler is required). In principle, coherent in-
tegration can also be performed over nonconstant Doppler signals, but this requires
external aiding information that provides the relative antenna motion—a so-called
m-trajectory—during the integration interval [17]. Long coherent integration also
requires pilot signals or a data wipe-off functionality. If a linear phase model is
considered, a number of physical effects limit the coherent integration time, spe-
cifically, transmitter and receiver clock jitter, nonlinear line-of-sight dynamics (e.g.,
user accelerations), or phase fluctuations in the propagation channel. Those effects
have been assessed, for example, in the articles by Sıçramaz Ayaz, López-Risueño,
and Niedermeier [17–19]. If a high-quality oscillator (e.g., OCXO) is used, if the
line-of-sight signal is received (and not a reflection), and if nonlinear line-of-sight
dynamics are not present or compensated for by an IMU, then the coherent inte-
gration time can be up to a few seconds. If a linear phase dependency can not be
assumed (and no aiding is available), then it is useful to split up the integration into
several shorter periods, assuming within each period a linear phase dependency. If
the carrier phases of each period are modeled as independent random variables, the
noncoherent integration scheme of Section 5.7.2 is optimal.
A more stringent requirement on the maximum duration of the coherent integra-
tion time is often imposed by computational limitations. In a first approximation the
number of bins to be searched increases linearly with the coherent integration time (if
the timing uncertainty is limited, the number of code phase bins is constant; the num-
ber of Doppler bins increases linearly). Furthermore, the dwell time within each bin
also increases linearly with the coherent integration time. By using sophisticated al-
gorithms, as described in Sections 9.5.5 and 9.5.6, the increase can be partly reduced
but, nevertheless, the computational burden is probably the most severe constraint.
A possibility to reduce the computational burden by reducing the number of
Doppler bins is to use coherent integration times that are below the maximum allow-
able value (i.e., multiple shorter coherent integrations are used instead of one long
5.10 Discussion 159
single coherent integration). The caveat of this method is a decreased sensitivity, usu-
ally named squaring loss. The squaring loss originates from the fact the correlation
values are squared before they are added to remove the carrier-phase dependency.
Squaring—as a nonlinear operation—reduces the overall performance compared to
the (linear) coherent integration. The squaring loss is occasionally attributed to a
reduced postdetection output signal-to-noise ratio value of the test statistics for the
squaring detector (5.65) compared to the coherent detector (5.17). The postdetec-
tion signal-to-noise ratio value can be used to compare different acquisition strate-
gies. The higher the signal-to-noise, the more sensitive is the detection scheme. In the
article by Borio, however, it has been shown that the output postdetection signal-to-
noise ratio (corresponding to the deflection coefficient) is unable to completely char-
acterize the acquisition performance; consequently, it should not be used to compare
different acquisition methods in a strict sense [20]. Instead, it is recommended to
work with the complete probability distribution and to define the squaring loss as
the C/N0 difference to achieve a certain probability of detection.
If the maximum coherent integration time cannot be exploited for computa-
tional limitations, differential acquisition schemes can be used. Differential acquisi-
tion schemes use, typically, half the coherent integration time (and thus reduce the
number of Doppler bins by two). They remove the carrier-phase dependency by a
multiplication of two correlator values, which are obtained by correlating timely
separated incoming signal sample batches; that is, an expression in the form
υ -1
S= Pn (τˆ , ωˆ )Pn +1(τˆ , ωˆ ) > γ (5.108)
n =1
is used instead of (5.63) as test statistics. It requires that two adjacent correlator
values depend similarly on the carrier phase to cancel the carrier-phase dependency
via multiplication. The expected value of each product is of zero mean in contrast
to a squared correlator value [used in (5.63)] which is of nonzero mean. Differential
correlation was first published in Zarrabizadeh and Sousa’s article and has been
introduced into satellite navigation in Park’s work [21, 22]. Theoretical investiga-
tions performed in articles by Ávila Rodríguez and Schmid demonstrate that the use
of differential correlation brings a gain of less than 3 dB compared to the squaring
scheme of Section 5.7.2.3 if the squaring scheme does not fully exploit the maximum
coherent integration time and uses the same (i.e., half of the maximum) integration
time as the differential scheme [23, 24]. The differential scheme can also be applied
by multiplying the nonadjacent correlator values with each other, as described by
Shanmugam [25]. The differential scheme is sensitive to data-bit transitions and, for
best performance, it is required that neither of the multiplicands is affected by a data-
bit transition. Losses caused by data-bit transitions are investigated in the works by
Schmid and Shanmugam [24, 25].
5.10 Discussion
This chapter presented several methods of how signal detection may occur in a nav-
igation receiver. It included “exotic” methods, such as the energy detector, which is
160 Signal Detection
easy to implement but does not yield approximate code-phase or Doppler values,
and detection in the position domain, which has high implementation complexity.
Established methods are based on signal detection in the pseudorange domain.
All those methods are characterized by their need to evaluate the correlation values
for a multitude of code-phase and Doppler values (called grid points). This evalua-
tion is computationally very demanding. However, especially for software receivers,
there exist a number of sophisticated frequency-domain methods that partly relieve
the computational demands. These will be discussed in Section 9.5.
Two different detection principles have been contrasted in this section: Bayesian
techniques and generalized likelihood-ratio detectors. Bayesian techniques generally
outperform generalized likelihood-ratio detectors or give at least the same perfor-
mance. The sensitivity gain of using Bayesian methods is, however, small (less than
1 dB for the data/pilot example) and depends on the characteristics of the available
a priori information. The more the a priori distribution deviates from a uniform
distribution, the higher the expected gain.
From a theoretical point of view, an optimum detection scheme does not exist;
that is, there is no formula for the test statistics, which give an optimum perfor-
mance independent of the true signal-parameter values. Only the clairvoyant detec-
tor is optimal, but this detector needs to know the true code-phase, Doppler, and,
eventually, amplitude values.
Overall, generalized likelihood-ratio detectors seem to give a reasonable per-
formance and their implementation is typically simpler than that of a Bayesian
detector. To design a good generalized likelihood-ratio detector the following points
should be kept in mind:
Shrink the search space to a minimum number of grid points based on all
information available to the navigation receiver.
Filter the signal to avoid fine code-phase grid spacings (see Section 5.3).
Look for an efficient implementation to evaluate the multitude of correlators
(see Section 9.5).
Find a good compromise among coherent integration time, number of non-
coherent integrations, and computational complexity (see Section 5.7.2.3).
Find a suitable acquisition strategy that balances the computational load and
simultaneously cycles optimally through the signals to be acquired.
Allow the algorithm to do a self-calibration to determine the threshold for a
given system false-detection probability.
References
[1] Poor, H. V., An Introduction to Signal Detection and Estimation, New York: Springer,
1988.
[2] Kay, S. M., Fundamentals of Statistical Signal Processing: Detection Theory, Englewood
Cliffs, NJ: Prentice Hall, 1998.
[3] Scharf, L. L., Statistical Signal Processing: Detection, Estimation, and Time Series Analysis,
Reading, MA: Addison-Wesley, 1991.
[4] Lehmann, E. L., Testing Statistical Hypothesis, 2nd ed., New York: Wiley, 1986.
[5] Betz, J. W., “Binary Offset Carrier Modulations for Radionavigation,” NAVIGATION,
Journal of The Institute of Navigation, Vol. 48, No. 4, 2001, pp. 227–246.
[6] Yang, C., J. Vasquez, and J. Chaffee, “Fast Direct P(Y)-Code Acquisition Using XFAST,”
Proc. 12th Int. Technical Meeting of the Satellite Division of the Institute of Navigation
(ION-GPS) 1999, Nashville, TN, September 14–17, 1999, pp. 317–321.
[7] Yang, C., et al., “Extended Replica Folding for Direct Acquisition of GPS P-Code and Its
Performance Analysis,” Proc. 13th Int. Technical Meeting of the Satellite Division of The
Institute of Navigation (ION-GPS) 2000, Salt Lake City, UT, September 19–22, 2000,
pp. 2070–2078.
[8] Li, H., M. Lu, and Z. Feng, “Mathematical Modelling and Performance Analysis for
Average-Based Rapid Search Method for Direct Global Position System Precision Code
Acquisition,” IET Radar, Sonar & Navigation, Vol. 3, No. 1, 2009, pp. 81–92.
[9] Lee, W. C. Y., Mobile Communications Engineering, New York: McGraw-Hill, 1982.
162 Signal Detection
[10] van Dierendonck, A. J., “GPS Receivers,” in Global Positioning System: Theory and Ap
plications, Vol. I, pp. 329–407, Parkinson, B. W., and J. J. Spilker, (eds.), Washington, D.C.:
American Institute of Aeronautics and Astronautics, Inc., 1996.
[11] Heinrichs, G., et al., “HIGAPS: A Large-Scale Integrated Combined Galileo/GPS Chipset
for the Consumer Market,” Proc. European Navigation Conference (ENC-GNNS) 2004,
Rotterdam, May 16–19, 2004.
[12] Verdú, S., Multiuser Detection, Cambridge, U.K.: Cambridge University Press, 1998.
[13] Glennon, E. P., et al., “Post Correlation CWI and Cross Correlation Mitigation Us-
ing Delayed PIC,” Proc. of the 20th Int. Technical Meeting of the Satellite Division
of the Institute of Navigation (ION-GNNS) 2007, Fort Worth, TX, September 25–28,
pp. 236–245.
[14] Wang, D., et al., “Optimum Tracking Loop Design for the New L2 Civil Signal Based
on ML Estimation,” Proc. 17th Int. Technical Meeting of the Satellite Division of the
Institute of Navigation (ION-GNNS) 2004, Long Beach, CA, September 21–24, 2004,
pp. 474 – 485.
[15] Borio, D., L. Camoriano, and L. Lo Presti, “Impact of Acquisition Searching Strategy on
the Detection and False Alarm Probabilities in a CDMA Receiver,” PLANS 2006, IEEE/
ION Position, Location and Navigation Symposium, San Diego, April 25–28, 2006,
pp. 1100–1107.
[16] Lozow, J. B., “Analysis of Direct P(Y)-Code Acquisition,” NAVIGATION, Journal of The
Institute of Navigation, Vol. 44, 1997, pp. 89–97.
[17] Niedermeier, H., et al., “Reproduction of User Motion and GNSS Signal Phase Signatures
Using MEMS INS and a Pedestrian Navigation System for HS-GNSS Applications,” Proc.
4th ESA Workshop on Satellite Navigation User Equipment Technologies, NAVITEC,
Noordwijk, the Netherlands, December 10–12, 2008.
[18] Sıçramaz Ayaz, A., T. Pany, and B. Eissfeller, “Performance of Assisted Acquisition of the
L2CL Code in a Multi-Frequency Software Receiver,” Proc. 20th Int. Technical Meeting of
the Satellite Division of the Institute of Navigation (ION-GNNS) 2007, Fort Worth, TX,
September 25–28, 2007, pp. 1830–1838.
[19] López-Risueńo, G., et al., “User Clock Impact On High Sensitivity GNSS Receivers,” Proc.
European Navigation Conference (ENC-GNNS) 2008, Toulouse, April 22–25, 2008.
[20] Borio, D., et al., “The Output SNR and Its Role in Quantifying GNSS Signal Acquisition
Performance,” Proc. European Navigation Conference (ENC-GNNS) 2008, Toulouse,
April 22–25, 2008.
[21] Zarrabizadeh, M. H., and E. S. Sousa, “A Differentially Coherent PN Code Acquisition
Receiver for CDMA Systems,” IEEE Trans. Commun., Vol. 45, 1997, pp. 1456–1465.
[22] Park, S. H., et al., “A Novel GPS Initial Synchronization Scheme Using Decomposed Dif-
ferential Matched Filter,” Proc. Institute of Navigation National Technical Meeting (ION-
NTM) 2002, San Diego, CA, January 28–30, 2002, pp. 246–253.
[23] Ávila Rodríguez, J. Á., T. Pany, and B. Eissfeller, “A Theoretical Analysis of Acquisition
Algorithms for Indoor Positioning,” Proc. 2nd ESA Workshop on Satellite Navigation
User Equipment Technologies, NAVITEC, Noordwijk, the Netherlands, December 8–10,
2004.
[24] Schmid, A., and A. Neubauer, “Performance Evaluation of Differential Correlation for
Single Shot Measurement Positioning,” Proc. 17th Int. Technical Meeting of the Satellite
Division of the Institute of Navigation (ION-GNNS) 2004, Long Beach, CA, September
21–24, 2004, pp. 1998–2009.
[25] Shanmugam, S. K., J. Nielsen, and G. Lachapelle, “Enhanced Differential Detection Scheme
for Weak GPS Signal Acquisition,” Proc. 20th Int. Technical Meeting of the Satellite Divi
sion of the Institute of Navigation (ION-GNNS) 2007, Fort Worth, TX, September 25–28,
2007, pp. 189–202.
[26] SiRF Technology, Inc., http://www.sirf.com, 2008.
chapter 6
Sample Preprocessing
163
164 Sample Preprocessing
α0 β 0 s β1
α1 β1 < s β 2
q(s) = (6.1)
�
α B-1 βB-1 < s βB
For a symmetric n-bit ADC, the threshold and output values are given as
2b - B + 1
αb = δ 0 b<B
2
β0 = -
(6.3)
2b - B
βb = δ 0<b<B
2
βB =
Here, δ represents the ADC input step width, which can be adjusted to opti-
mally fit the analog signal. The range ± D of analog input signal levels, which are
covered uniformly by the ADC, is given by half the number of output levels multi-
plied by the input step width:
B
D=δ (6.4)
2
The ADC output values are internally represented as integer numbers. Usually
the signal processing has no direct access to δ. If the signal processing works with
two-complement integers with a number of bits larger than log2(2B) (for example,
Chapter 9 describes algorithms working with 16 bits), it is reasonable to represent
the values using a scheme such as
where the expected value is computed with respect to the probability density func-
tion p(sm) of the sample amplitude.
6.1 ADC Quantization 165
S = aR + N (6.7)
Note that mathematical symbols with capital letters denote random variables,
whereas lower-case letters denote the realization of random variables. All signals
are assumed to be real-valued. The signal process is assumed to be a Gaussian
process. A Gaussian signal process is a reasonable assumption if, for example, the
superposition of all transmitted GNSS signals is considered. On the other hand, it
should be noted that the subsequent computations can also be carried out using
other probability density functions. For example, CW interference is modeled as a
(1 – r 2)–1/2 probability density function. The statistics of the Gaussian signal process
are given as
Rµ =0
R
Rµ R ν = Rrr (µ - ν) (6.8)
R
1 rµ2
p(rµ ) = exp -
2π 2
1 1 rµ
p(rµ , rν) = exp - (rµ rν)Q-1 (6.9)
2π det Q 2 rν ÷
1 Rrr (µ - ν)
Q= ÷
Rrr (µ - ν) 1
Nµ =0
N
(6.10)
N µ Nν = δ µ,ν
N
166 Sample Preprocessing
1 nµ2
p(nµ ) = exp -
2π 2
(6.11)
1 nµ2 nν2
p(nµ , nν ) = exp - -
2π 2 2
The signal power is given by α 2 and the corresponding C/N0 value as (see Sec-
tion A.2.2)
fs a2
C / N0 = (6.12)
2
The matched filter output operating on the quantized signal samples is given
as
L
t = q(sT ) × r = q(sµ )rµ (6.13)
µ=1
The relative SNR of the matched filter is defined as the ratio of the squared expected
value divided by the variance of the matched filter as
2
T
2 q(ST ) × R
R,N R,N
SNRrel = = (6.15)
(q(S )
var T 2 2
T
R,N )× R - q(ST ) × R
R,N
R,N
The relative SNR is related to the bit error rate and to the variability in time of
signal power estimates. The relative SNR stands in analogy to the definition used
in [7].
The absolute SNR of the matched filter is defined as the ratio of the squared ex-
pected value divided by the variance of the matched filter in the noise-only case as
2
q(ST ) × R
R,N
SNRabs = 2 (6.16)
T 2
(q(N ) × R) - q(NT ) × R
R,N R,N
The absolute SNR relates to the noncentrality parameters determining the ac-
quisition sensitivity described in Section 5.4.
6.1 ADC Quantization 167
Both SNR values are defined at baseband and no carrier is considered. Includ-
ing the carrier is not straightforward and, eventually, Monte Carlo methods must
be used [7]. As a consequence, the relationship between the SNR values and the
high-rate pseudorange variances can only be given for special cases. For example,
if a complex-valued baseband input signal is considered and the I-channel contains
the signal plus the noise and the Q-channel contains only the noise, then the vari-
ance of the estimated carrier phase is related to the absolute SNR.
L
q(ST ) × R = q(Sµ )Rµ
R,N R,N
µ =1
(6.17)
L L
= q(Sµ ) Rµ Rµ = � µ Rµ
R �0 R0
=L R
N R R R
µ =1 µ =1
where the conditional expected value with respect to the noise of a quantized sam-
ple under the assumption of a constant signal value is defined as
an expression that can be evaluated analytically for Gaussian noise. The result-
ing expressions are, however, lengthy and do not provide more insight; they are,
consequently, not given here. For non-Gaussian noise, the expression can also be
evaluated with numerical integration.
The conditional expected value with respect to noise can also be considered
as a random variable (actually a function of the random variable Rm) and is then
mathematically written as
In cases where no signal is present, the first- and second-order moments of the
quantized noise are defined as
B-1 βb+1
n�µ = q(Nµ ) = α b p(nµ )dnµ
N
b = 0 nµ = βb
(6.21)
B-1 βb+1
�
nµ2 = q(N µ )2 = α b2 p(nµ )dnµ
N
b = 0 nµ = βb
L
(q(ST ) × R)2 = q(Sµ )Rµ q(Sν )Rν
R,N R,N
µ ,ν =1
L L
� µ Rµ R
�ν Rν �
= R + Rµ2 Rµ2
R R
µ ,ν =1 µ =1
µ ν
(6.23)
L
=
µ ,ν =1
( R� R
µ µ
R
�ν Rν
R
R
+ Rµ Rν
R
�µ R
R �ν
R
�µ Rν
+ R
R
�ν
Rµ R
R )
µ ν
�
+ L R02 R02
R
The last line uses a relationship for the fourth moment of correlated Gaussian
random variables. Assuming the signal samples Rm and Rν are uncorrelated if |m – ν|
> L′, an auxiliary index λ = m – ν is introduced; and thus,
6.1 ADC Quantization 169
(q(ST ) × R)2
R,N
�0 R0 2 �
= (L2 - L) R + L R02 R02
R R
L L
+ Rν + λ Rν �ν +λ R
R �ν �ν + λ Rν
+ R �ν
Rν + λR
R R R R
ν =1 λ =- L ,
λ 0 (6.24)
L
�0 R0 2 � �λ R
�0 �λ R0 �0
= (L2 - L) R + L R02 R02 +L Rλ R0 R
R + R Rλ R
R R R R R
λ =- L ,
λ 0
L
�0 R0 2 � �λ R
�0 �λ R0 2
= (L2 - L) R + L R02 R02 + 2L Rλ R0 R
R + R
R R R R
λ =1
SNRrel =
2
�0R0
L2 R
= R
L
�0R0 2 � �λ R
�0 � λ R0 2
�0R0 2
(L2 - L) R + L R02R02 + 2L Rλ R0 R
R + R - L2 R
R R R R
R λ =1 (6.25)
2
�0R0
L2 R
= r
L
� �0R0 2
�λ R
�0 � λ R0 2
L R02R02 -L R + 2L Rλ R0 R
R + R
R R R R
λ =1
2
�0 R0
L2 R
R
SNRabs = L
� 2 2
L n02 R02 - L n�0 R0 R
+ 2L Rλ R0 R
n�λ n�0 R
+ n�λ R0 R
R λ =1
(6.26)
2 2 2
�0 R0
2
L R 2
L R �0 R0 �0 R0
L R
R R R
= = =
� � �
L n02 R02 Ln02 R02 n02
R R
� µ = aR µ
R
�
Rµ2 = 1 + a2R2µ
�0R0
R = a R02 =a
R R
�
R02R02
R
(
= 1 + a2R02 R02 ) R
= 1 + 3a2
(6.27)
a2 Rλ R0 �λ R
= R �0 � λ R0
=a R = a2Rrr (λ)
R R R
�
n02 =1
R
Under this assumption, idealized SNR values can be obtained, which serve as
comparison values for the quantized SNR values. The difference between idealized
SNR values and quantized SNR values expressed in decibels gives the quantization
loss.
The idealized relative SNR is given as
L2 a2 La2
SNRrel , ideal = L
= L
(6.28)
2 2 2 2 2 2 2
L(1 + 3a ) - La + 4La Rrr (λ) 1 + 2 a + 4a Rrr (λ )
λ =1 λ =1
cause the total power Ptot is the sum of the signal power C and the noise power N
and is written as
N C / N0
Ptot = C + N = C /N0 + N = N 1+ ÷ (6.32)
B B
Throughout this work, it is usually assumed that the noise variance is one for real-
valued noise and two for complex-valued noise. However, Section 6.1 showed that,
after ADC, the noise power is generally different from one for real-valued noise.
From the example above, we could see that for a 2-bit ADC and D = 2, the quan-
tized noise power is 3.53 for an optimal quantization of a real-valued input signal.
The difference between the true and the assumed noise variance has some (minor)
consequences on the signal processing, which are discussed here. Signal processing
relies on correlation values and the subsequent evaluation of them. Two cases shall
be considered.
In the first case, only ratios of correlators are processed. This is done to obtain
the code-phase, Doppler, or the carrier-phase estimates, as described in Section 4.3.
In this case, the formulas can also be directly applied if the true noise power differs
from one (or, respectively, two). The nominator and denominator scale identically
and the effect of the different noise power cancels.
The second case relates to signal power estimation and determination of thresh-
olds for signal detection. For example, a single coherent integration, as described
in Section 5.7.1, compares the squared correlator value against a threshold γ. For
convenience, the respective equation (5.13) shall be repeated here
2
P(τ , ω) > γ (6.33)
The standard operation mode of a GNSS receiver implies that the noise floor is
constant in time. For certain GNSS frequency bands (e.g., Galileo E5 or E6), this
6.3 ADC Requirements for Pulse Blanking 175
assumption is not valid and pulsed interference occurs. Pulsed interference might
also occur because of the presence of pseudolite signals, which are described in Sec-
tion 10.6.2.
As outlined in Section 10.6.2, pulse blanking is an effective countermeasure to
mitigate the effect of the pulsed interference, as long as the pulses are received only
for a small fraction of the time. In the following, some requirements for the ADC
are formulated to ensure that pulse blanking works properly:
1. The front-end gain before the ADC must not react on the pulse (or least only
very slowly).
2. ADC resolution and thresholds must be suitable to allow identification of
pulses via an energy detector.
3. Noise-floor determination algorithms must exclude periods where pulses are
present.
If these conditions are fulfilled, the algorithm of Section 10.6.2 can be applied.
Replacing the signal samples by “0” ensures that correlation values are mini-
mally distorted. On the other hand, noise-floor determination can be corrupted if
an estimation of n�2
µ is carried out using blanked samples. If it is not possible for the
noise-floor estimation to be aware of periods with blanked samples, the method of
pulse clipping
176 Sample Preprocessing
sµ s µ < smax
sµ = µ pulse period (6.36)
smax sgn(sµ ) sµ smax
�
with smax = n 2µ or sample randomization
�
sµ = nµ nµ ∼ N(0, nµ2 ) µ pulse period (6.37)
can be applied, where either the samples are clipped if their magnitude exceeds a
certain value, or the samples are replaced by simulated noise samples. Both methods
further degrade the parameter estimates but allow continuous noise-floor estima-
tion. Pulse clipping is automatically applied if a 1-bit ADC is used.
In the case of an infinite number of bits, the expected value corresponds to the
interference power, which is written as
� �
aˆ 2 = r02 - n02 a2 (6.39)
R, N R B
a2 fs (6.40)
J / N0 =
2
Nµ Nυ = 0, Nµ Nυ = 0, N µ N υ = 2Qµ ,υ (6.45)
N N N
Qµ , υ = Q(µ - υ) (6.46)
s = FQ-1 2s Q1 2F*s = s
(6.47)
r(qb ) = FQ-1 2 r(qb ) Q1 2F*r(qb ) = r(qb )
The filter FQ–1/2 affects the signal and the noise part of the received-signal
samples in the same way. Because the received-signal part is changed, the reference
samples have to be adapted properly. The noise part Nm′ of the filtered samples is
white because
*
NN = FQ-1 2 NN*Q-1 2F* = FQ-1 2 NN* Q-1 2F*
N N N
(6.49)
-1 2 * -1 2 * *
= FQ QQ F = 2FF = 21
s = FQ-1 2s
(6.50)
sµ = F(µ - λ)Q-1 2 (λ - ν)sν
λ ,ν
can actually be realized by any digital filter. The Q–1/2 term defines the magnitude
response of the filter and F defines the phase response. Because F can be chosen
arbitrarily and magnitude variations are usually small, a linear-phase finite impulse
response (FIR) filter as described in the book by Diniz, da Silva, and Netto [8] could
be an efficient way for implementation.
and the sufficient statistic is obtained via simple correlation of the received-signal
samples with the modified reference signal
The received signal itself remains unmodified and contains colored noise.
The noise of the modified signal has as covariance the inverse matrix of the
unmodified signal; that is,
*
NN = Q-1NN*Q-1 = Q-1 NN* Q-1
N N N
(6.55)
-1 -1 -1
= Q 2QQ = 2Q
Effectively, the noise-power spectral density of the modified signal is the inverse
of the power spectral density of the unmodified signal. The filter compensates for
the nonwhite noise-power spectral density plus for the incorrectly chosen reference
signal. Both effects add up and the filtered signal can be thought of as “overcom-
pensated.”
The presented signal-processing algorithms have all been formulated in discrete time,
based on a finite sample rate fs. The deterministic part of the signal, the reference sig-
nals, and the sin/cos signals were all based on continuous time signals sampled with
182 Sample Preprocessing
• fs = 2B —Nyquist sampling: Signal and noise are sampled with the Nyquist
rate. Ideal case, no losses, u = 1.
• fs < 2B —Sub-Nyquist sampling: Aliasing of noise occurs, the signal autocor-
relation function is not affected by the sub-Nyquist sampling, u = 2B/fs.
References
[1] Amoroso, F., and Bricker, J. L., “Performance of the Adaptive A/D Converter in Com-
bined CW and Gaussian Interference,” IEEE Trans. Commun., Vol. 34, No. 3, 1986,
pp. 209–213.
[2] Proakis, J. G. and D. G. Manolakis, Digital Signal Processing, Principles, Algorithms, and
Applications, 4th ed., Upper Saddle River, NJ: Prentice-Hall, 2007.
[3] van Dierendonck, A. J., “GPS Receivers,” in Global Positioning System: Theory and Ap-
plications, Vol. I, pp. 329–407, Parkinson, B. W., and J. J. Spilker, (eds.), Washington D.C.:
American Institute of Aeronautics and Astronautics Inc., 1996.
[4] Chang, H., “Presampling Filtering, Sampling and Quantization Effects on Digital Matched Fil-
ter Performance,” Proc. Int. Telemetering Conference, San Diego, CA, 1982, pp. 889–915.
[5] Spilker, J. J. Jr. and Natali, F. D., “Interference Effects and Mitigation Techniques,” in
Global Positioning System: Theory and Applications, Vol. I, pp. 717–771, Parkinson, B. W.,
and J. J. Spilker, (eds.), Washington, D.C.: American Institute of Aeronautics and Astro-
nautics Inc., 1996.
6.5 Sub-Nyquist Sampling 183
[6] Betz, J. W., “Bandlimiting, Sampling, and Quantization for Modernized Spreading Mod-
ulations in White Noise,” Proc. Institute of Navigation National Technical Meeting
(ION-NTM) 2008, San Diego, CA, January 28–30, 2008.
[7] Betz, J. W. and Shnidman, N. R., “Receiver Processing Losses with Bandlimiting and One-
Bit Sampling,” Proc. 20th Int. Technical Meeting of the Satellite Division of the Institute of
Navigation (ION-GNSS) 2007, Fort Worth, TX, September 25–28, 2007, pp. 1244 –1256.
[8] Diniz, P. S. R., E. A. B. da Silva, and S. L. Netto, Digital Signal Processing, System Analysis
and Design, Cambridge, U.K.: Cambridge University Press, 2002.
[9] Pany, T., and B. Eissfeller, “Code and Phase Tracking of Generic PRN Signals with Sub-
Nyquist Sample Rates,” NAVIGATION, Journal of The Institute of Navigation, Vol. 51,
No. 2, 2004, pp. 143–159.
chapter 7
Correlators
The correlator-based tracking scheme relies on the fact that a hardware receiver
has only limited signal-generation capabilities but can work at a high sample rate
and allows a high level of parallelization. Typically, a simple hardware GNSS re-
ceiver is only able to generate a carrier and binary PRN-code sequence. The product
of both approximates the received signal r(t). The PRN-code sequence is shifted
several times by an integer number of samples and the shifted PRN-code replicas
are correlated with the received signal at baseband (see Figure 7.1). The result-
ing correlators C0, …, Cb have different sample (or code-phase) shift values and
are termed “prompt,” “early,” “late” (or “very early/late”). For basic tracking,
three correlators are required (prompt, early, and late), but depending on the used
multipath-mitigation scheme and on the signal-modulation scheme, significantly
more correlators might be required.
For a software receiver that uses the multibit-correlation approach discussed in
Section 9.4, it is reasonable to exploit this increased amplitude resolution to allow
correlation with more complex reference signals instead of correlation only with
the PRN-code sequence. The reference signals can then be optimized for certain
criteria, as will be discussed in Section 8.2.
In the rest of this chapter, different correlators will be considered that corre-
late the received-signal samples with different classes (P, D, F, and W) of reference
185
7.2 Generic Correlator 187
The analysis of a generic correlator is based on the assumption of Section 1.8. The
sequence of received-signal samples Sm is the sum of a deterministic signal rm plus
noise Nm given by
Sµ = rµ + N µ µ = 1,…, L (7.1)
L+1
rµ = ac(tµ - τ )exp i ω tµ - - ϕ÷ (7.2)
2 fs ÷
where a is the signal amplitude, w is the angular IF plus Doppler in radians per sec-
ond, j is the carrier-phase offset in radians, and t is the code delay in seconds. The
signal is sampled at times tm
µ t0 = 0
tµ = (7.3)
fs tL = Tcoh
where fs is the sample rate in samples per second. The function c(t) represents the
received and filtered navigation signal at baseband and is allowed to be complex-
valued. The stochastic part Nm shall be left unspecified now, but it typically includes
thermal and quantization noise.
Furthermore,
L+1
tµ = tµ - (7.4)
2 fs
is defined for a more compact notation.
The received signal is characterized by four fundamental signal parameters: a,
t, w, and j. Those parameters are assumed constant during the integration period
Tcoh. If the broadcast GNSS signal contains a navigation data message, the message
bit/symbol is assumed constant during the integration period and can be included
in the carrier phase j.
The received samples are correlated with an internally generated sequence of
samples given by
rrec, µ = crec (tµ - τ 0 )exp{i(ω 0tµ - ϕ 0 )} (7.5)
For the different classes of correlators (P, D, F, and W), the internal sequence is
based on a different baseband signal crec. We will later substitute different signals,
including the PRN code itself and early-late replicas, among others. Its fundamental
parameters are denoted by a subscript “0.” Those parameters are typically near the
true signal parameters. They define the correlation point and should not be con-
fused with the estimated signal parameters. Especially in the case of a multicorrela-
tor, the estimated parameters and the correlation point differ significantly.
The output of the correlator C is defined as
L L L
C(Sµ ) = rrec, µ Sµ = rrec, µ rµ + rrec, µ N µ = Csig (r µ) + Cnoise (N µ ) (7.6)
µ =1 µ =1 µ =1
188 Correlators
and is split into a deterministic signal part Csig and a stochastic noise part Cnoise.
The integration (summation) involves a number of L samples corresponding to a
coherent integration time of Tcoh = L/fs.
Within this section, all expected values are calculated with respect to noise; that
is,
… …N (7.7)
If the input signal has only a real component, we write the correlator output
as
1
C(Re{Sµ }) = C(Sµ ) + C(S µ ) (7.8)
2
For complex signals (whose center frequency might be around 0 MHz) the
above equation does not hold.
Repeating the same derivation with the complex-conjugate received signal
gives
Csig (rµ ) =
L L
= rrec, µ r µ = a c (t µ - τ )crec (t µ - τ 0 )exp{- i(ω tµ - ϕ) + i(ω 0t µ - ϕ 0 )} (7.13)
µ =1 µ =1
For a real-valued input signal, the correlation results for the normal and the
complex-conjugate input signal must be summed. Because correlation is linear, we
obtain under the assumption |mod(w + w0, 2p fs)| >> 0
1
Csig (Re{rµ }) = Csig (rµ) + Csig (rµ)
2
(7.14)
a
= Tcoh fs Rc ,crec (τ - τ 0 )exp{i(ϕ - ϕ0 )}κ (ω 0 - ω)
2
If a real-valued input signal is considered and |mod(w + w0, 2p fs)| >> 0 then the
expected value is
aTcoh fs Rc ,crec (τ - τ 0 )
C(Re{Sµ }) exp{i(ϕ - ϕ 0 )}κ (ω 0 - ω) (7.17)
2
because k(w) typically vanishes for large frequencies (see Appendix A.4.4).
7.2.2 Covariance
To evaluate the covariance, assume two different correlators that are distinguished
by an index “a” and the index “b.” Both correlators are associated with different
parameters and internal signals, specifically
190 Correlators
C a (Sµ ) � crec
a
, τ 0a ,ω 0a , ϕ 0a
(7.18)
b b
C (Sµ ) � crec , τ 0b , ω 0b ,ϕ 0b
= C a (N µ )C b (N µ )
because the signal contribution cancels and the expected value of the noise contri-
bution vanishes. Thus
cov C a (Sµ ), C b (S µ ) =
L
=
µ ,υ =1
a
N µ crec (t µ - τ 0a )exp{i(ω 0a tµ - ϕ 0a )}Nυ crec
b
{
(t υ - τ 0b )exp i(ω 0bt υ - ϕ0b ) }
(7.20)
L
a
=2 crec (t µ - τ 0a )crec
b
(t µ - τ 0b )exp{itµ (ω 0b - ω 0a ) - i(ϕ0b - ϕ0a )}
µ =1
because, for the following, we assume that (1.16) and (1.28) hold. The covariance
for the complex-conjugated input signal is given similarly as
( )
cov C a S µ ,C b S µ ( ) ( ( )
= C a Sµ - C a Sµ ( ) ) (C b (S µ) - C b (S µ ) )
= C a (N µ)C b (N µ )
L
=
µ ,υ =1
a
N µ c rec ( ) {( )} (
t µ - τ 0a exp i ω 0at µ - ϕ 0a Nυ c brec tυ - τ 0b exp i ω 0btυ - ϕ 0b ) {( )} (7.21)
L
=2
µ =1
a
c rec ( ) ( ) { (
t µ - τ 0a c brec t µ - τ 0b exp it µ ω 0b - ω 0a - i ϕ 0b - ϕ 0a ) ( )}
= cov C a (S µ ),C b (S µ)
7.3 Correlator Types with Illustration 191
= C a (Nµ)C b(Nµ)
L (7.22)
=
µ , υ=1
a
Nµ crec ( tµ - τ 0a ) exp{i( ω 0at µ - ϕ 0a)} b
Nυ crec ( ) exp{i(
tυ - τ 0b )}
ω 0b t υ - ϕ 0b
L
=2
µ =1
a
crec( ) (
tµ - τ 0a c rec
b
) { (
tµ - τ 0b exp it µ ω b0 - ω a0 - i ϕb0 - ϕ a0 ) ( )} N N
µ υ =0
Nµ Nυ = Nµ N υ = 0 (7.23)
1
cov C a (Re{Sµ }), C b (Re{S µ }) = cov C a (S µ + S µ ), C b (S µ + S µ )
4
1
= cov C a (Sµ ), C b (Sµ ) (7.24)
2
Depending on the choice of the internally generated reference signal, many differ-
ent types of correlators can be obtained. They are discussed in Section 7.3.1. For
simplicity, the case of real-valued received signal samples is discussed.
192 Correlators
7.3.1 P-Correlator
P-correlators are based on a transmitted-like reference signal and are used to deter-
mine carrier-phase tracking errors to extract navigation data symbols/bits; they are
used in the acquisition process (perhaps using FFT techniques for correlation). A
P-correlator reference signal cP(t) has to fulfill the requirement
Rc ,cP (0) 0 (7.27)
There exist different possibilities for the choice of the cP(t). If, for the P-correla-
tor CP, we use as internally generated signal the received signal at baseband itself,
cP (t) = c(t) (7.28)
then the CRLB (assuming no multipath is present) for the carrier-phase discrimina-
tor will be reached and optimal performance for navigation bit/symbol decoding
and acquisition will be reached as well.
For this setting (7.28), the expected value and variance evaluate to
CP (Re{Sµ })
( ) (
cov CPa Re{S µ }; τ 0a , CPb Re{S µ };τ 0b ) (
= 2BTcohRcP ,cP τ 0a - τ 0b) (7.30)
The P-correlator values are, in general, highly correlated for similar code-phase
offsets. If both correlators are based on different angular frequencies w 0a and w 0b,
they are uncorrelated if k (w 0a – w 0b) = 0.
In practice, for different reasons, it might be convenient to choose a slightly
different form of cP(t) from c(t). For the case of GNSS signals, and if simplicity is
required, the infinite-bandwidth representation of the PRN code is a good choice,
because for most signals it can be represented by a 1-bit amplitude resolution. For
carrier-phase multipath mitigation, a linear combination of the form “2P-E-L” of
the infinite-bandwidth baseband representation of c(t) may prove advantageous [1].
Assuming, however, an arbitrary waveform cP(t) for the P-correlator, the stochastic
properties are summarized as
7.3.2 F-Correlator
The F-correlator is used to determine Doppler-frequency tracking errors, a task usu-
ally performed in many GNSS receivers by differencing two carrier-phase estimates
in time [2]. Using a dedicated correlator for this estimation can be advantageous be-
cause if Doppler frequency is estimated in a single step, the CRLB for the estimated
frequency is achieved and the pull-in region is increased [3]. Furthermore, the com-
putation of the F-correlator does not require significant extra computational load
in a software receiver because the linear function multiplying the reference signal in
(7.32) can be approximated by a piecewise constant function. The correlation loop
of Section 9.4 can be split into several smaller loops, thereby allowing computation
of the P- and F-correlator in one turn.
The F-correlator derives from the LSQ Doppler estimation strategy of Section 4.3
and the reference signal is given by a P-correlator reference signal multiplied with time:
T
cF (t) = t - coh ÷ cP (t) (7.32)
2
The resulting F-correlator reference signal must still allow the separation of
code and carrier correlation, as described in Section 1.8.5.
The expected value of the F-correlator is
CF (Re{Sµ }) = -i CP (Re{S µ })
ω0 ÷
( ) (
cov CF Re{Sµ };ω 0a , CF Re{S µ }; ω 0b ) =
= cov -i
ω 0a
( a
÷ CP Re{Sµ }; ω 0 , -i) ω 0b
( b
÷ CP Re{S µ }; ω 0 )
(7.34)
=-
ω 0a ω0b
cov (
CP Re{Sµ }; ω 0a ) (
, CP Re{S µ }; ω 0b )
( ) {(
= -2BTcohRcP ,cP τ 0a - τ 0b exp i ϕ0a - ϕ 0b )} ω 0a ω 0b
(
κ ω0b - ω 0a ÷ )
Because of the discussion of Section 1.8.5,
1
= 2BTcoh RcP ,cP (0)Tcoh2 χ freq
12
194 Correlators
7.3.3 D-Correlator
The D-correlator is used to determine the code-phase tracking error. In the most
simple case, it is an early-minus-late replica of the infinite-bandwidth baseband
representation of c(t). For code multipath mitigation, a double-delta linear combi-
nation or a shaping correlator is known to be useful [1].
Generally, we require for the D-correlator reference signal cD(t)
Here, cP(t) denotes the P-correlator reference signal and the c(t) received
signal. The symbol l is the required size of the code-phase linearity region.
If we choose for the D-correlator CD as an internally generated signal the first
derivative of the received signal at baseband:
CD (Re{Sµ })
a
cov CD (
Re{Sµ };τ 0a , CD
b
) (
Re{S µ };τ 0b )
= -2BTcohRc ,c (τ - τ ) exp{i(ϕ - ϕ )}κ (ω
a
0
b
0
a
0
b
0
b
0 )
- ω 0a
For an arbitrary choice of the D-correlator reference signal, (7.17) and (7.24)
are used to obtain stochastic correlator properties.
7.3.4 W-Correlator
The main purpose of the W-correlator is its use in CDMA signal-quality monitoring
or multipath mitigation. If it is used within a multicorrelator, it gives uncorrelated
measurements of the received-signal chip form. It stands in analogy to the so-called
Vision correlator [4].
The reference signal for the W-correlator is defined as the PRN-code sequence
of the received signal convoluted with an approximation of Dirac’s delta function.
Let us assume that the received-CDMA signal at baseband takes the form
NPRN
c(t) = hkm(tfc - k) for 0 t< NPRN fc (7.39)
k =1
where NPRN denotes the length of the PRN code and fc represents the code rate in
chip per second. The function m defines the used-modulation scheme [e.g., BPSK or
7.3 Correlator Types with Illustration 195
(M)BOC for GNSS signals], including filtering, and hk is the PRN code spreading
sequence. For example, if an infinite-bandwidth BPSK signal is considered, m takes
the form
1 0 τ� < 1
m(τ�) = (7.40)
0 otherwise
approximates Dirac’s delta function. Note that, for numerical reasons, the am-
plitude is set to 1 instead of choosing 1/e. Furthermore, it should be noted that
if the reference signal and the PRN code is periodically extended for t < 0 or
t NPRN / fc, then
T
1
Rc ,cW (τ ) = lim c (t - τ )crec (t)dt
T 2T -T
T
1
= lim hkm ((t - τ ) fc - k) hlδ ε (tfc - l)dt
T 2T -T k =- l =-
T (7.44)
1
= lim hkhl m ((t - τ )fc - k)δε(tfc - l)dt
T 2T k,l =- -T
N N / fc
fc
= lim hkhl m ((t - τ )fc - k )δε (tfc - l)dt
N 2N k,l =- N -N / f c
where in the last step a common integration and summation interval have been
introduced. The last step is valid in the limit T for a fixed value of t. For sim-
plicity, let us now assume that the approximate delta function behaves like Dirac’s
delta function, namely
T
ε b
y(t)δ ε (at - b)dt y a if - aT b aT (7.45)
-T
a
196 Correlators
Then
N
ε
Rc ,cW (τ ) lim hkhl m ((l fc - τ )fc - k)
N 2N k,l =- N
ε N -k+ N
= lim hkhk+ l m(l - τ fc )
N 2N k=- N l =-k - N
N
ε
= lim m(l - τ fc ) hkhk+ l (7.46)
N 2N l =- k =- N
ε
= lim m(l - τ fc )(2N + 1)Rprn (l)
N 2N l =-
=ε m (l - τ fc)Rprn (l)
l =-
where Rprn(l) is the normalized autocorrelation function of the PRN code with
Rprn(0) = 1. Assuming an ideal PRN code with all other values vanishing, we
obtain
Rc ,cW (τ ) ε m(-τ fc ) (7.47)
Thus, the W-correlator allows measuring the chip waveform (including filter-
ing) of the used CDMA modulation scheme.
We obtain
a
cov CW (
Re{Sµ }; τ 0a , CW
b
) (
Re{S µ };τ 0b ) (7.49)
(( ) ) {(
= 2BTcohεδε τ 0b - τ 0a fc exp i ϕ0a - ϕ 0b )} κ (ω b
0 - ω0a )
If the difference in the code phase expressed in chips exceeds e, then the two
W-correlators are uncorrelated. For many signal-analysis purposes, this is a major
advantage compared to the P-correlator. The value of e should be chosen care-
fully because the SNR (i.e., the ratio between the squared expected value divided by
the variance) depends linearly on e. If the value of e is too small, it will make the W-
correlator output noisy. For a multicorrelator, a good choice for e is the multicorrela-
tor spacing.
7.4 Difference Correlators 197
P-correlators, as described in the Section 7.3.3, can act as sufficient statistics and
are a possible basis upon which to estimate the fundamental signal parameter’s code
phase, Doppler, carrier phase, and amplitude. On the contrary, difference correlat
ors are a suitable basis when difference signal parameters (code phase and carrier
phase) are of interest. Difference correlators are defined as the product of two cor-
relators taking one as a complex-conjugate value.
Difference correlators are in two GNSS applications of importance: double-
difference tracking and P-code-aided cross-correlation tracking of the encrypted GPS
P(Y)-code. Double-difference tracking is described in Section 10.5.2. Cross-correlation
techniques to track the encrypted P(Y)-code are summarized by Woo [5]. Using C/A-
code-based aiding, the Y-code is tracked on L1 and L2 using very short (2-ms) coher-
ent integration intervals aligned with an unknown W-code chip. Due to the C/A-code
aiding, the Y-code correlation point is kept within the linear region (in the code phase
and Doppler), but the correlator values contain the W-code chip and are too noisy
to allow carrier-phase estimation and unwrapping. Instead, a W-code independent
L1–L2 difference correlator is formed and the L1–L2 carrier phase difference is esti-
mated, filtered, and unwrapped (note: the effective L1–L2 wavelength is 86 cm, being
3.5 times larger than the L2 wavelength). Adding the L1–L2 carrier-phase difference
to the C/A-code-based L1 carrier phase gives an L2 carrier-phase estimate.
In Section 7.4.1, the thermal-noise performance of single-difference and in Sec-
tion 7.4.2 double-difference prompt correlators will be investigated.
To form the difference correlators, in the first step, the P-correlator is compen-
sated for internal tracking effects to obtain a correlator value related only to the
received-carrier phase [note: the expected P-correlator value of (7.50) is related to
the difference of the received and internal carrier phase]; that is,
198 Correlators
{
Pk,n = CPk,n exp iϕ0k,n } (7.52)
( {
DPk = d k 2(C / N0 )k,m BTcoh exp iϕ k,m + CPk,,m
noise } ) (7.54)
(d k
{ }
2(C / N0 )k,n BTcoh exp iϕ k,n + CPk,,nnoise )
and the deterministic part of the single-difference correlator evaluates to
k
DPsig {(
= 2BTcoh2 (C / N0 )k,m (C / N0 )k,n exp i ϕ k,m - ϕ k,n )}
(7.55)
= 2BTcoh2 (C / N0 )k,m (C / N0 )k,n exp i Dϕ k { }
which can be used to obtain an estimate of the single-difference carrier-phase esti-
mate Dj k=j k,m–j k,n. The data bit d has cancelled.
The stochastic part evaluates to
k
DPnoise = CPk,,m
noise d
k
2(C / N0 )k,n BTcoh exp iϕ k,n { }
(7.56)
+ CPk,,nnoise d k 2(C / N0 )k,m
BTcoh exp {iϕ } + C
k,m k,m k,n
P,noise CP,noise
2
2(C / N 0 )k ,n B C Pk,,noise
m
Tcoh 2
=
2 2 2
+2(C / N 0 )k ,m B C Pk,,noise
n
Tcoh 2 + C Pk,,noise
m
C Pk,,noise
n (7.58)
(
= 4B2Tcoh 2 (C / N 0 )k ,mTcoh + (C / N 0 )k ,n Tcoh + 1 )
and is a function of the two C/N0 values. Furthermore, the variance depends non-
linearly on the coherent integration time, which reflects the fact that a nonlinear
operation has been used.
7.4 Difference Correlators 199
and in case both SNR values are equal and much larger than one:
SNR
SNRD = (7.60)
SNR k ,m
= SNRk ,n
= SNR�1 2
In the latter case, single-differencing reduces the SNR value by 3 dB. An equiva-
lent 3-dB loss would also occur if single-differencing were done at the carrier-phase
level.
(
DP = 2BTcoh 2 (C /N 0 )k ,m (C /N 0 )k ,n exp i Dϕ k + DPnoise
k
{ } ) (7.62)
(2BT coh
2
{ }
(C /N 0 )l ,m (C /N 0 )l ,n exp i Dϕ l + DPnoise
l
)
The deterministic part of the double-difference correlator
DPnoise = 0 (7.65)
because the single-difference correlator to two different transmitters is assumed to
be uncorrelated. The variance of the double-difference correlator is given as
202 Correlators
and is split into a deterministic-signal part Xsig and a stochastic-noise part Xnoise.
The integration (summation) involves a number of ncoh samples corresponding to a
coherent integration time of Tcoh = L/fs.
In the following example, all expected values are understood with respect to the
reference-signal noise and the received noise:
… … N,M (7.71)
and the noise contribution. The expected value of the latter contribution vanishes
because both noise processes are uncorrelated and unbiased,
L
Xnoise (rµ , N µ , Mµ) = (rrec, µNµ + rµMµ + Nµ Mµ) =0 (7.73)
µ=1
The deterministic contribution relates to the parameter difference of the code phases
t – t0, Doppler difference w – w0, and carrier-phase difference j – j0.
Thus, for complex-valued input samples, the expected value is
a
X (Re{S µ }) Tcoh fs Rc ,c rec (τ - τ 0 )exp{i (ϕ - ϕ0 )}κ (ω 0 - ω) (7.75)
2
7.5.2 Covariance
To evaluate the covariance, assume two different correlators, which are distin-
guished by an index “a” and an index “b.” Both correlators are associated with
different parameters, namely
7.5 Noisy Reference Signal for Codeless Tracking 203
X a (S µ ) � c rec , τ 0a ,ω 0a , ϕ0a , M µ
(7.76)
X b (S µ ) � c rec , τ 0b , ω 0b , ϕ 0b , M µ
Both work with the same received-signal samples. The underlying reference
signal crec is identical, as is the noise process Mm.
Furthermore, we only consider code-phase shifts as a multiple of the sampling
interval:
a b
τ 0a = , τ 0b = (7.77)
fs fs
cov X a (S µ ), X b (S µ ) = X a (r µ = 0, N µ , M µ )X b( r µ = 0, N µ , M µ ) (7.78)
because the signal contribution cancels and the expected value of the received-noise
contribution vanishes. Thus,
( ( )
L
=
µ ,υ =1
) {(
N µ crec t µ - τ 0a exp i ω 0a t µ - ϕ 0a )}+ M µ -a
( ( ) {(
Nυ crec t υ - τ 0b exp i ω 0b t υ - ϕ 0b )} + M υ -b)
L
=2
µ =1
( ) ( ) { ( ) (
crec t µ - τ 0a c rec tµ - τ 0b exp it µ ω 0b - ω 0a - i ϕ0b - ϕ 0a )}
L
+ N µ Mµ - a Nυ Mυ -b
µ , υ=1
(7.79)
L
=2
µ =1
( ) ( ) { ( ) (
crec t µ - τ 0a crec t µ - τ 0b exp it µ ω 0b - ω 0a - i ϕ0b - ϕ 0a )}
L
+
µ , υ=1
N µ Nυ Mµ - a Mυ -b (
2 fsTcohRcrec ,crec τ 0a - τ0b )
{(
exp i ϕ0a - ϕ0b )} κ (ω b
0 )
- ω 0a + 4δa,b fsTcoh
because (1.16) and (1.28) hold and we assume that the shifts a, b are much smaller
than L. The covariance for the complex-conjugated input signal is given similarly
as
( )
cov X a S µ , X b S µ ( ) = cov X a(Sµ ), X b(S µ) (7.80)
204 Correlators
( ( )
L
cov X a (S µ ), X b (Sµ ) =
µ , υ=1
) {(
Nµ crec tµ - τ 0a exp i ω 0at µ - ϕ 0a )} + M µ -a
( ( ) {(
Nυ crec t υ - τ 0b exp i ω 0bt υ - ϕ0b )} + M υ -b )
L (7.81)
( ) (
= 2 crec tµ - τ 0a crec tµ - τ 0b exp it µ ω 0b - ω 0a - i ϕ 0b - ϕ0a ) { ( ) ( )}
µ =1
L
N µ Nυ + Nµ Mµ - a Nυ Mυ -b = 0
µ ,υ =1
Based on the derivation of (7.24), for real-valued input signals we obtain
cov X a (Re{S µ }), X b (Re{S µ }) =
(7.82)
fsTcohRcrec ,crec (
τ 0a - τ 0b ) exp{i ( ϕ 0a )} κ (
- ϕ0b ω 0b ) + 2δ
- ω 0a a,b fsTcoh
7.5.3 Variance
The variance of the correlator output is a special case of the covariance assuming
identical parameters for the “a” and “b” cases.
For complex-valued input signals
var X (S µ ) = 2fsTcoh Rc rec ,c rec (0) + 4fsTcoh (7.83)
holds, and for real-valued input signals
var X (Re{Sµ }) = fsTcohRcrec ,crec (0) + 2fsTcoh (7.84)
We assume that the deterministic part of the reference signal equals the trans-
mitted signal [being the same P(Y)-code signal on L1 and L2] multiplied by an
amplitude factor b:
crec (t) = bc(t) (7.85)
The reference signal is complex-valued and b relates to the C/N0 value on L1.
According to Appendix A.2.1, the relationship is
fs b 2
C /N 0,ref = (7.86)
2
The autocorrelation function of the reference signal becomes
Rcrec ,crec (τ ) = b2Rc ,c (τ ) (7.87)
The amplitude of the received signal relates to the C/N0 value on L2
fs a 2
C /N 0 = (7.88)
4
because a real-valued received signal is considered (see Appendix A.2.2).
The formula for the variance of a carrier-phase discriminator will be derived in
Section 8.1.3 and is given by (8.31). For cross-correlation, it reads as
C / N 0,ref + fs
= 2
2TcohC / N 0C / N 0,ref κ (ω 0 - ω)Rc ,c (τ - τ 0 )
C / N 0,ref + fs
1+ 2
÷
2TcohC / N 0C / N 0,ref κ (ω 0 - ω)Rc ,c (τ - τ 0 ) ÷
If the C/N0 value of the reference signal (e.g., on L1) is much larger than the
sample rate, the phase-discriminator variance equals the case of a noiseless refer-
ence signal:
1
var F(Re{Sµ }) = 2
C / N0, ref � fs 2TcohC / N0 κ (ω0 - ω)Rc ,c (τ - τ 0 )
(7.90)
1
1+ 2
÷
2TcohC / N0 κ (ω 0 - ω)Rc ,c (τ - τ 0 ) ÷
In the general case of an arbitrary C/N0 value of the reference signal, the phase-
discriminator noise is identical to the noiseless reference-signal case if the received
signal C/N0 value is replaced by an effective value C/N0,eff:
C /N 0,ref
C / N 0,eff = C / N 0 (7.91)
C /N 0,ref + fs
Thus, the cross-correlation loss is given by
C /N 0,ref
Lxcorr = (7.92)
C /N 0,ref + fs
and is a function of the reference-signal power and the sample rate fs.
Within this section, we will extend the obtained mean and variance formulas for the
case of colored noise. The approach is based on a signal transformation to reduce
the colored-noise case to the white-noise case. Within this approach, artificial sig-
nals are introduced and the reader should keep in mind that those signals are just
used for the theoretical analysis. They do not appear in the receiver.
The noise-power spectral density is related to the covariance function via the
Fourier-series equation
� (f ) = 2π if fs fs
Q(µ)exp - µ 2<f
Q (7.95)
fs 2
µ =-
7.6 Incorporating Colored Noise 207
must be fulfilled so that the C/N0 definition can be maintained. For the C/N0 defini-
tion in the case of colored noise, the noise-power density is interpreted as a mean-
noise power density.
The complex-valued signal model from Section 1.8 is used:
rµ = ac ( tµ - τ )exp{i(ω t µ - ϕ )} (7.98)
In the following examples, we analyze in detail the expression of a correlator
operating on the complex-conjugated samples. The discussion is analogous for a
correlator operating on the nonconjugated samples. According to Section 7.2, the
correlator is defined as
L
C (S µ ) = rrec, µ Sµ = S* × rrec (7.99)
µ =1
The expression is now rewritten, introducing artificial signals S¢, r¢rec, and N¢
( )
*
C (Sµ) = S* × rrec = S*Q-1 2 × Q1 2 rrec = Q-1 2S × Q1 2 rrec = S *× rrec (7.100)
which are defined as
S =r +N
S = Q -1 2 S , r = Q -1 2 r , N = Q -1 2 N (7.101)
rrec = Q1 2 rrec
by using the Hermitian colored-noise covariance matrix Q. Similar to the discus-
sion of spectral whitening in Section 6.4.1, the artificially introduced noise is white;
that is,
Nµ Nυ = 0, N µ N υ = 0, N µ N υ = 2δ µ,υ (7.102)
N N N
So far, the discussion has been carried out using finite-length sample vectors. It
is, however, more convenient to confine the discussion to the level of continuous-
time signals, thereby allowing simpler expressions when working in the frequency
domain.
Let Wx{…} be a filter operating on the continuous time signals defined in fre-
quency domain as
with x being a real number (acting as an exponent) and an arbitrary phase function
defined in the frequency domain with
� (f ) = 1
F (7.104)
The filter’s magnitude response is related to the colored-noise power spectral
density via
W � (f )
� -2 (f ) = Q (7.105)
The filter Wx{…} is called a (continuous-time) whitening filter.
Applying the whitening filter to the received signal and the inverse whitening
filter to the internally generated reference signal before sampling
S (t ) = W {S(t )}
(7.106)
c ref (t ) = W -1 {c ref (t )}
leaves the correlation value invariant and allows the application of the white-noise
theory. Only the correlation functions appearing in formulas on the stochastic cor-
relator properties need to be adjusted.
The correlation function between an internal reference signal and the received
signal remains unchanged because
In Section 7.6.2, the technique will be illustrated for the example of the code-
discriminator noise for an early–late tracker of a signal of finite bandwidth.
because outside the passband, the received signal has only vanishing frequency
components. The frequency-domain representation of the correlation function is
the product of the frequency transform of both signals.
The correlation function between two internally generated signals changes ac-
cording to (7.108) into
B
2
Rca -2 (τ ) = � f +B R
Q � a b (f )e 2π if τ df
rec ,W
b
{crec } 2 crec ,crec
f =- 3B
2
(7.113)
B
2
= � (f + f0 )c�rec
Q a b
(- f )c�rec (f )e 2π if τ df
f =- 3B
2
The whitening filter is shifted by f0, the center frequency, where the correlation
takes place.
Ignoring for the moment the squaring loss (which is sufficient to demonstrate
the white-noise transformation), the variance of the code discriminator, working on
real-values samples with white noise, obtained from (8.14), is
2
1 RcD ,cD (0)
var D (Re{Sµ }) = (7.114)
Rc ,cD (0) 2C / N0Tcoh
Applying the white-noise transformation, the code-discriminator variance
changes to
2
1 Rc D ,Q{c D } (0)
var D(Re{S µ }) = (7.115)
Rc ,c D (0) 2C / N 0Tcoh
Note that Q{…} = W–2{…}.
The early–late reference signal is
cD (t) = cP t - d 2 - cP t + d 2 (7.116)
B (7.117)
=
2
�c ,c (f ) e 2π if (τ -
R
d
2 ) - e2πif (τ + d 2 )
P P ÷ df
f =- B 2
7.6 Incorporating Colored Noise 211
B
2
=4 � (f + f0 )R
Q �c ,c (f )sin2 (π fd)df
P P
f =- 3B 2
1 f =- 3B 2
var D(Re{S µ }) = 2
(7.120)
2C / N 0Tcoh B
2
2π �c ,c (f )sin(π fd)df
fR P P
f =- B
2
Note that,
�c ,c (f ) = c�P (- f )c�P (f ) = c�P (f ) 2
R (7.121)
P P
1 f =- B 2
var D(S µ ) = 2
(7.122)
2C / N 0Tcoh B
2
2π �c ,c (f )sin(π fd)df
fR P P
f =- B
2
Equations of type (7.122) have also been derived in the work by Betz and
Kolodziejski using a different methodology [8].
7.7 Comparison of Finite and Infinite Sample Rates 213
sampled without aliasing, but the spectra of the internally generated infinite band-
width replica shows aliasing effects.
After sampling, the noise-power spectral density is flat and the white-noise the-
ory can be applied. Ignoring the squaring loss, the code-discriminator variance from
(8.14) is expressed in the frequency domain as
2
1 RcD ,cD (0)
var D(Sµ ) =
Rc ,cD (0) 2C / N0Tcoh
�c ,c (f )sin2 (π fd)df
R (7.124)
P P
1 f =-
= 2
2C / N0Tcoh B
2
2π � c ,c (f )sin(π fd)df
fR P P
f =- B
2
This expression compares to (7.122) and differs only in the integration bound-
aries of the nominator. In fact, in (7.124), the aliased early-minus-late reference-
signal frequency components correlate with each other, whereas they do not
correlate in the infinite sample-rate case (7.122).
To visualize the difference between the infinite and finite sample-rate case a
BPSK power spectral density is assumed
2
�c ,c (f ) = Tc sin (π fTc )
R (7.125)
(π fTc )2
P P
where Tc is the chip period in seconds. Table 7.1 summarizes the parameter used for
the evaluation of the formulas. It should be noted that (7.123) and (7.124) give the
code variance in seconds-squared and the discriminator spacing is in seconds. For
better visualization in Figure 7.7, the variance (actually the standard deviation) is
plotted in meters and the discriminator spacing is in chips.
The integrals occurring in (7.123) and (7.124) can be solved analytically using
a symbolic mathematics software package. The resulting expressions are not given
here. From Figure 7.7, one clearly sees the divergence of the code-noise variance, if
d 0 and if a finite sample rate is used.
Thus, an early–late tracker does not approach the CRLB if d 0 in a real-
receiver implementation, which necessarily works with a finite sample rate. The
Discriminators
The different correlator types of Chapter 7 are the basis for estimating signal-
parameter values. The correlator values are combined to form the code-phase, Dopp
ler, and carrier-phase discriminator values. Nonlinear operations are required to
remove the carrier-phase dependency of the correlators and to obtain noncoherent
discriminators. The reference functions can be optimized to achieve a desired track-
ing performance in terms of multipath mitigation and thermal noise. This process
is called S-curve shaping. Multiple correlator values can be used to simultaneously
estimate direct and reflected signal parameters. Finally, this chapter outlines how to
compute positioning accuracy from discriminator noise.
… = …N (8.1)
CD ¬ cD (t)
(8.2)
CP ¬ cP (t)
The reference signals fulfill the following requirements within the code phase lin-
earity limit l in order to allow the construction of a linear code-phase discriminator,
217
218 Discriminators
Rc ,cP (0) 0
Because of the first condition, both correlators are uncorrelated if they are
based on the same code-phase delay. Because the correlators use a large number
of signal samples, the central-limit theorem applies and the correlator values are
Gaussian random variables. Uncorrelated Gaussian random variables are indepen-
dent of each other.
The code-phase discriminator is constructed for a real-valued incoming signal as
CD (Re{Sµ }) CD (Re{Sµ })
D = Re α = Re α
CP (Re{Sµ }) CP (Re{Sµ }) - CP (Re{S µ }) + CP (Re{S µ})
-1
CD (Re{Sµ }) CP (Re{S µ }) - CP (Re{S µ })
= Re α 1- ÷
CP (Re{Sµ }) CP (Re{S µ }) ÷
because the D- and P-correlators are independent random variables. For high P-cor-
relator SNR,
8.1 Noncoherent Discriminators 219
Rc ,cD (τ - τ 0 )
D Re α (8.6)
Rc ,cP (τ - τ 0 )
! Rc ,cD (0)(τ - τ 0 )
1= D τ =τ0 Re α ÷ τ =τ 0
τ τ Rc ,cP (0) + Rc ,cP (0)(τ - τ0 ) ÷
(8.7)
Rc ,cD (0)
= Re α
Rc ,cP (0)
A normalization constant of
Typically, e = 0.
For low P-correlator SNR values, the normalization constant needs to be ad-
justed to maintain a unity slope of the discriminator at the origin. This is ignored in
many GNSS-receiver implementations.
For t = t0, the variance of the complex-valued code-phase discriminator is
obtained as
CD (Re{Sµ }) -2
var = var CD (Re{Sµ }) CP (Re{S µ }) (8.10)
CP (Re{Sµ })
because both correlators are independent and because the expected value of CD
vanishes. The inverse of the squared absolute value of the complex Gaussian ran-
dom variable is approximated based on the assumption that its variations are much
smaller than the mean value:
-2
-2 -2 -2 x- x
x = x- x + x = x +1
x
2
-1 2 -1
-2 x- x -2 x- x ÷
= x 1+ ÷ = x 1+ ÷
x ÷ x ÷
220 Discriminators
2
-2 x- x -4 2
x 1- = x x -x+ x
x
-4 2 2
= x x - 2 x Re x - x + x - x ÷
-2 -4
= x + x var x (8.11)
Thus, the code-discriminator variance is obtained by using (7.17) and (7.26) with
fs = 2B and a = (2C / N0 / B)1/2 as
CD (Re{Sµ })
var
CP (Re{Sµ })
-2 -4
var CD (Re{Sµ }) CP (Re{Sµ }) + CP (Re{Sµ }) var CP (Re{Sµ })
-2
1 + κ 2C / N0BTcohRc ,cP (0) 2BTcohRcP ,cP (0)÷
with
κ = κ (ω 0 - ω) (8.13)
2
α CD (Re{Sµ })
var D(Re{Sµ }) = var
2 CP (Re{Sµ })
2 (8.14)
Rc ,cP (0) RcD ,cD (0) RcP ,cP (0)
= 2
1+ 2
÷
Rc ,cD (0) 2C / N0Tcoh κ Rc ,cP (0) C / N0Tcoh κ Rc ,cP (0) ÷
The variance of the code discriminator is mainly given by the ratio of the au-
tocorrelation function at the origin of the code-tracking reference function cD(t)
8.1 Noncoherent Discriminators 221
C F (Re{S µ })
F = Re β
C P (Re{S µ })
Assuming a high P-correlator SNR and using (7.17), (7.33), and (A.76) yields
ω0 - ω �
κ (ω 0 - ω) κ (0) + κ (0)(ω 0 - ω)
F Re -iβ Re -i β
κ (ω0 - ω) κ (0) + κ (0)(ω 0 - ω)
(8.20)
κ (0)(ω 0 - ω) 2 1
= Re -iβ Re iβ Tcoh χ freq (ω 0 - ω)
κ (0) 12
12i
β= 2
+ε ε R (8.21)
Tcoh χ freq
Typically, e = 0.
The normalization constant should be increased for low P-correlator SNR to
compensate for the second-order expected value decrease in (8.18). For high P-
correlator SNR, the expected value is
ω -ω0 �
F = ω - ω0 (8.22)
CF (Re{Sµ }) -2
var = var CF (Re{Sµ }) CP (Re{S µ }) (8.23)
CP (Re{Sµ })
CF (Re{Sµ })
var
CP (Re{Sµ })
-2 -4
= var CF (Re{Sµ }) CP (Re{S µ }) + CP (Re{S µ }) var CP (Re{S µ })
2
2BTcoh RcP ,cP (0)Tcoh χ freq
= 2 (8.24)
12 κ 2C / N0 BTcoh Rc ,cP (τ - τ 0 )
-2
1 + κ 2C / N0 BTcoh Rc ,cP (τ - τ 0 ) 2BTcoh RcP ,cP (0)÷
2
RcP ,cP (0)Tcoh χ freq RcP ,cP (0)
= 2
1+ 2
÷
12C / N0Tcoh κ Rc ,cP (τ - τ 0 ) C / N0Tcoh κ Rc ,cP (τ - τ 0 ) ÷
8.1 Noncoherent Discriminators 223
2
β CF (Re{Sµ })
var F(Re{Sµ }) = var
2 CP (Re{Sµ })
2
122 RcP ,cP (0)Tcoh χ freq RcP ,cP (0)
= 4 2 2
1+ 2
÷ (8.25)
2Tcoh χ freq 12C / N0Tcoh κ Rc ,cP (τ - τ 0 ) C / N0Tcoh κ Rc ,cP (τ - τ 0 ) ÷
This formula compares to the Doppler CRLB (4.84). The variance is larger than
the CRLB if the replica waveform does not match the received waveform or if Dop-
pler or code-phase mismatch occurs.
CP ∼ cP (t) (8.26)
The carrier phase discriminator shall either be based on the argument of the
complex-valued P-correlator (four-quadrant arctan) or on the two-quadrant arctan
function
F = angle{CP }
Im{CP } (8.28)
F = atan
Re{CP }
The first equation is used if the signal is not modulated with data-bit informa-
tion and the second one is used if the signal is modulated with a binary data-bit
stream using a BPSK modulation. Both discriminators are treated identically in the
following example.
Following the same derivation as in Section 4.3.2.8, the expected value of the
carrier-phase discriminator is
ϕ -ϕ 0 �
F = ϕ - ϕ0 (8.29)
for high P-correlator SNR. The magnitude of the expected value reduces at lower
SNR values. A Monte Carlo simulation of the expected value of the carrier phase
8.2 S-Curve Shaping 225
Figure 8.2 Slope of the four-quadrant carrier-phase discriminator for different SNR values.
8.1.4 Clipping
The formulas for the proposed code-phase and Doppler discriminators involve di-
vision by the P-correlator value. Division is a mathematically unsafe operation be-
cause a division by zero may occur. On the contrary, the correlation point should
lie within the linear region of both discriminators.
The discriminator values can therefore be bounded by the extension of the
linear code-phase or Doppler region (or more generally, by the maximum possible
code-phase and Doppler difference). The discriminator values are clipped to stay
within the mentioned thresholds. Clipping is not required for the carrier-phase dis-
criminator.
Clipping is a nontrivial assumption on the tracking-loop performance, which
is required to keep the correlation point in the linear region. If this requirement is
fulfilled, clipping is a useful constraint on the admissible code-phase and Doppler
range.
The formulas of the previous Section 8.1 allow computation of the code-phase,
carrier-phase, and frequency tracking performance for a given set of reference sig-
nals. The reference signals need to fulfill the conditions (8.3), (8.16), and (8.27),
but can otherwise be chosen arbitrarily. Most important, the waveform of the
signals can be chosen flexibly and, by optimizing the waveform, the tracking per-
formance can be improved.
226 Discriminators
The method of using a flexible reference signal is tailored for the software-re-
ceiver approach. First, the software-receiver algorithms of Chapter 9 work with a
16-bit sample resolution, allowing for the represention of sophisticated waveforms.
By contrast, a simple hardware receiver may work with 1- or 2-bit resolution to
represent the reference signals. Second, the generation of one P-correlator reference
signal and of one D-correlator reference signal is sufficient for tracking in most
applications. This corresponds to two complex-valued correlators per channel. By
contrast, some other receiver structures need three (early, prompt, late) or five (very
early, early, prompt, late, very late) complex-valued correlators per channel.
In the context of code tracking, the presented tracking scheme is also called a
code-continuous reference waveform (CCRW) tracking scheme and was first in-
troduced in an article by Weill [2]. The CCRW scheme falls into the class of non-
parametric estimators because no multipath signal parameters are estimated, just
line-of-sight signal parameters [3]. The scheme allows realization of most linear
code discriminators, including the early–late and double-delta discriminator, being
well known for optimum code performance with the GPS C/A-signal. Those corre-
lator types can also be applied for modernized GNSS signals using the BOC, CBOC,
or AltBOC modulation scheme as has been pointed out in Irsigler and Eissfeller’s
work [4]. In Section 7.6.2, the early–late discriminator was investigated for arbi-
trary signal waveforms and colored noise. As another example, the double-delta
code-tracking reference waveform is given by
( ) ( )
cD (t) = c p t - d 2 - c p t + d 2 -
1
2
(c p (t - d) - c p (t + d)) (8.32)
Furthermore, it is typically required that the code discriminator shows only one
stable tracking point (i.e., a zero crossing with positive slope) within the range of
admissible code-tracking errors.
228 Discriminators
Within the linear region, multipath errors are independent on the used correla-
tion scheme and no multipath mitigation is possible. The size of the linear region
needs therefore to be limited to achieve lower multipath errors. On the other hand,
a large linear region results in a high tracking stability. A trade-off between the two
performance figures has to be done depending on the specific target application.
�c ,c (f ) = c (- f )cD (f )
R (8.35)
D
In the following discussion, a method will be presented that shows how the
reference signal can be computed to achieve a desired correlation function Rc ,cD.
The desired correlation function must be known beforehand. The method will
be illustrated for the D-correlator, but can identically be applied also for the P-
correlator, which is used to remove the carrier-phase dependency from the code
discriminator.
The presented method relies on inverting (8.35) by dividing the cross-correla-
tion Fourier transform by the Fourier transform of the incoming signal. The division
can only be performed for nonvanishing signal frequency components. Frequency
components with a vanishing (or very small) signal magnitude must not be present
in the S-curve. In other words, the S-curve and received signal need to be spectrally
compatible in the sense that the spectral content of the signal must be sufficient to
represent the S-curve.
The D-correlator reference signal to obtain the S-curve Rc ,cD is given by
�c ,c (f )
R D
c (- f ) > γ
cD (f ) = c (- f ) (8.36)
0 c (- f ) γ
8.2.4 Discussion
Spectral S-curve shaping is a convenient method to calculate a reference signal to achieve
a given correlation function. It gives a stable tracking scheme (e.g., no BOC ambiguity)
with defined multipath characteristics. One disadvantage of spectral S-curve shaping is
that the S-curve needs to be specified for all code-phase values (not only for the region
of interest), which eventually increases the thermal-noise variance.
In another work, a different S-curve shaping method is described that illustrates
how to compute the code-tracking reference function to obtain a given S-curve [7].
The method relies on fitting a set of shifted autocorrelation functions to the target
S-curve. The fitting is applied only in the region of interest. This fitting problem can
be well-solved for high-bandwidth signals. For low-bandwidth signals, the solution
becomes unstable and the desired S-curve cannot be reproduced. For high-band-
width signals, near-optimum multipath mitigation can be achieved. For BOC sig-
nals, the pull-in range can be increased to avoid unstable tracking points. For BPSK
signals, this method reproduces the double-delta correlator, which is known to have
optimum multipath performance. This method has also been applied in a work by
Paonni for the MBOC-modulation family [8].
Neither spectral shaping methods (spectral and fitting) take into account the
thermal-noise performance. For example, the reference function plotted in Fig
ure 8.5 shows almost no relationship to the PRN-code sequence and worse thermal-
noise performance is expected. Generally, the parameters of the target S-curve (e.g.,
the size of the linear region) need to be chosen empirically to achieve good ther-
mal-noise performance. Development of a modified S-curve shaping scheme that
also includes thermal-noise performance is currently ongoing. The resulting code
discriminator depends on the weights put on the thermal-noise errors and the multi
path errors to assess the total performance.
The method intrinsically provides multipath mitigation for the code phase,
Doppler, and carrier phase simultaneously.
The LSQ procedure provides accuracy estimates for the discriminator values,
which are based on the actual signal amplitude.
beginning of the interval, the Doppler is constant throughout the interval, and the
carrier-phase value jm is defined in the middle of the integration interval, corre-
sponding to a signal model
M M
Tcoh (8.40)
rµ = am c(t µ - τ m )exp{iω m t µ } = am c(t µ - τ m )exp iω m t µ - ÷
m =1 m =1
2
(8.41)
C = (C1 � C B )T
The correlator b uses as reference signal crecb, and the code phase at the begin-
ning of the correlation interval is t0b. The angular Doppler frequency w0b is constant
during the integration interval and the carrier phase j0b vanishes at the midpoint of
the integration interval (note that the carrier-phase estimate is derived from am):
C b (Sµ) � crec
b
, τ 0b, ω 0b, (ϕ0b = 0) (8.42)
Using the results of Section 7.2, a functional and stochastic model of the cor-
relator values is defined that serves as the basis for the following discussion.
The LSQ discriminator is given by a weighted fit of this model to the correlator
values, thereby minimizing
The correlator model Cq is calculated by (7.17) and the covariance between two
correlator values Qv is calculated by (7.24).
The LSQ equation (8.43) is linearized around the linearization point q0,
and the correlator values are modeled for the linearization point as
( )
T
Cq0 = C1(Re{Sµ }; q0 ) � C B (Re{S µ }; q0 ) (8.45)
The differences of the true signal parameters with respect to the linearization
point are
Dq = q - q0 (8.46)
DC = C - Cq0 (8.47)
234 Discriminators
For clarification, it should be noted that the LSQ discriminator involves two
parameter sets called “points”: the correlation point and the linearization point.
The correlation point is given by (8.42) and represents the NCO values used to
compute the correlation values. The correlation point is kept constant throughout
the LSQ procedure and the time-consuming correlation process needs not to be
repeated. This is in contrast to the discussion of Section 4.3 or the MLE approach
of Won [12]. On the other hand, the linearization point is used to linearize the cor-
relator model (7.17). The linearization point may vary during the iterations of the
LSQ procedure. The length of the vectors representing the correlation point (equal
to 2B) and the linearization point (equal to 3M) are different. The linearization
point should be near the true values to ensure that the linearization is a reasonable
approximation. Furthermore, at least one correlation point (t0b, w0b) should lie near
the true parameter values.
The linearized LSQ equation is given as
χ * -1 (8.48)
2 = (DC - ADq) Qv (DC - ADq) min
and the value of c is used to verify that the assumed model (i.e., the number of
multipath parameters) is correct.
Using the amplitude definition (8.39) the bth correlator value is modeled from
(7.17) as
C b (Re{Sµ }; q0 ) =
b
M b (τ m,0 - τ 0 )
am,0 exp{iϕ m,0 }Tcoh fs Rc ,crec
= κ (ω0b - ω m,0 )
m =1
2 (8.49)
b
b (τ m,0 - τ 0 )
M am,0Tcoh fs Rc ,crec
= κ (ω 0b - ω m,0 )
m =1
2
8.3.2 Calibration
A LSQ discriminator relies on an accurate model of the correlation values. In a real
receiver implementation, this model has to be derived from real measurements in
order to correctly account for the front-end filter characteristics. For example in van
Nee’s work, correlation values of a GPS receiver are averaged to get a mean correla-
tion function [10]. Errors in the correlation model will degrade the LSQ-discriminator
performance in terms of introduced biases and a less accurate fit. Furthermore, the
correlator residuals might be mistaken as multipath signals. Calibrated correlator
models are also necessary for signal-quality monitoring and correlator calibration
can be found in Irsigler’s thesis [13].
Especially for the case M = 2, the LSQ algorithm may diverge. This is the case
if no multipath signal is present and the c2-test incorrectly decides for the M = 2
hypothesis. Then, as a fallback, the M = 1 parameters are used as an estimation
result. The correlation point must be chosen such that, for M = 1, the LSQ algo-
rithm always converges. A fallback to the M = 1 case also occurs if the estimated
parameter variances for the M = 2 case exceed fixed-threshold values. This is done
to avoid cases of a nearly singular normal matrix that occurs for (nearly) identical
line-of-sight and multipath code-phase values.
It is proposed to use a complex LSQ algorithm to reduce the dimensions of
the involved matrices by a factor of two. The complex LSQ algorithm implies that
complex-valued code-phase and Doppler improvements are also estimated. The
imaginary parts of these improvements are discarded when the linearization point
is updated.
The proposed algorithm stops at M = 2 because, for higher M values, the al-
gorithm tends to be unstable and the occurrence of more than one specular reflec-
tion is unlikely. For the M = 2 case, the algorithm identifies the line-of-sight signal
parameters as the set of parameters having the smaller code-phase delay. Formally,
the scheme can be extended straightforwardly for M > 2.
sating for the line-of-sight signal effect). The position is chosen during the M = 1
step after the observed-minus-computed correlator values are computed.
Initial values for the complex-signal amplitude are obtained by performing one
LSQ step. After this step, only the complex amplitude is updated and the code-
phase and Doppler values are fixed to their initial values. After the first step, the
code-phase and Doppler values also are updated. This initialization step ensures
that the division by the complex amplitude to remove the carrier-phase dependency
is done with a good signal-amplitude estimate.
8.3.8 Discussion
The presented multipath-estimating discriminator is an adaptive method to mitigate
multipath errors and has an optimal thermal-noise performance if no multipath
signal is present or if the multipath signal is easily detectable. If the multipath sig-
nal remains undetected, the performance is still comparable to a narrow early–late
discriminator. The method seems not to have any disadvantage in terms of perfor-
mance compared to a nonparametric discriminator. The major drawbacks are the
increased computational demands and the calibration efforts.
The method demonstrates also that the nonrandom parameter approach dis-
cussed in Chapter 4 does not provide an optimal estimator for arbitrary signal
power levels if the linearity conditions are not fulfilled. The MLE or LSQ scheme
is optimal only in the limit of infinite signal power. A typical LSQ problem is that
the low-power multipath remains undetected. The nonoptimality also manifests
itself as unstable LSQ iterations, especially if multipath-signal parameters are
estimated. Many local minima of the cost function (8.43) exist. Although there
exist methods to minimize this multidimensional function that reliably converge
to one of these local minima [11], the estimation principle itself remains subop-
timal.
As discussed in Section 4.8, Bayesian techniques are, by definition, also optimal
for low-signal-power values, but rely on correct stochastic-parameter (especially
multipath) models. In the author’s opinion, this information is difficult to provide
in practical situations, especially for multipath-signal parameters. The receiver
must be aware of its surroundings (e.g., rural, indoor, urban). A universal mul-
tipath model would be highly desirable. The Bayesian approach also needs more
computational resources when, for example, a particle filter is used. However, re-
sults presented by others demonstrate that the Bayesian approach may outperform
an epoch-per-epoch LSQ estimation because it exploits the relationship between
multipath-signal-parameter probability density functions at different epochs [14].
The LSQ discriminator and Baysian techniques rely on an accurate correlator
model that might be difficult to obtain in a real implementation because it needs the
receiver to be calibrated for given filters, amplifiers, and antennas.
The canonical method accounts only for thermal noise and ignores other rel-
evant error sources (e.g., unmodeled multipath, transmitter-position errors, atmo-
spheric delays, and so forth). Loop filters are characterized by their bandwidth BL
(see Section 4.3.3.1).
The canonical tracking-loop concept was introduced in the work by Betz and
Kolodziejski [15]. The method was introduced to convert unsmoothed TOA esti-
mates (i.e., high-rate pseudorange estimates) into smoothed TOA estimates (i.e.,
low-rate pseudorange estimates). The conversion from low-rate pseudorange esti-
mates to positions depends on the transmitter geometry and the geometric place-
ment of the transmitters is modeled by a dilution-of-precision factor (DOP). All
estimates are considered to be unbiased.
Following Section 4.3.3.1, let q denote a high-rate pseudorange parameter that
is available with a rate in seconds given by Tcoh. The variance of the corresponding
low-rate pseudorange parameter p is given by
where BL denotes the noise-equivalent loop bandwidth in hertz. This relation holds if
0 < BLTcoh < 1; refer to the work by Kazemi for a recent discussion on tracking-loop
stability [16]. Furthermore, the tracking loop is required to model the dynamics suf-
ficiently well so that transient errors can be ignored.
We assume now that, for one-epoch multiple identical-variance, indepen
dent and unbiased low-rate pseudorange estimates of the same type for different
transmitters are available. An LSQ estimator is used to calculate a (generalized)
position-parameter estimate (e.g., Doppler estimates might be used to obtain one
velocity-component estimate or code-phase estimates to obtain a coordinate esti-
mate). The variance of the (generalized) position estimate is given by
where the DOP factor accounts for the geometric placement of the transmitters.
DOP factors are described in many text books of satellite navigation [17]. The DOP
factor is the diagonal element of the parameters’ covariance matrix obtained under
the assumption of unity low-rate pseudorange variance.
References
[1] van Dierendonck, A. J., “GPS Receivers,” in Global Positioning System: Theory and Ap-
plications, Vol. I, pp. 329–407, Parkinson, B. W., and J. J. Spilker, (eds.), Washington, D.C.:
American Institute of Aeronautics and Astronautics Inc., 1996.
[2] Weill, L. R., “GPS Multipath Mitigation by Means of Correlator Reference Waveform De-
sign,” Proc. Institute of Navigation National Technical Meeting (ION-NTM) 1997, Santa
Monica, CA, September 14–16, 1997, pp. 197–206.
[3] Kaplan, E. D., and C. J. Hegarty, (eds.), Understanding GPS: Principles and Applications,
2nd ed., Norwood, MA: Artech House, 2006.
240 Discriminators
[4] Irsigler, M., and B. Eissfeller, “Comparison of Multipath Mitigation Techniques with Con-
sideration of Future Signal Structures,” Proc. 16th Int. Technical Meeting of the Satellite
Division of the Institute of Navigation (ION-GPS/GNSS) 2003, Portland, OR, September
9–12, 2003, pp. 2585–2592.
[5] Julien, O., et al., “A New Unambiguous BOC(n,n) Signal Tracking Technique,” Proc. Eu-
ropean Navigation Conference (ENC-GNSS) 2004, Rotterdam, May 16–19, 2004.
[6] Pany, T., and B. Eissfeller, “Code and Phase Tracking of Generic PRN Signals with Sub-
Nyquist Sample Rates,” NAVIGATION, Journal of The Institute of Navigation, Vol. 51,
No. 2, 2004, pp. 143–159.
[7] Pany, T., M. Irsigler, and B. Eissfeller, “S-Curve Shaping: A New Method for Optimum
Discriminator Based Code Multipath Mitigation,” Proc. 18th Int. Technical Meeting of
the Satellite Division of the Institute of Navigation (ION-GNSS) 2005, Long Beach, CA,
September 13–16, 2005, pp. 2139–2154.
[8] Paonni, M., et al., “Looking for an Optimum S-Curve Shaping of the Different MBOC
Implementations,” NAVIGATION, Journal of The Institute of Navigation, Vol. 55, No. 4,
2008, pp. 255–266.
[9] Ávila Rodríguez, J. Á., T. Pany, and G. Hein, “Bounds on Signal Performance Regarding
Multipath-Estimating Discriminators,” Proc. 19th Int. Technical Meeting of the Satellite
Division of the Institute of Navigation (ION-GNSS) 2006, Fort Worth, TX, September
26–29, 2006, pp. 1710–1722.
[10] van Nee, R. D. J., et al., “The Multipath Estimating Delay Lock Loop: Approaching Theo-
retical Accuracy Limits,” Proc. IEEE Position Location and Navigation Symposium, Las
Vegas, NV, April 11–15, 1994, pp. 246–251.
[11] Nunes, F. D., F. M. G. Sousa, and J. M. N. Leitao, “BOC/MBOC Multicorrelator Receiver
with Least-Squares Multipath Mitigation Technique,” Proc. 21st Int. Technical Meeting
of the Satellite Division of the Institute of Navigation (ION-GNSS) 2008, Savannah, GA,
September 16–19, 2008, pp. 652–662.
[12] Won, J. H., T. Pany, and B. Eissfeller, “Implementation, Verification and Test Results of a
MLE-Based F-Correlator Method for Multi-Frequency GNSS Signal Tracking,” Proc. 20th
Int. Technical Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS)
2007, Fort Worth, TX, September 25–28, 2007, pp. 2237–2249.
[13] Irsigler, M., Multipath Propagation, Mitigation and Monitoring in the Light of Galileo and
Modernized GPS. University of Federal Armed Forces Munich, Werner-Heisenberg-Weg
39, D-85577 Neubiberg, http://www.unibw.de/unibib/digibib/ediss/bauv, 2008.
[14] Lentmaier, M., et al., “Dynamic Multipath Estimation by Sequential Monte Carlo Meth-
ods,” Proc. 20th Int. Technical Meeting of the Satellite Division of the Institute of Naviga-
tion (ION-GNSS) 2007, Fort Worth, TX, September 25–28, 2007, pp. 1712–1721.
[15] Betz, J. W., and K. R. Kolodziejski, “Extended Theory of Early-Late Code Tracking for a
Bandlimited GPS Receiver,” NAVIGATION, Journal of The Institute of Navigation, Vol.
47, No. 3, 2000, pp. 211-226.
[16] Kazemi, P. L., “Optimum Digital Filters for GNSS Tracking Loops,” Proc. 21st Int. Tech-
nical Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS) 2008,
Savannah, GA, September 16–19, 2008, pp. 2304–2313.
[17] Hofmann-Wellenhof, B., H. Lichtenegger, and E. Wasle, GNSS: Global Navigation Satellite
Systems: GPS, GLONASS, Galileo & More, Vienna: Springer, 2008.
CHAPTER 9
Within a GNSS SDR, there are a few operations that consume most of the process-
ing power. They shall be described and analyzed in the following chapter. A specific
characteristic of the core operations is that their implementation needs fixed-point
arithmetic to run efficiently. By contrast, calculation of the navigation solution or
tracking-loop update (just to name two examples) can be done with the full floating-
point power available to the processor. Furthermore, the core operations run within
a one-dimensional loop and are, from the algorithmic point of view, rather simple.
In some sense they represent a heritage from GNSS hardware receivers. An im-
portant exception in this context is the use of Fourier techniques in a GNSS SDR.
Fourier techniques are more complex from the algorithmic point of view and may
also be implemented with floating-point arithmetic.
It should also be mentioned that the choice of the algorithms is, in some sense,
specific to the SDR implementation developed at the University of Federal Armed
Forces (Munich/Germany), but approaches used by other research institutes are
also discussed.
241
242 Receiver Core Operations
Bits provided by the ADC or maximum bandwidth of the ADC to CPU data
link;
Integer formats supported by the CPU’s vector instructions.
The first factor usually limits the number of bits to less than four (see Table
2.1). Only in special cases (e.g., if a commercial ADC board is used) may the SDR
be forced to work with a larger number of input bits (e.g., 8, 14, or 16 bits are
common COTS ADC board formats that would allow it to cope with high-power
interference). The bit format provided by the ADC card may not use a two’s-
complement format, which is the CPU internal format. For example, sign/magni-
tude formats are commonly used ADC output formats.
The second factor is specific for the CPU and the algorithms used for correla-
tion. For example, CPUs supporting the SSE-SSE4 instructions allow efficient mul-
tiplication of two 16-bit numbers 8-wise in parallel. Another method for signal
multiplication works most efficiently if both signals are represented as 1-bit values.
Then a simple XOR operation can be used to multiply two 1-bit numbers 128-wise
in parallel with SSE/MMX instructions.
244 Receiver Core Operations
Table 9.2 Assembly Language Code Snippet for Bit Size Conversion
Instruction Description Clock Ticks
loop_begin:
movzx esi, BYTE PTR Load next input 8-bit value into esi 0.98
[ebx+eax+length]
mov esi, DWORD PTR [ecx+esi*4] Perform lookup table operation. 7.81
The register esi is the index of the
table and it also receives the lookup
table values
mov DWORD PTR [edx], esi Store resulting 32-bit value 1.03
contained in esi in the
output stream
add edx, 4 Advance output stream by 32 bit 1.14
add ebx, 1 Advance input stream by 8 bit —
jne loop_begin go to beginning of loop, if ebx != 0 —
Total of clock ticks 10.96
Number of instructions 6
9.3 Resampling 245
9.3 Resampling
9.3.1 Algorithm
The algorithm investigated in the following section mimics a hardware-receiver-
like NCO structure. The NCO is represented as a 64-bit-long variable whose up-
per 32-bits act as an index to the precomputed signal table and whose lower 32-bit
values define the NCO phase and NCO-rate resolution. The algorithm (which is
depicted in Figure 9.2) is initialized with the start-NCO value and the NCO incre-
ment. The algorithm runs in a loop for the required number of samples and, for
each loop iteration, the lookup-table entry corresponding to the index given by
the upper 32 bits of the NCO value is stored in the output stream. Afterwards, the
NCO value is incremented by the NCO increment value and the next loop itera-
tion starts.
The algorithm generates a signal with a constant rate (e.g., constant Dop-
pler). If time-variable rate signals are needed, it is recommended to approximate
those signals by piecewise constant rate signals simply for numerical performance
reasons.
denote the sample rate of the signal to be generated in 1/sec. The values of the pre-
computed lookup table shall be denoted as “value” and this term may represent a
PRN-code sample or a sine/cosine value. It is common practice to store the refer-
ence signals “over sampled” [i.e., a single chip of a PRN code or a complete sine
period is usually represented by a fixed number of (multiple) values]. This shall be
denoted by the equation
where o denotes the oversampling factor. Here, “samples” stands either for “chip”
or for “cycles.”
The NCO-phase resolution is determined by o and by the number of bits in
the lower part of the NCO (in the above example, 32) but which will now be
denoted as m for the sake of generality. The NCO resolution in [samples] is given
by
Dφ = 1 m (9.2)
(2 o)
Because the NCO increment is an integer multiple of the NCO resolution, the
NCO rate resolution in samples per second is given by
fs
Dφ = (9.3)
o2m
resolution of 9.1 10–13 cycles, which is, for most applications, sufficiently high.
For an exemplary sample rate of 40.96 MHz, the NCO-rate resolution evaluates to
37 mHz. If a 16-bit NCO fine resolution had been used, then the NCO-rate resolu-
tion would evaluate in this case to 2.44 Hz, which could cause significant distor-
tions in GNSS frequency tracking.
For PRN-code generation, a typical example might be to store the PRN-code
sequence (or derived reference signals) with an oversampling factor of o = 20.
This is, for example, required to achieve a d = 0.1 early–late narrow correlator.
The corresponding 32-bit NCO code-phase resolution is then 1.2 10–11 chip. If a
sample rate of 40.96 MHz is again considered, the NCO-rate resolution evaluates
to 0.00048 chip/s. Assuming a C/A-code chip length of 293m, this corresponds to
a range-rate resolution of 14 cm/s.
9.4 Correlators
Correlators are fundamental for a GNSS SDR because they define a sufficient statis-
tic and all parameter estimates derive from them (see Section 4.4). Computationally,
they represent a dot-product operation written as
N -1
C= si ri (9.4)
i =0
The two vectors si and ri may, for example, represent the incoming-IF signal and
the internally generated reference signal.
9.4 Correlators 249
In cases where the second signal is read from main memory, the bottleneck is
again the memory-bus bandwidth. Table 9.1 reports a value of 2.5 GB/s, which ex-
plains the fact that the correlation is limited by 1,200 Msamples/s. For a correlation
speed of 1,200 Msamples/s, a number of 2.4 GB has to be transferred from the main
memory to the CPU as each sample of the second signal is 2 bytes long.
Fast Fourier transform techniques are mostly used in a GNSS SDR for signal ac-
quisition and exploit the convolution theorem. In fact, they are essential in a SDR
for acquisition, allowing the realization of a large number of effective correlators
needed for fast and sensitive acquisition. However, signal-preprocessing or postcor-
relation FFT tracking algorithms also benefit from an efficient FFT implementa-
tion. In contrast to the other core algorithms above, FFT implementations can be
complex, but because they are used for myriads of other applications, very efficient
libraries exist [2, 5]. Therefore, in this chapter no assembler language snippet is
presented and performance benchmarks are based on the library [2]. On the other
hand, the use of Fourier techniques for correlation can be quite tricky and optimiza-
tion at the algorithmic level will be shown in the following section.
9.5.1 Algorithm
The FFT is a widely known algorithm for efficiently evaluating a DFT. For the fol-
lowing example, we assume that the DFT is defined as
N -1
nk
�sk = sn exp -2π i �sk = FFT{sn } (9.5)
n=0
N
where sn is the time domain representation of a signal of length N and �sk is the
frequency-domain representation. The idea of the FFT became popular with the
publication by Cooley and Tukey, but had already been discovered by C.F. Gauß in
1805 [6]. It is based on the following identity
N / 2 -1 N / 2 -1
2mk (2m + 1)k
�sk = s2m exp -2π i + s2m +1 exp -2π i
m=0
N m=0
N
(9.6)
N / 2 -1 N / 2 -1
2mk k 2mk
= s2m exp -2π i + exp -2π i s2m +1 exp -2π i
m=0
N N m=0
N
The DFT of (9.5) is written as two DFTs whose individual length is half of the
original length. The first DFT is called even DFT (as it operates on elements with
even indices), the second one is called odd DFT. Both vectors are scaled and added
together to get the initially required DFT result (this is usually called a butterfly
operation). The process of splitting the large DFT into two smaller DFTs can be
recursively repeated, ending in an efficient FFT algorithm. The use of FFT tech-
niques is widespread and explained in many textbooks (e.g., [7]). A more detailed
explanation shall not be given here. It should be noted that although mixed-radix
DFT algorithms exist, the use of a radix-2 algorithm as presented above is still the
most common implementation. Consequently, the vector length N needs to be an
integer power of two.
It is a common practice to assume that the calculation of an FFT of size N needs
1 N -1 nk 1
sn = �sk exp 2π i sn = IFFT{�sk } (9.8)
N k=0 N N
N -1
nk
exp 2π i = N δk,0 (9.9)
n=0
N
sn + aN = sn a Z (9.10)
N -1 N -1 N -1
1 2π i
hm = sn rn + m = �sk �rl exp (nk + l(n + m))
n=0 n=0 N2 k,l = 0
N
N -1 N -1
1 2π i 2π i
= �sk �rl exp ml exp n(k + l)
N2 k,l = 0
N n=0
N
N -1
1 2π i (9.11)
= �sk �rl exp ml Nδ k, -l
N2 k,l = 0
N
1 N -1 2π i
= �sk �r-k exp - mk
N k=0 N
1 N -1 2π i
= �s-k �rk exp mk
N k=0 N
The DFTs of the signals s and r are also periodic in the sense of
be applied it allows reduction of the FFT size by a factor of two compared to zero
padding.
1 L-1 nk 1 L-1 nk
rn (p) = �rk (p)exp 2π i = �rk + p exp 2π i
L k=0 L L k=0 N
np
= exp -2π i rn
L
where L is the used IFFT size. The value of L depends on whether zero padding
or circular correlation is used. Thus, the spectrally shifted signal (9.19) in the fre-
quency domain corresponds to the signal multiplied with a complex sine/cosine
carrier in the time domain.
The method of spectral shifting can be applied to a zero-padding or a circular-
correlation correlator. Spectral shifting modifies both techniques, as shown by the
block diagram in Figure 9.6.
The obtained correlation function finally evaluates to
1
hm (p) = IFFT{FLIP{FFT{sn }}SHIFTp {FFT{rn }}}
L
(9.21)
N -1 N -1
(n + m)p
= sn rn + m ( p) = sn rn + m exp -2π i
n=0 n=0
L
1 L1 -1 � mκ
dm = dκ exp 2π i
L1 κ = 0 L1
(κ +1)L 2 -1
1 L1 -1 1 mκ
= h�k exp 2π i
L1 κ = 0 L 2 k =κL 2
L1
(9.24)
L1 -1 (κ +1)L 2 -1
1 mκ (k)
= h�k exp 2π i
L κ =0 k =κL 2
L1
1 L-1 � mκ (k)
= hk exp 2π i
L k=0 L1
k (9.25)
κ (k) =
L2
The symbol denotes the largest integer smaller than the containing number
(i.e., the floor operation). The inverse FFT can be further written as
1 L-1 � κ (k) k k
dm = hk exp 2π im - +
L k=0 L1 L L ÷
(9.26)
1 L-1 � block,m mk
= hk exp 2π i
L k=0 L
with
2π im k
h�kblock,m = h�k exp - - κ (k)÷
L1 L 2
(9.27)
2π im k
= h�k exp - fract
L1 L2 ÷
The function fract gives the fractional part (between zero and one) of a real
number. In the above equations, the factors m/L1 and fract(. . .) are both between
zero and one. The difference of the block-averaged spectrum to the original spec-
trum depends on m and is a slowly varying complex sinusoidal given as
2π im k
b�km = exp - fract (9.28)
L1 L2 ÷
260 Receiver Core Operations
“spectrum” has a sharp peak (the peak of the correlation function) that indicates
a band-limited “waveform” and it is possible to filter the waveform and reduce its
sample rate without significant information loss.
The advantage of this method is that the computational burden is reduced at
least by a factor of L2 and it may find its applications for assisted acquisition (if the
code phase search range can be limited) and for frequency-domain tracking (where
only correlation values around zero Doppler are needed).
M1 -1 M M1 -1
2π i M
= rn1 exp - n2 k2 M1 + n1k1 + n1k2 ÷
n1 = 0 n2 = 0
M M1
M M1 -1
nk nk (9.34)
= δ k2 ,0 rn1 exp -2πi 1 1 + 1 2 ÷
M1 n1 = 0
M1 M
M1 -1
M nk
= δ k2 ,0 rn1 exp -2πi 1 1 ÷
M1 n1 = 0
M1
9.5 Fast Fourier Transform 261
where (9.9) has been used to obtain the third line. As the whole expression of the
third line is multiplied with a Kronecker delta, the value of k2 can also be set to zero
inside the exponential function. Overall, the equation states that the Fourier trans-
form of a periodic signal has nonvanishing components only for a fine frequency
index k2 = 0. The spectral-shifted signal obtained by applying (9.19) to the periodic
signal r is written as
M1 -1
M n (k + p1)
�rk + p = �r(k1 + p1)M M1 + k2 + p2 = δ k2 + p2 ,0 rn1 exp -2π i 1 1 ÷ (9.35)
M1 n1 = 0
M1
1 M1 -1 M1 -1
n (k + p1) (m2 M1 + m1) M
= �s-k1 M M1 + p2 rn1 exp -2π i 1 1 - k1 - p2 ÷ ÷
M1 k1 = 0 n1 = 0
M1 M M1
1 M1 -1 M1 -1
n (k + p1) m1k1 p2 (m2 M1 + m1)
= �s-k1 M M1 + p2 rn1 exp -2π i 1 1 - + ÷
M1 k1 = 0 n1 = 0
M1 M1 M
(9.37)
The Fourier transform of the signal s (which is, in general, not periodic with a
periodicity of M1) obtained with the index convention of periodic signals is given
as
M1 -1 M M1 -1
2π i M
�s-k1 M M1 + p2 = sa2 M1 + a1 exp - (a2 M1 + a1) -k1 + p2 ÷
a1 = 0 a2 = 0
M M1
(9.38)
M1 -1 M M1 -1
a k (a M + a )p
= sa2 M1 + a1 exp -2π i - 1 1 + 2 1 1 2 ÷
a1 = 0 a2 = 0
M1 M
262 Receiver Core Operations
M M1 -1
(a2 M1 + a1)p2
t(p2 )a1 = sa2M1 + a1 exp -2π i ÷ (9.40)
a2 = 0
M
The matrix t(p2)a1 averages the (in general, nonperiodic) signal s over all peri-
ods. Samples corresponding to the same index a1 (but to different periods) are aver-
aged. The averaging is repeated for all possible values of the fine-frequency index
and the result is stored in a different row; for every row, different fine-frequency
compensation factors are applied before averaging. The expression (9.39) can be
further simplified as
= (1/M_1) \sum_{k_1=0}^{M_1-1} \sum_{a_1=0}^{M_1-1} \sum_{n_1=0}^{M_1-1} t(p_2)_{a_1} r_{n_1} \exp\{-2\pi i [n_1(k_1+p_1)/M_1 - a_1 k_1/M_1 - m_1 k_1/M_1 + p_2(m_2 M_1 + m_1)/M]\}
= (1/M_1) \sum_{k_1=0}^{M_1-1} \tilde{t}(p_2)_{-k_1}\, \tilde{r}_{k_1+p_1} \exp\{-2\pi i [-m_1 k_1/M_1 + p_2(m_2 M_1 + m_1)/M]\}
= IFFT{ FLIP{FFT{t(p_2)}} SHIFT_{p_1}{FFT{r_{n_1}}} } \exp\{-2\pi i\, p_2(m_2 M_1 + m_1)/M\}     (9.41)
The matrix t(p2)a1 is independent of p1. After the FFTs of the matrix t(p2)a1 and of
one period of the signal r are computed, it suffices to calculate one IFFT of size M1
to search one Doppler bin. Recall that, within the circular-correlation approach of
Section 9.5.3.2, an IFFT of size M is required to search one Doppler bin.
A particularly simple case occurs if a Doppler bin with a fine frequency of p_2 =
0 is searched. In this case, the computation of the p_2 = 0 row of the matrix t(p_2)_{a_1}
simplifies to an averaging of samples.
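As an illustration of this special case, the following MATLAB sketch builds the p_2 = 0 row of t(p_2) by plain summation over the code periods and searches this Doppler bin with a short IFFT of size M_1; the code, the sizes, and the shift are illustrative assumptions.

M1   = 1023;                          % assumed primary-code period in samples
M2   = 20;                            % assumed number of periods per block
code = sign(randn(1, M1));            % placeholder spreading code
s    = repmat(code, 1, M2);           % reference signal over the whole block
r1   = circshift(code, [0 200]);      % one period of the (periodic) received signal
t0   = sum(reshape(s, M1, M2), 2).';  % p2 = 0 row of t(p2): plain summation of samples
% conj() replaces FLIP for the real-valued reference; p1 = 0 is assumed
h    = ifft(conj(fft(t0)) .* fft(r1));
[~, mhat] = max(abs(h));              % code-phase estimate (peak at index 201)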
h_m(p) = (1/M) \sum_{k=0}^{M-1} \tilde{v}_k\, h_m^u(p-k)
with
h_m^u(p) = \sum_{n=0}^{M-1} s_n u_{n+m} \exp\{-2\pi i (n+m)p/M\}     (9.47)
being the correlation function of the signal s with the strictly periodic signal u. To
obtain h_m^u(p), the method of Section 9.5.6 can be used. The correlation function
including the secondary code, h_m(p), is the properly weighted sum of the correlation
functions obtained for the periodic signal, h_m^u(p), at different frequency offsets. The
evaluation of (9.46) is numerically not efficient because it contains a sum over the
full spectral width (i.e., k ranges from 0 to M – 1). Further simplifications can be ob-
tained if we analyze \tilde{v}_k. Using the index notation of Section 9.5.6, it evaluates to
\tilde{v}_k = \tilde{v}_{k_1 M/M_1 + k_2} = \sum_{a_1=0}^{M_1-1} \sum_{a_2=0}^{M/M_1-1} v_{a_2 M_1} \exp\{-(2\pi i/M)(a_2 M_1 + a_1)(k_1 M/M_1 + k_2)\}
= \sum_{a_2=0}^{M/M_1-1} v_{a_2 M_1} \exp\{-2\pi i\, a_2 k_2 M_1/M\} \sum_{a_1=0}^{M_1-1} \exp\{-(2\pi i a_1/M)(k_1 M/M_1 + k_2)\}     (9.48)
\beta(k) = (1/M_1) \sum_{a_1=0}^{M_1-1} \exp\{-2\pi i\, a_1 k/M\}
\beta(k_1 M/M_1 + k_2) = (1/M_1) \sum_{a_1=0}^{M_1-1} \exp\{-(2\pi i a_1/M)(k_1 M/M_1 + k_2)\}     (9.49)
and simplifying it to
\beta(k) = (1/M_1) \sum_{a_1=0}^{M_1-1} \exp\{-2\pi i\, a_1 k/M\} = (1/M_1)\,\frac{1 - \exp\{-2\pi i M_1 k/M\}}{1 - \exp\{-2\pi i k/M\}}     (9.50)
yields
Here, \tilde{v}_{k_2} denotes the Fourier transform of the secondary code accounting just
for the index k_2 (actually a short Fourier transform). Overall, the correlation func-
tion is then given as
h_m(p) = (M_1/M) \sum_{k=0}^{M-1} \tilde{v}_{k_2(k)}\,\beta(k)\, h_m^u(p-k) \approx (M_1/M) \sum_{k=-BM_2}^{BM_2-1} \tilde{v}_{k_2(k)}\,\beta(k)\, h_m^u(p-k)     (9.52)
w_{m_1}(p) = w_{m_1}(p_1 M/M_1 + p_2) = (1/M_1) \sum_{l_1=0}^{M_1-1} \tilde{t}(p_2)_{-l_1}\, \tilde{r}_{l_1+p_1} \exp\{2\pi i\, m_1 l_1/M_1\}     (9.53)
yields
h_m(p) \approx (1/M) \sum_{k_1=-B}^{B-1} \sum_{k_2=0}^{M_2-1} \tilde{v}_{k_2}^*\, \beta(k_1 M/M_1 + k_2)\, w_{m_1}(p - k_1 M/M_1 - k_2)\, \exp\{-2\pi i\, p_2(m_2 M_1 + m_1)/M\}     (9.54)
and a block diagram of the resulting acquisition scheme is shown in Figure 9.11. It
generalizes the scheme of Figure 9.9 for arbitrary secondary codes.
tion of full-length (i.e., M) FFTs. For a tiered code with a single period, a summa-
tion of the order of M2 terms is required, thus the processing gain is partly lost.
Nevertheless, the method allows for working with shorter FFTs, which is of special
importance if only a limited portion of the correlation function needs to be com-
puted (i.e., if the code-phase search range can be limited).
FFT sizes, and the Playstation behaves exactly the opposite. The reason lies in the
fact that, for cache-based systems (like the Pentium processors), the FFT perfor-
mance breaks down if the data does not fit into the cache, whereas for multicore
systems like the cell, the data-transfer overhead is simply too large for short FFTs.
The FFT characteristics of both systems are compared in Table 9.7. It should be
noted that the values reported therein should not be taken literally, especially be-
cause reported FFT-performance values are highly implementation-dependent and
because power consumption values have only indicative character as they are not
based on real measurements [13].
Two things should, however, be noted. First, the cell gives significantly more
GFLOPS per watt of consumed electrical power, which may be explained by the fact that
this system is newer (see also Section 9.7). Second, the ratio between the reported
peak-performance values (which is, in both cases, derived from the performance of
the multiply-and-add command) and the achievable FFT performance is, in both
cases, around 0.2–0.3.
operations. Here, we assume that the first signal is of a fixed length and it is not
periodic. For each correlation value, the second signal is taken from a vector of size
M + N – 1, each time shifted by one sample. We assume that M ≪ N and that M +
N – 1 is an integer power of two.
A real-valued FFT needs half the operations of a complex-valued FFT; thus, (2),
(3), and (5) all together need
The last expression shows that for large N, the number of operations increases
with N log N, whereas in the time domain the increase is quadratic (assuming M
proportional to N).
In the following example, the break-even number of correlators Mbe shall be de-
termined, for which both the time domain and frequency domain require the same
amount of operations. It is obtained by solving the equation
where we assume that the number of correlators is much smaller than the number
of samples. For example, if a GPS C/A-code signal with a coherent integration time
of 1 ms and a sample rate of 20.48 MHz is considered, then frequency-domain
techniques become more efficient if more than 71 correlators need to be calculated,
which corresponds to a code-phase range of 3.5 chips.
The ratio of the time span represented by N samples to the CPU
time needed by the FFT implementation to complete the correlation defines the correlation
efficiency of an FFT-based correlator algorithm. The effective number of correla-
tors #Corr is defined as the correlation efficiency multiplied by N. The
effective number of correlators corresponds to the number of hardware correlators
that would give the exact same result within the same time. Note that hardware
correlators are understood to work intrinsically in real time (and not faster). The
definition of #Corr assumes that all available correlation values from the frequency
domain correlation are exploited (i.e., M = N is assumed for the most effective use
of the frequency-domain correlation). The effective number of correlators is thus
Here Tcoh denotes the length of the signal in seconds (usually the coherent inte-
gration time given as the number of samples N divided by the sample rate). TCPU is
the time of the CPU to perform the computation in seconds, fOPS is the number of
FFT operations the CPU can perform within one second. The longer Tcoh is, the more
efficient the FFT. For example, for a 20-ms signal, N = 215, and fOPS = 2.5 GOPS, the
effective number of correlators evaluates to 150,602. If Doppler preprocessing can
be applied (see Section 9.5.6), the number of effective correlators further increases
(e.g., by a factor of approximately 10–20 for the case of the GPS C/A-code).
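The following MATLAB sketch estimates the effective number of correlators on a given machine from the definition above; the sample rate and FFT length correspond to the 20-ms example, whereas the measured CPU time naturally depends on the host computer and the FFT library used.

fs   = 1.6384e6;                 % assumed sample rate (N samples correspond to 20 ms)
N    = 2^15;                     % FFT length
Tcoh = N/fs;                     % time span represented by the N samples [s]
s = randn(1, N); r = randn(1, N);
nRep = 100;                      % average over several runs for a stable timing
tic;
for k = 1:nRep
    h = ifft(conj(fft(s)) .* fft(r));   % one full set of N correlation values
end
Tcpu = toc/nRep;                 % CPU time per correlation [s]
numCorrEff = Tcoh/Tcpu * N;      % effective number of correlators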
CPU processing power is needed to run the receiver core algorithms discussed in
this chapter. In a typical high-end GPS C/A-code configuration, tracking all satel-
lites in view, the average CPU load is about 80%. The L1 USB front end consumes
less than 2.5W, yielding a (theoretical) overall receiver power consumption of
below 30W, ignoring the power consumption of hard disks, screens, and other
peripherals.
A real test showed that the run time of the system is around two hours (display
switched off), depending on the chosen sample rate as listed in Table 9.9. One
recognizes that reducing the sample rate by a factor of four increases the runtime
by 35%. This indicates that the receiver power consumption is less affected by the
actual processing done and that most of the power is needed to keep the system
alive (e.g., to power the hard disk, mainboard, and chip set). Furthermore, the CPU
cannot reduce its clock rate (to save power) even when the low sample rate is used
because it is constantly working. In the high-sample-rate mode, the system seems to
consume a power of 42W.
The runtime of the presented system is not particularly long. Although the sys-
tem is not optimized for GNSS receiver usage (e.g., the hard disk is always powered
on), it might be interesting to know when technology evolution will allow using a
laptop as a GNSS receiver replacement. To be more specific, we would like to es-
timate when this software will allow 10 hours of continuous operation without external
power while providing the performance of a geodetic GPS L1/L2 receiver.
The answer can only be found on a qualitative basis founded on Moore's law.
It predicts that the number of gates doubles every 18 months. On the other hand,
it is observed that the power dissipated by CPUs is approximately constant. Thus, it is
expected that, for a given algorithm, the required power halves every 18 months. As
we need an increase in processing capability per watt by a factor of 5.26 (= 600 minutes / 114
minutes), which corresponds to log2(5.26) ≈ 2.4 doublings of 18 months each, we estimate
that this will be available in about 3.6 years. Furthermore, increasing the number of fre-
quency bands from L1 to L2 requires doubling the processing power, which needs
1.5 additional years. Thus, we could estimate that in 2011, laptop systems could
be available that would allow for running a L1/L2 GNSS SDR continuously for 10
hours. Although these arguments are very heuristic, they indicate that a high-end
portable GNSS SDR could be realized by COTS components in the not-so-distant
future. The discussion also indicates that a laptop may not be the best hardware
platform for a GNSS SDR in terms of power consumption. A dedicated hardware
platform development would bring a significant increase in power efficiency.
For example, an embedded system like that described in the work by Toradex
delivers about 2.15 GOPS with a power consumption of around 800 mW [15]. The
board is based on an Intel PXA320 processor running at 806 MHz. A multiply-and-
add command accumulating 2 × 4 16-bit integers (thus performing 8 operations)
needs 3 clock ticks to be executed [16]. The ratio between peak performance and
power consumption is about six times better than for the test system of Table 9.1.
The extent to which this can be attributed to the processor architecture or to the
general system design is, however, unclear. The cell processor alone (without other
components like chip set, memory, and so forth) has a similar ratio between peak
performance and power consumption like the embedded system (see Table 9.7) but
works with 32-bit floating point numbers.
9.8 Discussion
The presented algorithms form the core of a GNSS SDR. For signal tracking, ef-
ficient implementations have been sought and, for acquisition, frequency-domain
techniques are used. Tracking and acquisition behave differently. Whereas acquisi-
tion is performed on-demand, tracking algorithms run continuously and define the
ultimate real-time behavior of a GNSS SDR.
From the discussion of Section 9.6, it is obvious that the presented reference
assembler implementation of the core (tracking) algorithms is not efficient enough
to allow a direct coding of a multichannel high-end SDR under the given premises
(like multibit signal representation and a high sample rate of, e.g., 23.104 MHz).
Although the implementation itself is optimized and uses only a few assembler in-
structions, signal generation and bit conversion execute rather slowly on the given
CPU architecture, mostly because the required assembler command (table lookup)
is not well supported by the CPU. By contrast, correlation and FFT execute suf-
ficiently fast. Modern general-purpose processors partly support the implementa-
tion of those algorithms because they provide special vector instructions, and due
to the multicore architecture they natively facilitate processing of GNSS signals in
parallel. So far, there is no support for instructions working with a small (e.g., 2–4)
number of bits, which would be particularly advantageous for processing GNSS-
signal samples. Instead, the processors support at minimum 8- or 16-bit in-
structions, which mainly causes a memory-bus bandwidth problem. Some algorithms,
however (e.g., vector multiplication), can be efficiently implemented with 1-bit data.
Signal-processing algorithms benefit only partly from a large number of bits: cor-
relation benefits to some extent (e.g., for interference cancellation), whereas FFT algorithms
do benefit, because a larger number of bits allows longer FFT lengths.
The most efficient way to speed up a GNSS SDR to allow real-time operation is
higher-level optimization. Two key issues have been identified. First, massive paral-
lel correlators should be implemented via FFT techniques. Second, reuse of refer-
ence signals should be done to avoid reference-signal generation via resampling.
Both techniques in unison are the key ingredients for a real-time multifrequency
GNSS receiver.
It is essential that a GNSS SDR respects the CPU memory architecture and
tries to exploit the various CPU caches. A GNSS SDR processes a large amount
of data and reading those data from main memory would cause a drastic perfor-
mance reduction. Working with a 1-bit or 2-bit signal representation, as proposed
by Ledvina, definitely reduces the memory bus load and also allows efficient signal
correlation [4]. The main drawback of this method is, however, the cumbersome
access of signal samples because the CPU does not support direct bit access and,
of course, the increased implementation losses of the receiver due to the low
number of bits.
A software radio requires a suitable hardware platform. Modern embedded (or
UMPC) processors with high computational power need only a few watts in op-
eration. This advantage is lost if the software radio runs on a conventional
PC whose other components (graphics chip, hard disk, or display) need much more
power and are not needed for a software radio. For example, a laptop like that
described in Table 9.1 is definitely not the best platform to run a GNSS SDR. An
embedded system like [15], the proposed RTK system of Chapter 10, or a system
built around the cell processor yields around six times more operations per watt
than the laptop of Table 9.1.
References
[1] SiSoftware Ltd., “SiSoftware Sandra Lite (Win32 x86), 2008.1.12.34,” http://www.
sisoftware.eu/, 2007.
[2] Intel Corp., “Intel Integrated Performance Primitives v5.0 for Windows on Intel Pentium
Processors,” http://www.intel.com, 2007.
[3] Fog, A., “Instruction Tables, Lists of Instruction Latencies, Throughputs and Microopera-
tion Breakdowns for Intel and AMD CPU’s,” http://www.agner.org/optimize, 2008.
[4] Ledvina, B., et al., “Performance Tests of a 12-Channel Real-Time GPS L1 Software Re-
ceiver,” Proc. 16th Int. Technical Meeting of the Satellite Division of the Institute of Navi-
gation (ION-GPS) 2003, Portland, September 9–12, 2003, pp. 679–688.
[5] Frigo, M., and S. G. Johnson, “FFTW: Fastest Fourier Transform in the West,” http://www.
fftw.org, 2007.
[6] Cooley, J. W., and J. W. Tukey, "An Algorithm for the Machine Calculation of Complex
Fourier Series," Mathematics of Computation, Vol. 19, No. 90, 1965, pp. 297–301.
[7] Diniz, P. S. R., E. A. B. da Silva, and S. L. Netto, Digital Signal Processing, System Analysis
and Design, Cambridge, U.K.: Cambridge University Press, 2002.
[8] Sagiraju, P. K., S. Agaian, and D. Akopian, “Reduced Complexity Acquisition of GPS Sig-
nals for Software Embedded Applications,” IEE Proc. Radar, Sonar, and Navigation, Vol.
153, No. 1, 2006, pp. 69–78.
[9] Akopian, D., “A Fast Satellite Acquisition Method,” Proc. 14th Int. Technical Meeting of
the Satellite Division of the Institute of Navigation (ION-GPS) 2001, Salt Lake City, UT,
September 11–14, 2001, pp. 2871–2881.
[10] Sagiraju, P. K., G. V. S. Raju, and D. Akopian, “Fast Acquisition Implementation for
High Sensitivity Global Positioning Systems Receivers Based on Joint and Reduced Space
Search,” IET Radar, Sonar & Navigation, Vol. 2, No. 5, 2008, pp. 376–387.
[11] Hofstee, H. P., “Introduction to the Cell Broadband Engine,” http://www-05.ibm.com/
e-business/uk/innovation/special/satin/nonflash/pdf/2053_IBM_CellIntro.pdf, 2005.
[12] Chow, A. C., G. C. Fussum, and D. A. Brokenshire, “A Programming Example: Large
FFT on the Cell Broadband Engine,” http://www-01.ibm.com/chips/techlib/techlib.nsf/
techdocs/0AA2394A505EF0FB872570AB005BF0F1/$file/GSPx_FFT_paper_legal_0115.
pdf, 2005.
[13] Wang, D.T., “ISSCC 2005: The CELL Microprocessor,” http://www.realworldtech.com/
page.cfm?ArticleID=RWT021005084318&p=2, 2005.
[14] Anghileri, M., et al., “Performance Evaluation of a Multi-Frequency GPS/Galileo/SBAS
Software Receiver,” Proc. 20th Int. Technical Meeting of the Satellite Division of the In-
stitute of Navigation (ION-GNSS) 2007, Fort Worth, September 25–28, 2007, pp. 2749–
2761.
[15] Toradex AG, “Colibri XScale(R) PXA320 Datasheet,” Rev. 1.3, http://www.toradex.com/
downloads/Colibri_PXA320_Datasheet_Rev_1.3.pdf, 2007.
[16] Intel Corp., “Intel XScale® Technology: Intel® Wireless MMX™ 2 Coprocessor, Program-
mers Reference Manual,” Rev. 1.5, http://download.intel.com/design/intelxscale/31451001.
pdf, 2006.
Chapter 10
GNSS SDR RTK System Concept
In this chapter, an innovative receiver concept will be outlined. It exploits the inher-
ent software-receiver advantages to realize a real-time kinematic (RTK) positioning
system. RTK positioning relies on a roving receiver that is aided by a reference sta-
tion providing correction data to the rover in real time. It aims at a 2-cm accuracy
level, required to perform geodetic measurements, in forest areas.
The proposed RTK concept exploits the following advantages of software radio
technology:
ranges from embedded solutions or help provisions for inexpensive laptops for de-
veloping countries or for educational purposes.
These devices rely on recently developed components minimizing the power
consumption. An exemplary selection of devices is listed in Table 10.1 [1, 2]. Inte-
grating these components into a single module yields a computing platform with
50%–100% of the computational performance of the laptop test system of Table 9.1
[3]. However, the power consumption is drastically reduced. It should, however, be
noted that an integrated module like the one by LiPPERT supports only a simple
graphical display.
Both CPUs—the Pentium M 780 [4] and the Atom Z530 [1]—have a similar
clock rate (2.26 GHz/1.6 GHz) and support the same assembler instructions (the
Atom additionally supports SSE3 instructions). Both are single-core CPUs, but the
Atom supports hyper-threading, thus simulating two cores. The Pentium M was in-
troduced in March 2003, the Atom Z530 in April 2008. Their ratio of the thermal
design power is 27W/2.2W = 12.3.
On the other hand, Moore's law for power consumption, as discussed in Sec-
tion 9.7, predicts a power ratio of 2^(61/18) = 10.5, which agrees with the observation.
Here, 61 is the number of months that passed between the market introduction of
the two CPUs and Moore’s law predicts halving of the power consumption every
18 months.
Overall, a clear tendency toward low-power-consumption computing platforms
is observed, which is well-suited for the implementation of a GNSS SDR.
110 kbit/s. The availability of the EDGE-based data transfer is high. For exam-
ple, T-Mobile offers EDGE over most of Germany and other providers upgraded
their GSM-based networks with EDGE in 2008. Higher data rates can be achieved
with UMTS (downlink 384 kbit/s, uplink 64 kbit/s) or HSDPA/HSUPA (downlink
7.2 Mbit/s, uplink 3.6 Mbit/s), but the availability of these services, especially in
rural areas, is questionable. Also, other services like WiMAX (IEEE 802.16) will
offer high data rates in the future. A second important observation is the introduc-
tion of flat rates for data services. A monthly fixed price (e.g., 10 EUR per month
in the German O2 network) covers all data transfers up to a volume of 200 MByte
in 2008.
Overall, a clear tendency is observed that data-link constraints are, at least by
a factor of 10, less stringent than 10 years ago; this additional available bandwidth
should be used to transfer more and better correction data between the reference
station and the rover.
This section gives an overview of the proposed system installation, target applica-
tions, used signals, and a possible test bed.
10.2.1 Setup
The core elements of the system are the reference and the rover GNSS receiver. The
antenna of the reference receiver is installed at a fixed location with known coor-
dinates. The roving receiver moves dynamically in the vicinity of the reference re-
ceiver. The rover and reference receiver are connected by a data link. Both receivers
process signals received from the same GNSS satellites. A schematic of the proposed
system is shown in Figure 10.1.
The setup is basically identical to a conventional differential GPS system with
one reference station. The important difference lies in the fact that the rover receiver
For this discussion, we assume that the system will process GPS C/A-code mea-
surements and the pseudolites will broadcast a BOC(1,1) Galileo E1 OS-like signal.
The extension of the proposed system to other and more frequencies is possible, but
will not be discussed here. Signal parameters are summarized in Table 10.4. The use
of only one signal carrier frequency implies that the baseline length (i.e., the dis-
tance between reference receiver and rover) is limited to a few kilometers, because
ionospheric effects are not corrected. Nevertheless, the reader shall be reminded
that this is a niche-market application, providing the user with high accuracy in
an area that, so far, is not covered. Therefore, it is expected that the limited range
is acceptable. Additionally, it should be mentioned that the range of a pseudolite
signal is also limited, because the pseudolite signals require a free line of sight (at
least with respect to topographic variations).
The proposed system concept follows a conventional RTK system design, which
is not reviewed here. Instead, the reader is referred to well-known GNSS text-
books [8–10]. The roving system determines its position based on double-difference
carrier-phase measurements and fixes the carrier-phase integer ambiguities based on
integer LSQ adjustments, also using code pseudorange measurements. It is widely
recognized that the ability of an RTK system to correctly resolve double-difference
carrier-phase ambiguities is essential for centimeter-accurate positioning. The abil-
ity of the system to correctly resolve the carrier-phase ambiguities strongly depends
on the number of received and processed signals [11]. Therefore, it is required that
both receivers track all in-view signals and continuously output code-pseudorange,
Doppler, and carrier-phase measurements.
In the following section, the innovative aspects of the proposed system are de-
scribed; these are required to achieve the high navigational accuracy in a degraded
signal environment. They are based on SDR technology and include a high-sensitivity
acquisition engine, assisted tracking, and low-cost pseudolites.
The roving system works in a degraded signal environment; the GNSS signals are
especially attenuated by the canopy. Therefore, the roving receiver requires a high-
sensitivity acquisition engine. Conventional RTK GNSS receivers are typically not
equipped with such an engine because conventional carrier-phase measurements
in a degraded signal environment have many cycle slips, or the carrier phase can-
not be tracked at all. Therefore, there is little need to acquire low-power signals.
In contrast, most mass-market GPS receivers are equipped with a high-sensitivity
engine, which is based on massive parallel correlators or on frequency-domain tech-
niques [12].
For the proposed RTK system, a special acquisition unit is designed. It is based
on several key components:
The absolute value of the clock drift of the roving receiver can be calibrated if at
least one satellite signal is acquired (or tracked), because satellite/pseudolite param-
eters are known and the user dynamics are limited to less than 2 m/s (corresponding
to a Doppler uncertainty of ±10.51 Hz at GPS L1, i.e., 2 m/s divided by the L1
wavelength of about 0.19m). This allows for the calculation of a predicted Doppler
value, and the difference between the predicted and the observed Doppler value can
be attributed to the clock drift.
An uncertainty in the user position causes a Doppler variation, which shall be
assessed below. This uncertainty is important for the satellite signals (but not for the
pseudolite signals) because the satellites move with a speed of several km/s.
The range rate \dot{r} is defined as the temporal variation of the geometric distance be-
tween the transmitter located at x_{sat} and the receiver at x_{rec}. The transmitter and
the receiver have velocities v_{sat} and v_{rec}. The range rate \dot{r} is given as the velocity
difference projected onto the line of sight
\dot{r} = \frac{1}{|x_{sat} - x_{rec}|}\,(x_{sat} - x_{rec}) \cdot (v_{sat} - v_{rec})     (10.1)
If we assume that the receiver position changes by an amount δx_{rec}, the range
rate changes by an amount δ\dot{r}
\dot{r} + \delta\dot{r} = \frac{1}{|x_{sat} - x_{rec} - \delta x_{rec}|}\,(x_{sat} - x_{rec} - \delta x_{rec}) \cdot (v_{sat} - v_{rec})     (10.2)
which is used to obtain a first-order approximation of the range-rate change as
\dot{r} + \delta\dot{r} \approx |x_{sat} - x_{rec}|^{-1} (x_{sat} - x_{rec}) \cdot (v_{sat} - v_{rec})
  - |x_{sat} - x_{rec}|^{-3} ((x_{sat} - x_{rec}) \cdot \delta x_{rec}) ((x_{sat} - x_{rec}) \cdot (v_{sat} - v_{rec}))
  - |x_{sat} - x_{rec}|^{-1} \delta x_{rec} \cdot (v_{sat} - v_{rec})
= \dot{r} - |x_{sat} - x_{rec}|^{-3} ((x_{sat} - x_{rec}) \cdot \delta x_{rec}) ((x_{sat} - x_{rec}) \cdot (v_{sat} - v_{rec}))
  - |x_{sat} - x_{rec}|^{-1} \delta x_{rec} \cdot (v_{sat} - v_{rec})     (10.4)
The maximum absolute value of the range-rate change is bounded by letting the
inner products in (10.4) take their maximum or minimum values. Furthermore, the
receiver velocity can be ignored with respect to the satellite velocity, finally yielding
|\delta\dot{r}| \le |x_{sat} - x_{rec}|^{-1} |\delta x_{rec}|\,|v_{sat} - v_{rec}| + |x_{sat} - x_{rec}|^{-1} |\delta x_{rec}|\,|v_{sat} - v_{rec}|
          \approx 2\,|x_{sat} - x_{rec}|^{-1} |\delta x_{rec}|\,|v_{sat}|     (10.5)
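A small MATLAB sketch evaluating the bound (10.5) for assumed, purely illustrative numbers (they are not taken from the book's test setup) may help to put it into perspective:

c      = 299792458;              % speed of light [m/s]
fL1    = 1575.42e6;              % GPS L1 carrier frequency [Hz]
dxRec  = 1000;                   % assumed position uncertainty [m]
vSat   = 3900;                   % typical GPS satellite velocity [m/s]
range  = 22000e3;                % assumed receiver-to-satellite distance [m]
dRdot  = 2*dxRec*vSat/range;     % bound (10.5) on the range-rate change, approx. 0.35 m/s
dfDopp = dRdot*fL1/c;            % corresponding Doppler uncertainty, approx. 1.9 Hz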
• The method achieves a higher sensitivity with the same total integration time.
• Mitigation of signal interference is improved by taking into account the navigation data bits. Aiding data is available for GPS C/A-code signals to perform a data wipe-off.
• Due to the long coherent integration time, the obtained frequency estimate is precise, allowing an easy handover to tracking.
• Efficient implementations exist for time-assisted acquisition.
• The Doppler search space is narrow; thus, the number of Doppler bins is limited and the computational demands are reasonable.
\sigma_y^2(\tau) = \frac{1}{2}\langle (y(t+\tau) - y(t))^2 \rangle = h_{-2}\,\frac{(2\pi)^2}{6}\,\tau + h_{-1}\,2\ln 2 + \frac{h_0}{2\tau}     (10.7)
10\, f_{RF}\, \sigma_y(1/f_s)\, \sqrt{T_{coh}/f_s} \le 0.5     (10.9)
9.3, and for all other operations it is assumed that their performance is determined
by the memory bandwidth. This is a realistic assumption because the actual op-
eration to be performed is almost trivial; only getting and storing the values from
and into memory requires significant time. For example, to multiply the signal and
replica spectrum in the frequency domain, 3 (= two input vectors plus one output
vector) × 8 (= a complex number is represented as two float numbers, each 4 bytes
long) × 4,194,304 (= FFT length) bytes ≈ 96 MByte of data need to be transferred, which
takes 37.5 ms for a memory bandwidth of 2.5 GByte/s.
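The estimate can be reproduced with a few MATLAB lines; the FFT length and the memory bandwidth are the assumed example values from above.

Lfft  = 4194304;        % FFT length (complex samples)
bytes = 3*8*Lfft;       % two input vectors plus one output vector, 8 bytes per complex float
bw    = 2.5*2^30;       % assumed memory bandwidth [byte/s]
tMult = bytes/bw;       % approx. 0.0375 s = 37.5 ms for the spectrum multiplication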
Tables 10.8 and 10.9 give the time needed to acquire the first and following sig-
nals. Assuming that each signal acquisition is successful, four signals are acquired
after 7.4 seconds. This allows the receiver to compute a position and stop the acqui-
sition process. Note that for Table 10.9 the timing uncertainty is at the microsecond
level. Thus, the IFFT computation time is negligible.
The assistance data provided by the reference station to the rover receiver includes
information to aid tracking. The assistance data link provides:
First, the broadcast navigation data bits allow the rover receiver to remove
them and to extend the coherent integration time during tracking. More specifically,
the dependency on the broadcast navigation data bits is removed by multiplying the
rover correlation values with the data-bit values obtained through the assistance
link (data wipe-off process). The assistance data also allows limiting the region of
Doppler and code-phase values for single channels (vector-hold tracking). Double-
difference correlator values are formed for high carrier-phase tracking stability.
Vector-hold tracking and double-difference correlators are described in the follow-
ing section.
Table 10.9 Warm-Start Acquisition Execution Time (After the First Signal Has Been Acquired)
Parameter Number of Runs Time per Run Total Time
Signal generation 1 10 ms 10 ms
Resampling 1 13 ms 13 ms
Forward FFT 1 461 ms 461 ms
Frequency multiplication 24 38 ms 912 ms
Decimation 24 13 ms 312 ms
Inverse FFT 24 — —
Peak search 24 — —
Total 1,708 ms
D\tilde{\varphi} = D\varphi - \delta t_{rec}^{k,m}\omega^{k,m} + \delta t_{rec}^{l,m}\omega^{l,m} + \delta t_{rec}^{k,n}\omega^{k,n} - \delta t_{rec}^{l,n}\omega^{l,n}
  + 2\pi f_{nom}(\delta t_{rec}^{l,m} - \delta t_{rec}^{l,n} + \delta t_{rec}^{k,n} - \delta t_{rec}^{k,m})
= D\varphi - \delta t_{rec}^{k,m}\omega^{k,m} + \delta t_{rec}^{l,m}\omega^{l,m} + \delta t_{rec}^{k,n}\omega^{k,n} - \delta t_{rec}^{l,n}\omega^{l,n} + 2\pi f_{nom}\, D\delta t_{rec}     (10.18)
The value of the last term, 2\pi f_{nom} D\delta t_{rec}, can be calculated exactly because
it is completely based on internal receiver time readings. Based on the channel
output of Figure 10.8, a carrier-phase-related exponential P^{k,m} is obtained from
the P-correlator CP^{k,m} and the midpoint phase \varphi_0^{k,m} of the reference signal (see
Section 7.3.1)
10.5.2.3 Unwrapping
The double-difference correlator principle relies on first forming the double-
difference correlator via (10.20) and synchronizing the correlator by apply-
ing (10.22). Then those correlator values are used to set up a carrier-phase
unwrapping algorithm. The advantage of (10.22) over undifferenced carrier-
phase tracking is that common mode errors like receiver and satellite clock
errors are eliminated and tracking-loop bandwidths can be chosen to be much
smaller. Also, possibly present navigation data bits can be ignored when forming
(10.20).
For simplicity, a conventional PLL is used to perform the double-difference
phase unwrapping process. A PLL unwraps and filters the carrier-phase estimates
simultaneously. PLL equations for various loop orders are readily available [19].
The PLL noise performance is characterized by the tracking loop bandwidth BPLL.
The reader should keep in mind that other unwrapping algorithms and/or filter
algorithms can also be used.
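The following MATLAB sketch illustrates such an unwrapping PLL in its simplest, first-order form; the loop-gain relation, the bandwidth, and the simulated correlator values are assumptions for illustration only and do not represent the implementation used for the results discussed below.

Trate = 20e-3;                               % correlator output interval [s]
BPLL  = 5;                                   % assumed loop bandwidth [Hz]
k1    = 4*BPLL*Trate;                        % first-order loop gain
truePhi = cumsum(0.05*randn(1, 500));        % simulated slowly varying phase [rad]
P = exp(1i*truePhi) + 0.1*(randn(1,500) + 1i*randn(1,500));   % noisy correlator values
phiHat = 0; phiOut = zeros(1, numel(P));
for n = 1:numel(P)
    err       = angle(P(n)*exp(-1i*phiHat)); % phase discriminator, output in (-pi, pi]
    phiHat    = phiHat + k1*err;             % loop update; phiHat is filtered and unwrapped
    phiOut(n) = phiHat;
end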
The variance of the double-difference noise is twice the variance of the weak-
signal undifferenced noise. Contributions from the strong signals are disregarded.
Based on (4.88) we obtain
\sigma_{th}^2 = \frac{2 B_{PLL}}{C/N_0}\left(1 + \frac{1}{2 T_{coh}\, C/N_0}\right)     (10.26)
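For orientation, a short MATLAB sketch evaluates (10.26) for assumed parameter values (these numbers are illustrative and are not the settings of Figure 10.9):

CN0dB = 20;                                   % weak-signal C/N0 [dBHz]
CN0   = 10^(CN0dB/10);                        % [1/s]
Tcoh  = 20e-3;                                % coherent integration time [s]
BPLL  = 1;                                    % PLL bandwidth [Hz]
sigTh2   = 2*BPLL/CN0*(1 + 1/(2*Tcoh*CN0));   % double-difference thermal-noise variance [rad^2]
sigThDeg = sqrt(sigTh2)*180/pi;               % RMS phase error, approx. 9 deg for these values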
The interpolation noise is treated in a similar manner as the thermal noise. The
interpolation noise is caused by frequency-tracking errors and is then filtered by
the double-difference PLL. The frequency-tracking-error variance \sigma_\omega^2 is described
in Sections 4.3.2.7 and 8.1.2. By simple error propagation based on (10.18) and
(10.25), the closed-loop interpolation noise variance is given by
\sigma_{in}^2 = 4 T_{coh} B_{PLL}\, \langle \delta t^2 \rangle_{Time}\, \sigma_\omega^2     (10.27)
where \langle \delta t^2 \rangle_{Time} is the mean-square interpolation time. Again, we assume that the closed-
loop interpolation noise error budget is dominated by the two weak signals (i.e., the
Doppler of the strong signals is sufficiently well known).
Using, for example, (4.84) to describe the Doppler-discriminator noise and con-
verting the Doppler-discriminator noise into a closed-loop frequency-tracking error
(assuming a frequency-loop bandwidth BFLL), we obtain
Equation (10.30) allows calculating the maximum allowable BPLL value for a
given value of BFLL. An exemplary evaluation is shown in Figure 10.9. The figure
shows that BFLL = 1 Hz and BPLL = 10 Hz allow cycle-slip-free tracking of 20-dBHz
signals. For a slowly moving user, BPLL = 1 Hz allows tracking signals down to
13 dBHz.
The proposed positioning system can be enhanced by pseudolites if the GNSS signal
attenuation is too strong to allow precise positioning. The pseudolite signals are de-
signed to overcome an additional penetration loss of 25 dB (see Table 10.2). A key
requirement on the pseudolites is that they are of low complexity (and thus of low
cost) and that they can be installed easily. The basic idea is to set up the pseudolites
at points with known coordinates. After switching them on, they start broadcasting
their signal autonomously and independent of other pseudolites or the GPS signals.
Neither calibration nor any other complex startup procedure is required.
Pseudolite signals match well with a software radio, because the signal pro-
cessing as described for example in Chapter 9 works with 16 bits, which repre-
sents a dynamic amplitude range of 96 dB. Multibit ADCs are readily available;
for example, the ADC14L020 provides 14-bit resolution at 20 MHz with a power
consumption of 150 mW [22]. Overall, a software radio can cope well with strong
signal-power variations caused by the near-far effect and the time-variable at-
tenuation of a pseudolite signal. Furthermore, the proposed positioning system is
based on differential methods. Thus, neither the pseudolite clock error needs to
be determined, nor is there a need to broadcast a navigation data message by the
pseudolite. Because of the differential positioning method, pseudolite hardware bi-
ases are partly eliminated. Furthermore, the use of the carrier phase makes the
system robust against distortions of the signal-autocorrelation function caused by
amplitude-dependent front-end effects. A low-complexity pseudolite can be used as
discussed by Manandhar [23]. In the latter work, the pseudolites act as proximity
sensors and are not used for ranging.
In Table 10.11, a possible pseudolite configuration is presented, which is adapted
from the existing transmitters placed in the GATE area [7]. However, in contrast to
the real GATE transmitters, only a pilot signal is broadcast. The broadcast signal is
not synchronized to any time frame and a comparably low quality oscillator is used.
Therefore, the pseudolite can, in principle, be realized with a small FPGA for IF
signal generation plus an RF upconverter, reducing component costs to a mini-
mum. The pseudolite can operate autonomously or, optionally, a low-data-rate
link allows control of the broadcast signal power.
The major disadvantage of pseudolite signals in comparison to satellite-borne
signals is the near-far effect. Signals received from nearby transmitters are of much
higher power than signals received from more distant transmitters. A worst-case
situation is depicted in Figure 10.10.
If we assume that the minimum distance from the rover to the pseudolites can
be limited to be larger than 50m, geometric signal-power variations on the order of
26 dB are the consequence (assuming that the inverse signal power is proportional
to the squared distance).
ing it out for the channels tracking other signals [25]. Two conditions need to be
fulfilled when pseudolite signals are present:
• The system must be able to detect strong pulses and mitigate the effect on weaker signals. The degradation caused by the pulsed signals needs to be acceptable.
• The pulse repetition rate must be high enough to allow continuous-phase tracking.
In the next section it shall be demonstrated that the proposed pulsing scheme is
able to fulfill both requirements.
10.6.2.3 Mitigation
Once a pulse has been detected, its effect has to be mitigated. As has been men-
tioned above, the advantage of a pulsed signal is the use of pulse blanking, which is
Figure 10.11 Energy detector (1 ms) output receiving thermal noise plus one weak GPS C/A-code
signal (20 dBHz) plus 6 pseudolite signals. Mean single pseudolite C/N0: left = 39 dBHz, right =
49 dBHz.
T_{over;rep} = \frac{T_{R,nom}^2}{\Delta k\, T_{R,delta}}     (10.34)
Figure 10.12 GPS C/A-code acquisition results, with and without the presence of pseudolite
signals.
The rover receiver gets all reference station data and forms double-difference
code pseudoranges and double-difference correlator values. The double-difference
correlator values are used to estimate the double-difference carrier phase as de-
scribed in Section 10.5.2. Code and phase observations are processed in a Kalman
filter to estimate the float position with the float carrier-phase ambiguities. Based
on the ambiguity covariance matrix, the Z-transform is used to decorrelate the
ambiguities and to possibly constrain the ambiguities to their integer values [11].
Proper verification is needed to avoid false ambiguity fixing. If the ambiguities are
fixed, the position is recalculated using the integer ambiguity values to obtain the
fixed solution. During the double-difference phase tracking process, cycle-slip de-
tection and, possibly, correction are necessary to avoid gross errors in the fixed
ambiguities.
References
[1] Intel Corp., “Intel® Atom™ processor Z5xx Series, Data Sheet,” Rev 1, Doc. No. 319535-
001US, http://download.intel.com/design/chipsets/embedded/datashts/319535.pdf, 2008.
[2] Intel Corp., “Intel® System Controller Hub US15W for Embedded Computing,” Doc.
No. 319545-002, http://download.intel.com/design/chipsets/embedded/prodbrf/319545.
pdf, 2008.
[3] LiPPERT Embedded Computers GmbH, "CoreExpress™-ECO, Next Generation Computer-On-Module, Specifi-
cations," http://www.lippertembedded.com, 2008.
[4] Intel Corp., “Intel® Pentium® M Processor with 2-MB L2 Cache and 533-MHz Front Side
Bus, Data Sheet,” Rev 2, Doc. No. 305262-002, http://download.intel.com/design/mobile/
datashts/30526202.pdf, 2005.
[5] Radio Technical Commission For Maritime Services, S. C. No. 1, “RTCM Recommended
Standards For Differential GNSS (Global Navigation Satellite Systems) Service, Version
2.3,” RTCM Paper 136-2001/SC104-STD, 2001.
[6] IFEN GmbH, “GATE Home, Hompage of the Galileo Test and Development Environ-
ment,” http://www.gate-testbed.com, 2008.
[7] Heinrichs, G., et al., “First Outdoor Positioning Results with Real Galileo Signals by Using
the German Galileo Test and Development Environment,” Proc. 20th Int. Technical Meet-
ing of the Satellite Division of the Institute of Navigation (ION-GNSS) 2007, Fort Worth,
TX, September 25–28, 2007, pp. 1576–1587.
[8] Hofmann-Wellenhof, B., H. Lichtenegger, and E. Wasle, GNSS: Global Navigation Satellite
Systems: GPS, GLONASS, Galileo & More, Vienna: Springer, 2008.
[9] Leick, A., GPS Satellite Surveying, New York: Wiley, 2004.
[10] Teunissen, P. J. G., and A. Kleusberg, GPS for Geodesy, Berlin: Springer, 1996.
[11] Tiberius, C., et al, “0.99999999 Confidence Ambiguity Resolution with GPS and Galileo,”
GPS Solutions, Vol. 6, No. 1–2, 2002, pp. 96–99.
[12] SiRF Technology, Inc., http://www.sirf.com, 2008.
[13] Wikipedia, “Network Time Protocol (NTP),” http://en.wikipedia.org/wiki/Network_Time_
Protocol, 2008.
[14] Rakon Ltd., “IT5305BE, 23.104 MHz, SMD GPS TCXO,” http://www.rakon.com,
2008.
[15] IFEN GmbH, “NavX®-NSR: GPS/Galileo Navigation Software Receiver,” Brochure,
http://www.ifen.com, 2007.
[16] López-Risueño, G., et al., "User Clock Impact on High Sensitivity GNSS Receivers," Proc.
ENC-GNSS 2008, Toulouse, April 22–25, 2008.
[17] Sıçramaz Ayaz, A., T. Pany, and B. Eissfeller, “Performance of Assisted Acquisition of the
L2CL Code in a Multi-Frequency Software Receiver,” Proc. 20th Int. Technical Meeting of
the Satellite Division of the Institute of Navigation (ION-GNSS) 2007, Fort Worth, TX,
September 25–28, 2007, pp. 1830–1838.
[18] Winkel, J. Ó., Modeling and Simulating GNSS Signal Structures and Receivers, University
of Federal Armed Forces Munich, Werner-Heisenberg-Weg 39, D-85577 Neubiberg, http://
www.unibw.de/unibib/digibib/ediss/bauv, 2003.
[19] Kaplan, E. D., and C. J. Hegarty, (eds.), Understanding GPS: Principles and Applications,
2nd ed., Norwood, MA: Artech House, 2006.
[20] Anghileri, M., et al., “Performance Evaluation of a Multi-Frequency GPS/Galileo/SBAS
Software Receiver,” Proc. 20th Int. Technical Meeting of the Satellite Division of the
Institute of Navigation (ION-GNSS) 2007, Fort Worth, TX, September 25–28, 2007,
pp. 2749–2761.
[21] Gurtner, W., “RINEX: The Receiver Independent Exchange Format Version 2,” ftp://igscb.
jpl.nasa.gov/igscb/data/format/rinex2.txt, 1998.
[22] National Semiconductor, “ADC14L020 - 14-Bit, 20 MSPS, 150 mW A/D Converter from
the PowerWise® Family,” http://www.national.com, 2008.
[23] Manandhar, D., et al., “Development of Ultimate Seamless Positioning System Based on
QZSS IMES,” Proc. 21st Int. Technical Meeting of the Satellite Division of the Institute of
Navigation (ION-GNSS) 2008, Savannah, September 16–19, 2008, pp. 1698–1705.
[24] Ávila Rodríguez, J. Á., On Generalized Signal Waveforms for Satellite Navigation, Univer-
sity of Federal Armed Forces Munich, Werner-Heisenberg-Weg 39, D-85577 Neubiberg,
http://www.unibw.de/unibib/digibib/ediss/bauv, 2008.
[25] Elrod, B. D., and A. J. van Dierendonck, “Pseudolites,” in Global Positioning System:
Theory and Applications, Volume II, pp. 51–79, Parkinson, B. W., and J. J. Spilker, (eds.),
Washington, D.C.: American Institute of Aeronautics and Astronautics Inc., 1996.
[26] Kazemi, P. L., “Optimum Digital Filters for GNSS Tracking Loops,” Proc. Int. Technical
Meeting of the Satellite Division of the Institute of Navigation (ION-GNSS) 2008, Savan-
nah, GA, September 16–19, 2008, pp. 2304–2313.
[27] Spilker, J. J., Jr., “GPS Signal Structure and Theoretical Performance,” in Global Posi-
tioning System: Theory and Applications, Volume I, pp. 57–120, Parkinson, B. W., and
J. J. Spilker, (eds.), Washington, D.C.: American Institute of Aeronautics and Astronautics
Inc., 1996.
Chapter 11
Exemplary Source Code
The software should help the reader to better understand the material presented in the
main text and to make use of it more easily. The source code makes precise refer-
ence to sections and equation numbers of the main text. The reader should open the
MATLAB scripts and the C files and work through the source code with the help
of the book. The reader should then truly understand the methods and be able to
modify them.
11.2 Setup
Running the computer programs requires only a few small steps, which are de-
scribed in the following. The software can be downloaded from http://www.
artechhouse.com/static/reslib/pany/pany1.html to a directory in your PC. In the fol-
lowing, it is assumed that the software is copied to a folder named “c:\navsigproc.”
11.3 Routines
The provided routines are all located in the chosen directory, for example, "c:\navsigproc,"
and will be described in Sections 11.3.1 through 11.3.4.
signal is not related to the settings in “init.m.” The Doppler search bin as well as the
Doppler summation range B can be modified. The latter parameter allows the
computational burden to be reduced at the cost of reduced sensitivity.
11.3.4 Simplified Vector Tracking with Multipath Mitigation and Spectral Whitening
Spectral Whitening
The script “track.m” includes a bit-true simulation of a received and quantized
navigation signal and processes this signal with the multipath-estimating discrimi-
nator of Section 8.3. The signal is first generated by starting the script “generate
Signal.m.” After starting “track.m,” code-phase, Doppler, and carrier-phase errors
are plotted as a function of time. Additionally, the estimated number of multipath
signal components are displayed.
You may specify the used navigation signal type, the user dynamics, and the
multipath settings in the script “init.m.” Furthermore, the position and type of the
correlators can be modified within “track.m.” The underlying P- and D-correlator
reference signals can be changed in “init.m.” as well as the number of ADC bits,
sample rate, and bandwidth. The spectral characteristics of the received noise can
be selected and the method of spectral whitening (see Section 6.4) can be switched
on or off.
The script “track.m” also demonstrates how the estimated errors have to be
combined with predicted-pseudorange values. This is the basis for vector tracking,
where the prediction is based on the receiver’s position estimate.
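For orientation, a possible call sequence is sketched below; it assumes the installation folder of Section 11.2, and whether "init.m" is invoked automatically by the other scripts or has to be run first is an assumption made here for illustration.

cd('c:\navsigproc');   % installation folder chosen in Section 11.2
init;                  % settings: signal type, dynamics, multipath, ADC, whitening
generateSignal;        % bit-true simulation of the received and quantized signal
track;                 % tracking with the multipath-estimating discriminator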
Appendix
This appendix summarizes a few methods that are used to derive detection and
estimation algorithms in the main text.
A.1.1 Definitions
In this section, lower case letters are used to indicate all random variables. If we
have two i.i.d. real-valued random variables x and y, then we define a new complex-
valued random variable z as
z = x + iy (A.1)
The mean value <. . .> and the variance var <. . .> of z is given by
z = x +i y
(A.2)
2 2
var z = zz - z z = x 2 + y 2 - x - y = 2 var x = 2 var y
This syntax can be extended to two vectors of real valued i.i.d. random vari-
ables x and y, namely
z = x + iy (A.3)
With a symmetric matrix Q and the hermitian conjugate z* of z, we obtain
Furthermore, we see
p(x) = \frac{1}{(2\pi)^{L/2}\det(Q)^{1/2}} \exp\left\{-\frac{1}{2}(x-\mu_x)^T Q^{-1}(x-\mu_x)\right\}     (A.8)

p(x,y) = \frac{1}{(2\pi)^{L}\det(Q)} \exp\left\{-\frac{1}{2}\left[(x-\mu_x)^T Q^{-1}(x-\mu_x) + (y-\mu_y)^T Q^{-1}(y-\mu_y)\right]\right\}
       = \frac{1}{(2\pi)^{L}\det(Q)} \exp\left\{-\frac{1}{2}(z-\mu_z)^* Q^{-1}(z-\mu_z)\right\}     (A.9)
with
µz = µx + iµy (A.10)
See also Theorem 15.1 in Kay's book for further information [2]. There, the
matrix C = 2Q is used, yielding
p(z) = \frac{1}{\pi^L \det(C)} \exp\{-(z-\mu_z)^* C^{-1}(z-\mu_z)\}     (A.11)
We also obtain
\langle z z^* \rangle = C = 2Q     (A.12)
where x is the vector of parameters and l is the deterministic part of the measure-
ments. Both are assumed to be complex-valued.
We assume the following linearized observation model
k = l(x_0) + A\,\Delta x + v     (A.14)
where the vector of parameters is split into the a priori values (initial guesses) x_0
and the corrections \Delta x,
x = x_0 + \Delta x     (A.15)
\langle v \rangle = 0, \quad \langle v v^* \rangle = Q_v, \quad \mathrm{var}\langle k \rangle = \langle (k - \langle k \rangle)(k^* - \langle k^* \rangle) \rangle = Q_v     (A.16)
\frac{\partial}{\partial \Delta x^*}\left[(k - A\Delta x)^* Q_v^{-1}(k - A\Delta x)\right] = -A^* Q_v^{-1}(k - A\Delta x) = 0     (A.20)
Note that this is a set of complex-valued equations. The derivative with respect
to \Delta x yields the same set of equations (but complex-conjugated). When using the
complex notation, the variables \Delta x and \Delta x^* are considered to be independent (de-
terministic) variables; for example,
\frac{\partial \Delta x}{\partial \Delta x^*} = 0     (A.21)
To prove this statement, consider a sufficiently continuous real-valued function
f(x,y) of two real-valued unknowns. A necessary condition for it to achieve a mini-
mum or maximum value is that its derivatives with respect to x and y are zero.
Substituting x and y with two independent complex-valued variables defined by
\begin{pmatrix} z \\ \tilde{z}^* \end{pmatrix} = \begin{pmatrix} 1 & i \\ 1 & -i \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}     (A.22)
results in
\begin{pmatrix} \partial f/\partial x \\ \partial f/\partial y \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix}\begin{pmatrix} \partial f/\partial z \\ \partial f/\partial \tilde{z}^* \end{pmatrix}     (A.23)
and thus
\begin{pmatrix} \partial f/\partial x \\ \partial f/\partial y \end{pmatrix} = 0 \quad\Leftrightarrow\quad \begin{pmatrix} \partial f/\partial z \\ \partial f/\partial \tilde{z}^* \end{pmatrix} = 0     (A.24)
Therefore, looking for the function's extreme values with respect to z and \tilde{z}^* is
equivalent to looking for the extreme values with respect to x and y. We require that
the function f is holomorphic (complex differentiable) in z and \tilde{z}^*, so that its
first derivatives exist. We further require that z and \tilde{z}^* are complex conjugates of each other,
thereby solving the original LSQ problem. This implies that the two
equations on the right-hand side of (A.24) are identical, since
The solution of (A.20) is the well known LSQ estimator equation, but now in
complex notation
The normal matrix and its inverse are hermitian (N* = N) and thus its eigen-
values are real-valued. The estimated corrections are themselves random variables
(because they depend on the observations) and from (A.18) we obtain for the ex-
pected value
\langle (\Delta\hat{x} - \langle \Delta\hat{x} \rangle)(\Delta\hat{x}^* - \langle \Delta\hat{x}^* \rangle) \rangle = N^{-1} A^* Q_v^{-1} \langle (k - \langle k \rangle)(k^* - \langle k^* \rangle) \rangle\, Q_v^{-1} A N^{-1}     (A.29)
parameters may be natural in the case of the signal amplitude (i.e., I- and Q-
component) but is less obvious for parameters such as the code phase or
Doppler frequency. In the latter case, it may be necessary to force the param-
eters to be real-valued by imposing boundary conditions (e.g., x_k - \bar{x}_k = 0).
This is most easily achieved after the setup of the LSQ adjustment by decompos-
ing the complex-valued normal equation in its real and imaginary part. As a ca-
veat, decomposing increases the dimension of the equation system by a factor of
two. In case of an uncorrelated complex-valued variable (uncorrelated with respect
to the other variables), the following observation is very useful to circumvent the
decomposition.
Suppose the complex-valued estimated parameter x_i has a variance of
\mathrm{var}\langle \Delta x_i \rangle = (N^{-1})_{ii}. Then the real part has the variance
\mathrm{var}\langle \Delta x_{i,re} \rangle = \mathrm{var}\left\langle \tfrac{1}{2}(\Delta x_i + \Delta\bar{x}_i) \right\rangle
 = \tfrac{1}{4}\left(\mathrm{var}\langle \Delta x_i \rangle + \mathrm{var}\langle \Delta\bar{x}_i \rangle\right) = \tfrac{1}{2}\,\mathrm{var}\langle \Delta x_i \rangle = \frac{(N^{-1})_{ii}}{2}     (A.31)
with the same being true for the imaginary part and
\mathrm{var}\left\langle \begin{pmatrix} \Delta x_{i,re} \\ \Delta x_{i,im} \end{pmatrix} \right\rangle = \frac{1}{2}\begin{pmatrix} (N^{-1})_{ii} & 0 \\ 0 & (N^{-1})_{ii} \end{pmatrix}     (A.33)
In that case, forcing the imaginary part to be zero does not change the real part,
which greatly simplifies the whole adjustment procedure. The imaginary compo-
nent can simply be discarded. Note, however, that the same is generally not true if
\Delta x_i correlates with other parameters.
= \mathrm{Tr}\{(Q_v^{-1} - Q_v^{-1}AN^{-1}A^*Q_v^{-1} - Q_v^{-1}AN^{-1}A^*Q_v^{-1} + Q_v^{-1}AN^{-1}A^*Q_v^{-1}AN^{-1}A^*Q_v^{-1})\,\langle k k^* \rangle\}
= \mathrm{Tr}\{(Q_v^{-1} - Q_v^{-1}AN^{-1}A^*Q_v^{-1} - Q_v^{-1}AN^{-1}A^*Q_v^{-1} + Q_v^{-1}AN^{-1}A^*Q_v^{-1})\,\langle k k^* \rangle\}
Q_v = \mathrm{var}\langle k \rangle = \langle (k - \langle k \rangle)(k^* - \langle k^* \rangle) \rangle = \langle k k^* \rangle - \langle k \rangle\langle k^* \rangle     (A.35)
yielding
\langle k k^* \rangle = Q_v + \langle k \rangle\langle k^* \rangle = Q_v + A\,\Delta x\,\Delta x^* A^*     (A.36)
\frac{\langle \chi \rangle}{2} = \mathrm{Tr}\{(Q_v^{-1} - Q_v^{-1}AN^{-1}A^*Q_v^{-1})(Q_v + A\Delta x\Delta x^*A^*)\}
= \mathrm{Tr}\{1 - Q_v^{-1}AN^{-1}A^* + Q_v^{-1}A\Delta x\Delta x^*A^* - Q_v^{-1}AN^{-1}A^*Q_v^{-1}A\Delta x\Delta x^*A^*\}
= \mathrm{Tr}\{1 - Q_v^{-1}AN^{-1}A^* + A^*Q_v^{-1}A\Delta x\Delta x^* - A^*Q_v^{-1}AN^{-1}A^*Q_v^{-1}A\Delta x\Delta x^*\}     (A.37)
= \mathrm{Tr}\{1 - Q_v^{-1}AN^{-1}A^* + A^*Q_v^{-1}A\Delta x\Delta x^* - A^*Q_v^{-1}A\Delta x\Delta x^*\}
= \mathrm{Tr}\{1 - Q_v^{-1}AN^{-1}A^*\} = \mathrm{Tr}\{1_r - 1_u\} = r - u
where 1_h denotes an h × h unit matrix. The symbol r denotes the number of
complex-valued observations, and the symbol u the number of complex-valued esti-
mated corrections.
The adjusted observations k_a have the variance
\mathrm{var}\langle k_a \rangle = Q_v - A N^{-1} A^*
and are normalized as
\bar{k} = Q_v^{-1/2} k_a \quad\text{with}\quad \langle \bar{k} \rangle = 0     (A.41)
and we obtain for χ
\chi/2 = \bar{k}^* \bar{k}     (A.42)
M = \langle \bar{k}\bar{k}^* \rangle = Q_v^{-1/2}(Q_v - AN^{-1}A^*)Q_v^{-1/2} = 1 - Q_v^{-1/2}AN^{-1}A^*Q_v^{-1/2}     (A.43)

MM = (1 - Q_v^{-1/2}AN^{-1}A^*Q_v^{-1/2})(1 - Q_v^{-1/2}AN^{-1}A^*Q_v^{-1/2})
= 1 - Q_v^{-1/2}AN^{-1}A^*Q_v^{-1/2} - Q_v^{-1/2}AN^{-1}A^*Q_v^{-1/2} + Q_v^{-1/2}AN^{-1}A^*Q_v^{-1}AN^{-1}A^*Q_v^{-1/2}
= 1 - Q_v^{-1/2}AN^{-1}A^*Q_v^{-1/2} = M     (A.44)
Furthermore, it is hermitian because
M^* = 1 - Q_v^{-1/2}AN^{-1}A^*Q_v^{-1/2} = M     (A.45)
D D^* = 1     (A.47)
and a diagonal matrix Λ. Since M is hermitian and idempotent, its eigenvalues can
either be one or zero.
\langle (D^*\bar{k})(D^*\bar{k})^* \rangle = D^* M D = \Lambda     (A.49)
where Λ is a diagonal matrix containing the eigenvalues of M, and its main
diagonal is populated either by zeros or ones. Thus, the elements of D^*\bar{k} are uncor-
related complex-valued random variables whose variance is either zero or one. If
we express the expected value of χ as a function of D^*\bar{k}, we obtain
\frac{\langle \chi \rangle}{2} = \langle \bar{k}^*\bar{k} \rangle = \langle \bar{k}^* D D^* \bar{k} \rangle = \langle (D^*\bar{k})^*(D^*\bar{k}) \rangle = \sum_{j=1}^{r} \Lambda_{jj}     (A.50)
We know from (A.37) that this expression also evaluates to r–u, and we conclude
that the vector D^*\bar{k} contains exactly r–u random variables with unit variance. The
other u elements are random variables of zero variance.
So far, we have made no assumption on the actual distribution of our measurement
errors. However, to obtain a quality measure for the LSQ adjustment, we assume
that the measurement errors v are Gaussian. Because the operations involving random
variables up to this point were linear, the random variables contained in \bar{k}
are also either complex Gaussian variables with unit variance or they vanish. Thus, for
χ we obtain [see (A.2)]
\chi = 2\bar{k}^*\bar{k} = 2\sum_{j=1}^{r} \bar{k}_j \bar{k}_j^* = \sum_{j=1}^{r-u}\left(2(\bar{k}_{j,re})^2 + 2(\bar{k}_{j,im})^2\right)     (A.51)
The minimum cost χ is the sum of the squares of 2(r–u) real-valued Gaussian random
variables, each having unit variance. Therefore, χ is distributed according to the chi-squared
distribution with 2(r–u) degrees of freedom, which is formally written as
\chi \sim \chi^2_{2(r-u)}     (A.52)
Compared to the LSQ adjustment with real-valued observations and real-
valued estimated parameters, we observe that the degrees of freedom, the effective
number of parameters, and the effective number of observations double.
A.1.6 Example
To illustrate the complex LSQ adjustment method, let's assume we have the follow-
ing simplified model for a correlator output l_i
l_i = a R(\tau + \tau_i)     (A.53)
Here, l_i is the complex-valued correlator output for a correlator with an offset of
\tau_i with respect to the prompt correlator value. The received code phase is symbol-
ized as τ. The symbol a denotes the complex-valued signal amplitude.
x = \begin{pmatrix} a \\ \tau \end{pmatrix}     (A.55)

x_0 = \begin{pmatrix} a_0 \\ 0 \end{pmatrix}     (A.56)
Assuming both observations are uncorrelated and each have unit weight, the
inverse of the normal matrix is given by
N^{-1} = \begin{pmatrix} \dfrac{1}{2R(\tau_1)^2} & 0 \\ 0 & \dfrac{1}{2|a_0|^2 R'(\tau_1)^2} \end{pmatrix}     (A.60)
In that case, forcing the imaginary part of the code-phase error estimate to van-
ish will not influence the other parameters. In fact, the complex-valued estimated
value for the code phase is given by
\Delta\hat{\tau} = (N^{-1}A^*k)_2 = \frac{\bar{a}_0}{2|a_0|^2 R'(\tau_1)^2}\left(R'(\tau_1)k_1 + R'(-\tau_1)k_2\right)
 = \frac{1}{2 a_0 R'(\tau_1)}(k_1 - k_2) = \frac{1}{2 a_0 R'(\tau_1)}(l_1 - l_2)     (A.61)
A.1.7 Discussion
Using complex variables in the LSQ adjustment potentially simplifies expressions
as the carrier-phase error dependence involving sine/cosine terms can be avoided
by using a complex-valued signal amplitude and observations. This reduces the
dimension of the involved matrices by a factor of 2. The expressions are especially
useful if the estimated parameters are uncorrelated because, in that case, the re-
quirement of real-valued parameters is very easily formulated. Otherwise, those
conditions have to be introduced as boundary conditions after forming the normal
matrix.
of the output from a complex mixer bringing the navigation signal from the RF to
the IF or even to baseband.
For complex sampling, the Nyquist criterion states that the sampling frequency
must be higher than the dual-sided signal bandwidth. The received samples are ran-
dom variables composed of a deterministic part r_\mu and a stochastic noise part N_\mu,
S_\mu = r_\mu + N_\mu     (A.62)
This form is assumed to be valid for short time spans, and during that time span
the four signal parameters a, τ, ω, and φ are assumed to be constant. The sampled
noise can be assumed to be white and is then modeled as
\langle N_\mu \bar{N}_\nu \rangle = 2\delta_{\mu,\nu}     (A.64)
In case the input signal samples are complex-valued, the total signal power Psig
is given by
Psig = a2 (A.65)
The total noise power Pnoise for a complex-valued input signal is two, as can be
seen from (A.64), and the sample rate equals the (dual-sided) noise bandwidth B.
The (dual-sided) noise power spectral density N0 is thus
N_0 = \frac{P_{noise}}{B} = \frac{2}{f_s}     (A.66)
and the C/N0 value is
C/N_0 = \frac{P_{sig}}{N_0} = \frac{f_s a^2}{2}     (A.67)
a^2 = \frac{2\,C/N_0}{f_s} = \frac{2\,C/N_0}{B}, \qquad a f_s = \sqrt{2\,(C/N_0)\,B}     (A.68)
Sµ = r µ + N µ (A.69)
The real-valued signal samples share the same signal model as the complex
samples and their relation is expressed as
\langle N_\mu N_\nu \rangle = \delta_{\mu,\nu}     (A.72)
In case the input signal samples are real-valued, the total signal power Psig is
given by
P_{sig} = \frac{a^2}{2}     (A.73)
The (single-sided) noise bandwidth B is given by half of the sample rate fs (Ny-
quist criterion). Note that, for a real-valued signal, only positive frequency values of its
spectrum are considered. The complex-conjugated, but otherwise symmetric, part
of the spectrum located at negative frequencies is irrelevant. Because we assume that
the real part of the noise samples has unit variance, the total noise power Pnoise for a
real-valued input signal is one, as can be seen from (A.72). The (single-sided) noise
power spectral density N0 is
N_0 = \frac{P_{noise}}{B} = \frac{2}{f_s}     (A.74)
The C/N0 value is defined as the ratio between received signal power and noise
spectral density
C/N_0 = \frac{P_{sig}}{N_0} = \frac{f_s a^2}{4}     (A.75)
or equivalently the following expressions are occasionally helpful
a^2 = \frac{4\,C/N_0}{f_s}, \qquad \frac{a f_s}{2} = \sqrt{2\,(C/N_0)\,B}     (A.76)
signal. For the purpose of navigation signal processing, this can be ignored as long
as the main lobes of the navigation signal are not located at that frequency. In that
case, a possible conversion would, more or less, only affect the noise part of the
signal, which can be ignored.
Neither of the conversions changes a or N_0.
This appendix outlines a proof that the cross-correlation function of two wide-
sense stationary stochastic processes is invariant under the sampling process. It
implies that the sampling frequency does not influence the shape of the cross-
correlation function. Aliasing effects average out because of the wide-sense station-
arity property.
Let S(t) and R(t) be two complex-valued continuous stochastic processes. Both
processes are wide-sense stationary. This implies that first- and second-order mo-
ments are constant in time [4].
The continuous-time cross-correlation function is defined as the ensemble average

$\bigl\langle S(t-\tau)\,R(t)\bigr\rangle_{S,R} = R_{S,R}(\tau)$  (A.77)

The sampled values of the processes are denoted as

$R_\mu = R(t_\mu)$  (A.78)

and correspondingly for $S_\mu$. If the processes are ergodic, the time average of the continuous-time product equals the ensemble average,

$\displaystyle \frac{1}{2T}\int_{t=-T}^{T} S(t-\tau)\,R(t)\,dt \;=\; \frac{1}{2T}\int_{t=-T}^{T}\bigl\langle S(t-\tau)\,R(t)\bigr\rangle_{S,R}\,dt \;=\; R_{S,R}(\tau)$  (A.80)

and the same holds for the time average formed from the samples,

$\displaystyle \frac{1}{2L+1}\sum_{\mu=-L}^{L} S(t_\mu-\tau_\nu)\,R(t_\mu) \;=\; \frac{1}{2L+1}\sum_{\mu=-L}^{L}\bigl\langle S(t_\mu-\tau_\nu)\,R(t_\mu)\bigr\rangle_{S,R} \;=\; R_{S,R}(\tau_\nu)$  (A.81)

provided that the sampling is ergodic for both processes. Ergodic sampling does not necessarily imply that the sample rate is larger than the Nyquist rate.
An obvious question arises: why do aliasing effects, which occur if sub-Nyquist sampling is used, not influence this result? The following example shows that aliasing effects do exist, but that they average out when the correlation function is formed.
First, the Fourier transform of a stochastic process is given as

$\displaystyle \tilde{R}(\omega) = \frac{1}{\sqrt{2\pi}}\int_{t=-\infty}^{\infty} R(t)\exp\{-i\omega t\}\,dt$  (A.82)

For a specific angular frequency $\omega$, the quantity $\tilde{R}(\omega)$ is a complex-valued random variable.
For two wide-sense stationary stochastic processes, the correlation between their Fourier transforms is

$\displaystyle \begin{aligned}
\bigl\langle\tilde{S}(\omega)\,\tilde{R}(\omega')\bigr\rangle_{S,R} &= \frac{1}{2\pi}\int_{t,t'=-\infty}^{\infty}\bigl\langle S(t)\,R(t')\bigr\rangle_{S,R}\exp\{-i\omega t - i\omega' t'\}\,dt\,dt' \\
&= \frac{1}{2\pi}\int_{t,t'=-\infty}^{\infty}\bigl\langle S(t'-t)\,R(t')\bigr\rangle_{S,R}\exp\{-i\omega(t'-t) - i\omega' t'\}\,dt\,dt' \\
&= \frac{1}{2\pi}\int_{t,t'=-\infty}^{\infty} R_{S,R}(t)\exp\{-i\omega(t'-t) - i\omega' t'\}\,dt\,dt' \\
&= \frac{1}{2\pi}\int_{t,t'=-\infty}^{\infty} R_{S,R}(t)\exp\{-i t'(\omega+\omega') + i\omega t\}\,dt\,dt'
\end{aligned}$  (A.83)

Carrying out the integration over $t'$, the identity

$\displaystyle \delta(\omega+\omega') = \frac{1}{2\pi}\int_{t'=-\infty}^{\infty}\exp\{-i t'(\omega+\omega')\}\,dt'$  (A.84)

has been used to represent Dirac's delta function. Overall, the correlation between the two Fourier transforms vanishes unless the two frequencies add up to zero.
If only discrete-time values $\tau_\nu$ of the cross-correlation function are considered,

$\tau_\nu = \dfrac{\nu}{f_s}$  (A.85)

then the expected value of the time-averaged continuous-time correlation function for the shift index $\nu$ is given by

$\displaystyle \begin{aligned}
&\frac{1}{2\pi}\int_{t=-\infty}^{\infty}\int_{\omega,\omega'=-\infty}^{\infty}\bigl\langle\tilde{S}(\omega)\,\tilde{R}(\omega')\bigr\rangle_{S,R}\exp\{i\omega(t-\tau_\nu) + i\omega' t\}\,d\omega\,d\omega'\,dt \\
&\quad= \int_{\omega,\omega'=-\infty}^{\infty}\bigl\langle\tilde{S}(\omega)\,\tilde{R}(\omega')\bigr\rangle_{S,R}\exp\{-i\omega\tau_\nu\}\,\delta(\omega+\omega')\,d\omega\,d\omega' \\
&\quad= \int_{\omega=-\infty}^{\infty}\bigl\langle\tilde{S}(\omega)\,\tilde{R}(-\omega)\bigr\rangle_{S,R}\exp\{-i\omega\tau_\nu\}\,d\omega \\
&\quad= \sum_{m=-\infty}^{\infty}\int_{\tilde{\omega}=-\pi f_s}^{\pi f_s}\bigl\langle\tilde{S}(\tilde{\omega}+2\pi m f_s)\,\tilde{R}(-\tilde{\omega}-2\pi m f_s)\bigr\rangle_{S,R}\exp\{-i\tilde{\omega}\tau_\nu\}\,d\tilde{\omega}
\end{aligned}$  (A.86)

Here, the auxiliary angular frequency $\tilde{\omega}$ has been introduced via

$\omega = \tilde{\omega} + 2\pi m f_s$  (A.87)

The index $m$ is used to count the continuous-time frequencies $\omega$, which are all aliased after sampling onto the same discrete-time frequency $\tilde{\omega}$.
On the other hand, the expected value of the time-averaged discrete-time correlation function is given by

$\displaystyle \begin{aligned}
&\sum_{\mu=-\infty}^{\infty}\bigl\langle S(t_\mu-\tau_\nu)\,R(t_\mu)\bigr\rangle_{S,R} \\
&\quad= \frac{1}{2\pi}\sum_{\mu=-\infty}^{\infty}\int_{\omega,\omega'=-\infty}^{\infty}\bigl\langle\tilde{S}(\omega)\,\tilde{R}(\omega')\bigr\rangle_{S,R}\exp\{i\omega(t_\mu-\tau_\nu) + i\omega' t_\mu\}\,d\omega\,d\omega' \\
&\quad= \frac{1}{2\pi}\int_{\omega,\omega'=-\infty}^{\infty}\bigl\langle\tilde{S}(\omega)\,\tilde{R}(\omega')\bigr\rangle_{S,R}\exp\{-i\omega\tau_\nu\}\sum_{\mu=-\infty}^{\infty}\exp\{i\omega t_\mu + i\omega' t_\mu\}\,d\omega\,d\omega' \\
&\quad= \int_{\omega,\omega'=-\infty}^{\infty}\bigl\langle\tilde{S}(\omega)\,\tilde{R}(\omega')\bigr\rangle_{S,R}\exp\{-i\omega\tau_\nu\}\sum_{k=-\infty}^{\infty}\delta\bigl(\omega'-(2\pi k f_s-\omega)\bigr)\,d\omega\,d\omega' \\
&\quad= \sum_{k=-\infty}^{\infty}\int_{\omega=-\infty}^{\infty}\bigl\langle\tilde{S}(\omega)\,\tilde{R}(2\pi k f_s-\omega)\bigr\rangle_{S,R}\exp\{-i\omega\tau_\nu\}\,d\omega
\end{aligned}$  (A.88)

Splitting the frequency axis into bands of width $2\pi f_s$, that is, substituting $\omega = \tilde{\omega} + 2\pi m f_s$ as in (A.87), yields

$\displaystyle \sum_{\mu=-\infty}^{\infty}\bigl\langle S(t_\mu-\tau_\nu)\,R(t_\mu)\bigr\rangle_{S,R} = \sum_{m,k=-\infty}^{\infty}\int_{\tilde{\omega}=-\pi f_s}^{\pi f_s}\bigl\langle\tilde{S}(\tilde{\omega}+2\pi m f_s)\,\tilde{R}(-2\pi k f_s-\tilde{\omega})\bigr\rangle_{S,R}\exp\{-i\tilde{\omega}\tau_\nu\}\,d\tilde{\omega}$  (A.90)

Comparison with (A.86) shows that the additional, aliasing-induced contributions are the terms with $m \neq k$. Their ensemble average vanishes,

$\bigl\langle\tilde{S}(\tilde{\omega}+2\pi m f_s)\,\tilde{R}(-2\pi k f_s-\tilde{\omega})\bigr\rangle_{S,R} = 0 \quad\text{for } m \neq k$  (A.91)

because the two frequency arguments do not add up to zero; this is a direct consequence of (A.83).
Overall, the frequency-domain picture of aliasing effects on the cross-correlation function can be described as follows: aliasing effects occur during sampling and cause correlation between frequency components that do not correlate in the continuous-time case. However, the ensemble average of these aliasing contributions is zero, because wide-sense stationary processes have a diagonal covariance matrix in the frequency domain.
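The statement can be illustrated numerically (a sketch, not from the book): a wide-sense stationary process is correlated at a fixed physical lag once from densely spaced samples and once from heavily decimated samples whose rate clearly violates the Nyquist criterion. Each decimated estimate is noisier, but the ensemble averages agree.

# Sketch: the estimated correlation at a common lag agrees whether it is formed
# from densely spaced or heavily decimated samples of a WSS process, even
# though the decimated sample rate aliases the process spectrum.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
D, lag, trials = 7, 21, 500          # decimation factor, lag in dense samples
acc_dense = acc_dec = 0.0

for _ in range(trials):
    x = lfilter([1.0], [1.0, -0.9], rng.normal(size=6000))   # AR(1) process
    x = x[1000:]                                             # discard transient
    acc_dense += np.mean(x[:-lag] * x[lag:])                 # dense-sample estimate
    xd = x[::D]                                              # sub-Nyquist decimation
    acc_dec += np.mean(xd[:-lag // D] * xd[lag // D:])       # same physical lag

print(acc_dense / trials, acc_dec / trials)   # both ~ 0.9**21 / (1 - 0.81)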
A.4 Useful Formulas
This appendix summarizes some useful formulas for the reader's convenience.
A.4.1.1 Continuous-Time
Let $x(t)$ be a complex-valued function of the time $t$ in seconds. It is related to its angular-frequency Fourier transform $\tilde{x}(\omega)$, with $\omega$ in [rad/s], via

$\displaystyle x(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\tilde{x}(\omega)\exp\{i\omega t\}\,d\omega$  (A.92)

and

$\displaystyle \tilde{x}(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}x(t)\exp\{-i\omega t\}\,dt$  (A.93)
The Fourier transform with respect to the frequency $f$ in [Hz] is

$\displaystyle \tilde{x}(f) = \int_{-\infty}^{\infty}x(t)\exp\{-2\pi i f t\}\,dt$  (A.96)

The Fourier transform $\tilde{x}(f)$ (unit [1/Hz]) is related to the angular-frequency Fourier transform $\tilde{x}(\omega)$ via

$\tilde{x}(\omega) = 2\pi\,\tilde{x}(f)$  (A.98)

For a sampled signal $x_\mu$, the discrete-time Fourier transform is

$\displaystyle \tilde{x}(f) = \sum_{\mu=-\infty}^{\infty}x_\mu\exp\left\{-\frac{2\pi i f}{f_s}\,\mu\right\}, \qquad -\frac{f_s}{2} < f < \frac{f_s}{2}$  (A.100)
The discrete-time counterparts of Dirac's delta function, the Kronecker delta and the Dirac comb, are expressed as

$\displaystyle \frac{1}{f_s}\int_{f=-f_s/2}^{f_s/2}\exp\left\{\frac{2\pi i f}{f_s}\,\mu\right\}df = \delta_{\mu,0}, \qquad \sum_{\mu=-\infty}^{\infty}\exp\left\{-\frac{2\pi i f}{f_s}\,\mu\right\} = f_s\sum_{k=-\infty}^{\infty}\delta(f-k f_s)$  (A.101)
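The first identity in (A.101) can be checked by direct numerical integration (a small illustrative sketch):

# Sketch: numerical check of (1/fs) * integral over [-fs/2, fs/2] of
# exp(2*pi*i*f*mu/fs) df = delta(mu, 0), cf. the first identity in (A.101).
import numpy as np

fs = 1.0
f = np.linspace(-fs / 2, fs / 2, 20001)
df = f[1] - f[0]
for mu in range(-3, 4):
    val = np.sum(np.exp(2j * np.pi * f * mu / fs)) * df / fs
    print(mu, np.round(val.real, 4))    # ~1 for mu = 0, ~0 otherwise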
The discrete Fourier transform (DFT) pair for a finite set of samples reads

$\displaystyle x_\mu = \frac{1}{L}\sum_{k=0}^{L-1}\tilde{x}_k\exp\left\{2\pi i\,\frac{\mu k}{L}\right\}$  (A.102)

and

$\displaystyle \tilde{x}_k = \sum_{\mu=0}^{L-1}x_\mu\exp\left\{-2\pi i\,\frac{\mu k}{L}\right\}$  (A.103)

where $L$ is the number of samples involved in the summation.
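The pair (A.102)/(A.103) uses the same sign and normalization convention as numpy's FFT (no factor on the forward transform, $1/L$ on the inverse), which the following sketch confirms:

# Sketch: (A.102)/(A.103) written out directly and compared against numpy's FFT.
import numpy as np

rng = np.random.default_rng(3)
L = 16
x = rng.normal(size=L) + 1j * rng.normal(size=L)

k = np.arange(L)
mu = np.arange(L)
W = np.exp(-2j * np.pi * np.outer(mu, k) / L)   # W[mu, k] = exp(-2*pi*i*mu*k/L)

xk = W.T @ x                   # (A.103): forward sum over mu
x_back = (W.conj() @ xk) / L   # (A.102): inverse sum over k, divided by L

print(np.allclose(xk, np.fft.fft(x)), np.allclose(x_back, x))   # True True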
The specific values of the epochs $t_\mu$ are context-specific, but they are usually equally spaced. The range of possible $\tau$ values is much smaller than the time span covered by the $t_\mu$,

$\tau \ll t_L - t_1$  (A.106)
$\displaystyle R_{y,x}(\tau) \approx \frac{1}{L}\sum_{\mu=1}^{L}y(t_\mu-\tau)\,x(t_\mu) \approx \frac{1}{L}\sum_{\mu=1}^{L}x(t_\mu+\tau)\,y(t_\mu) \approx R_{x,y}(-\tau)$  (A.107)

If $\dot{x}(t)$ and $\dot{y}(t)$ denote the first derivatives of $x(t)$ and $y(t)$, then the correlation functions involving derivatives are approximated as

$\displaystyle \begin{aligned}
R_{\dot{x},y}(\tau) &\approx \frac{1}{L}\sum_{\mu=1}^{L}\dot{x}(t_\mu-\tau)\,y(t_\mu) = -\frac{\partial}{\partial\tau}\,\frac{1}{L}\sum_{\mu=1}^{L}x(t_\mu-\tau)\,y(t_\mu) \approx -\dot{R}_{x,y}(\tau) \\
R_{x,\dot{y}}(\tau) &\approx \frac{1}{L}\sum_{\mu=1}^{L}x(t_\mu-\tau)\,\dot{y}(t_\mu) \approx \frac{\partial}{\partial\tau}\,\frac{1}{L}\sum_{\mu=1}^{L}x(t_\mu)\,y(t_\mu+\tau) \approx \dot{R}_{x,y}(\tau)
\end{aligned}$  (A.108)

where $\dot{R}_{x,y}(\tau) = \partial R_{x,y}(\tau)/\partial\tau$.
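The first relation in (A.108) can be exercised numerically with smooth test signals (a sketch; the test functions are arbitrary and not from the book):

# Sketch: check R_{xdot,y}(tau) ~ -d/dtau R_{x,y}(tau), cf. (A.108), using
# analytic test signals and a central finite difference in tau.
import numpy as np

t = np.linspace(0.0, 20.0, 20001)
x = lambda t: np.sin(2 * np.pi * 0.3 * t)
xdot = lambda t: 2 * np.pi * 0.3 * np.cos(2 * np.pi * 0.3 * t)
y = lambda t: np.cos(2 * np.pi * 0.3 * t + 0.4)

R = lambda tau: np.mean(x(t - tau) * y(t))            # R_{x,y}(tau)
R_dx_y = lambda tau: np.mean(xdot(t - tau) * y(t))    # R_{xdot,y}(tau)

tau, h = 0.7, 1e-4
lhs = R_dx_y(tau)
rhs = -(R(tau + h) - R(tau - h)) / (2 * h)            # -dR/dtau
print(lhs, rhs)                                       # agree to high accuracy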
Similarly, for two lags $\tau_1$ and $\tau_2$ one obtains

$\displaystyle \begin{aligned}
\frac{1}{L}\sum_{\mu=1}^{L}x(t_\mu-\tau_1)\,y(t_\mu-\tau_2) &\approx R_{x,y}(\tau_1-\tau_2) \\
\frac{1}{L}\sum_{\mu=1}^{L}\dot{x}(t_\mu-\tau_1)\,y(t_\mu-\tau_2) &\approx R_{\dot{x},y}(\tau_1-\tau_2) \approx -\dot{R}_{x,y}(\tau_1-\tau_2) \\
\frac{1}{L}\sum_{\mu=1}^{L}x(t_\mu-\tau_1)\,\dot{y}(t_\mu-\tau_2) &\approx R_{x,\dot{y}}(\tau_1-\tau_2) \approx \dot{R}_{x,y}(\tau_1-\tau_2)
\end{aligned}$  (A.110)
$\displaystyle R_{\dot{x},\dot{x}}(\tau) \approx \frac{1}{L}\sum_{\mu=1}^{L}\dot{x}(t_\mu-\tau)\,\dot{x}(t_\mu) \approx \frac{1}{L}\sum_{\mu=1}^{L}\dot{x}(t_\mu)\,\dot{x}(t_\mu+\tau) \approx R_{\dot{x},\dot{x}}(-\tau), \qquad \operatorname{Im}\{R_{\dot{x},x}(0)\} \approx 0$  (A.111)

$\displaystyle R_{x,\ddot{x}}(\tau) = \frac{\partial^2}{\partial\tau^2}R_{x,x}(\tau), \qquad R_{\ddot{x},x}(-\tau) = \frac{\partial^2}{\partial\tau^2}R_{x,x}(-\tau), \qquad \operatorname{Im}\{R_{x,\dot{x}}(0)\} \approx 0$  (A.113)
The correlation with an auxiliary function $h(t)$ is approximated as

$\displaystyle \begin{aligned}
R_{x,y}(\tau;h) &\approx \frac{1}{L}\sum_{\mu=1}^{L}x(t_\mu-\tau)\,y(t_\mu)\,h(t_\mu) = \frac{R_{x,y}(\tau)}{L}\sum_{\mu=1}^{L}\frac{x(t_\mu-\tau)\,y(t_\mu)}{R_{x,y}(\tau)}\,h(t_\mu) \\
&\approx \frac{R_{x,y}(\tau)}{L}\sum_{\mu=1}^{L}\frac{x(t_\mu)\,y(t_\mu)}{R_{x,y}(0)}\,h(t_\mu) \approx R_{x,y}(\tau)\,\frac{1}{t_L-t_0}\int_{t=t_0}^{t_L}W(t)\,h(t)\,dt
\end{aligned}$  (A.114)

with

$\displaystyle W(t) = \frac{x(t)\,y(t)}{R_{x,y}(0)}$  (A.115)

This formula is based on the assumption that the product of the two signals fulfills

$\displaystyle \int_{t=t_0}^{t_L}\bigl(x(t-\tau)\,y(t)\,R_{x,y}(0) - x(t)\,y(t)\,R_{x,y}(\tau)\bigr)\,h(t)\,dt \approx 0$  (A.116)

sufficiently well. Because this is a nontrivial assumption for the functions $x$, $y$, and $h$, its validity has to be verified explicitly.
The function $W(t)$ identifies where the cross-correlation energy occurs in time. If $x(t)$ and $y(t)$ are realizations of two wide-sense stationary processes, then $W(t)$ is well approximated by one.
The formula for correlation with an auxiliary function is summarized as

$\displaystyle R_{x,y}(\tau;h) \approx R_{x,y}(\tau)\,\frac{R_{x,y}(0;h)}{R_{x,y}(0)}$  (A.117)

and indicates the required independence of the auxiliary function from the correlation function.
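Equation (A.117) can be exercised numerically for a stationary test case (a sketch; the processes and the slowly varying weighting function are arbitrary choices, not from the book):

# Sketch: check R_{x,y}(tau; h) ~ R_{x,y}(tau) * R_{x,y}(0; h) / R_{x,y}(0),
# cf. (A.117), for realizations of WSS processes and a slow auxiliary function.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
L = 200000
x = lfilter([1.0], [1.0, -0.8], rng.normal(size=L))           # common WSS process
y = x + 0.1 * rng.normal(size=L)                              # correlated with x
h = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(L) / 40000.0)    # slow weighting

lag = 3
Rxy = lambda k, w: np.mean(x[: L - k] * y[k:] * w[k:])        # weighted correlation
lhs = Rxy(lag, h)
rhs = Rxy(lag, np.ones(L)) * Rxy(0, h) / Rxy(0, np.ones(L))
print(lhs, rhs)                                               # approximately equal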
For $x(t)$ and $y(t)$ being realizations of wide-sense stationary processes (i.e., $W(t) \approx 1$), the function $\kappa(\omega)$ is well approximated as

$\displaystyle \kappa(\omega) \approx \frac{i\,\bigl(e^{i\omega t_0} - e^{i\omega t_L}\bigr)}{(t_L - t_0)\,\omega}$  (A.120)

For a symmetrically chosen summation interval of duration $T$,

$t_L = -t_0 = T/2$  (A.121)

and $x(t)$ and $y(t)$ being realizations of wide-sense stationary processes, the function $\kappa(\omega)$ is

$\displaystyle \kappa(\omega) \approx \operatorname{sinc}\!\left(\frac{\omega T}{2}\right)$  (A.122)

with $\operatorname{sinc}(x) = \sin(x)/x$.
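For $W(t) \approx 1$, (A.120) is simply the time average of $\exp\{i\omega t\}$ over the summation interval; the symmetric-interval form (A.122) can be confirmed numerically (a sketch with illustrative numbers):

# Sketch: the time average of exp(i*omega*t) over a symmetric interval of
# length T equals sin(omega*T/2)/(omega*T/2), cf. (A.120)-(A.122).
import numpy as np

T = 0.02                                   # averaging interval [s]
t = np.linspace(-T / 2, T / 2, 100001)
for f in (10.0, 55.0, 130.0):              # test frequencies [Hz]
    omega = 2 * np.pi * f
    kappa = np.mean(np.exp(1j * omega * t))          # numerical time average
    analytic = np.sinc(omega * T / 2 / np.pi)        # np.sinc(x) = sin(pi x)/(pi x)
    print(f, np.round(kappa.real, 4), np.round(analytic, 4))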
If $x$ and $y$ equal the Gaussian pulse sequence defined in (1.44), then the correlation energy function is

$\displaystyle W(t) \approx \frac{c_0^2\left[\exp\left\{-\frac{(t-a-d)^2}{2b^2}\right\} + \exp\left\{-\frac{(t-a+d)^2}{2b^2}\right\}\right]^2}{R_{c,c}(0)} \approx \frac{\exp\left\{-\frac{(t-a-d)^2}{b^2}\right\} + \exp\left\{-\frac{(t-a+d)^2}{b^2}\right\}}{2\sqrt{\pi}\,b}$  (A.123)

under the parameter assumptions of Section 1.9.4. For the Gaussian pulse sequence, the function $\kappa(\omega)$ is approximated as

$\displaystyle \kappa(\omega) \approx \exp\{i\omega a\}\exp\left\{-\frac{b^2\omega^2}{4}\right\}\cos(d\omega)$  (A.124)
Waveforms and spectra of many modernized continuous-time GNSS signals can be found in the thesis by Ávila Rodríguez [5].
The discrete sum yielding the correlation function is approximated by an integral as

$\displaystyle R_{x,y}(\tau) \approx \frac{1}{L}\sum_{\mu=1}^{L}x(t_\mu-\tau)\,y(t_\mu) = \frac{1}{L\,\Delta t}\sum_{\mu=1}^{L}x(t_\mu-\tau)\,y(t_\mu)\,\Delta t \approx \frac{1}{t_L-t_1}\int_{t=t_1}^{t_L}x(t-\tau)\,y(t)\,dt$  (A.125)

$\displaystyle \begin{aligned}
R_{x,y}(\tau) &\approx \lim_{T\to\infty}\frac{1}{2T}\int_{t=-T}^{T}x(t-\tau)\,y(t)\,dt \\
&= \lim_{T\to\infty}\frac{1}{4\pi T}\int_{t=-T}^{T}\int_{\omega,\omega'=-\infty}^{\infty}\tilde{x}(\omega)\,\tilde{y}(\omega')\exp\{i\omega(t-\tau) + i\omega' t\}\,d\omega\,d\omega'\,dt \\
&= \lim_{T\to\infty}\frac{1}{4\pi T}\int_{\omega,\omega'=-\infty}^{\infty}\tilde{x}(\omega)\,\tilde{y}(\omega')\exp\{-i\omega\tau\}\int_{t=-T}^{T}\exp\{it(\omega+\omega')\}\,dt\,d\omega\,d\omega'
\end{aligned}$  (A.126)

Exchanging the limit and the integration yields

$\displaystyle \lim_{T\to\infty}\frac{1}{2T}\int_{t=-T}^{T}\exp\{it(\omega+\omega')\}\,dt = \begin{cases}1 & \omega' = -\omega\\ 0 & \omega' \neq -\omega\end{cases}$  (A.127)

so that

$\displaystyle R_{x,y}(\tau) \approx \frac{1}{2\pi}\int_{\omega=-\infty}^{\infty}\tilde{x}(-\omega)\,\tilde{y}(\omega)\exp\{i\omega\tau\}\,d\omega$  (A.128)
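In discrete form, (A.128) is the familiar statement that a correlation in the time domain corresponds to a product of spectra; the following sketch illustrates the analogue with the DFT, assuming periodic boundary conditions purely for the illustration:

# Sketch: discrete analogue of (A.128). A circular cross-correlation computed
# directly equals the inverse DFT of Xtilde(-omega)*Ytilde(omega).
import numpy as np

rng = np.random.default_rng(5)
L = 128
x = rng.normal(size=L)
y = rng.normal(size=L)

# direct circular correlation R(k) = (1/L) * sum_mu x(mu - k) * y(mu)
R_direct = np.array([np.mean(np.roll(x, k) * y) for k in range(L)])

# frequency-domain version: xtilde(-omega) corresponds to X[(-m) mod L]
X, Y = np.fft.fft(x), np.fft.fft(y)
R_fft = np.fft.ifft(X[(-np.arange(L)) % L] * Y).real / L

print(np.allclose(R_direct, R_fft))   # True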
The right-tail probability of a Gaussian distribution with mean $\mu$ and variance $\sigma^2$ can be approximated for large arguments as

$\displaystyle Q_{N;\mu,\sigma^2}(x) = \int_x^{\infty}p_{N;\mu,\sigma^2}(t)\,dt \approx \frac{\sigma}{(x-\mu)\sqrt{2\pi}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}$  (A.133)
The right-tail probability of a (central) chi-squared distribution with $\nu$ degrees of freedom is

$\displaystyle Q_{\chi^2;\nu}(x) = \begin{cases}
2Q(\sqrt{x}) & \nu = 1\\[4pt]
\exp(-x/2)\displaystyle\sum_{k=0}^{\nu/2-1}\frac{(x/2)^k}{k!} & \nu \text{ even}\\[4pt]
2Q(\sqrt{x}) + \dfrac{\exp(-x/2)}{\sqrt{\pi}}\displaystyle\sum_{k=1}^{(\nu-1)/2}\frac{(k-1)!\,(2x)^{k-1/2}}{(2k-1)!} & \nu \text{ odd},\ \nu \geq 3
\end{cases}$  (A.138)

where $Q(x) = Q_{N;0,1}(x)$ denotes the right-tail probability of the standard normal distribution.
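A quick numerical check of (A.138) against scipy's chi-squared survival function (sketch):

# Sketch: evaluate (A.138) for even and odd degrees of freedom and compare
# with scipy.stats.chi2.sf.
import numpy as np
from math import factorial
from scipy.stats import chi2, norm

def Q_chi2(x, nu):
    if nu == 1:
        return 2 * norm.sf(np.sqrt(x))
    if nu % 2 == 0:
        return np.exp(-x / 2) * sum((x / 2) ** k / factorial(k) for k in range(nu // 2))
    s = sum(factorial(k - 1) * (2 * x) ** (k - 0.5) / factorial(2 * k - 1)
            for k in range(1, (nu - 1) // 2 + 1))
    return 2 * norm.sf(np.sqrt(x)) + np.exp(-x / 2) / np.sqrt(np.pi) * s

x = 8.5
for nu in (1, 2, 3, 6, 7):
    print(nu, Q_chi2(x, nu), chi2.sf(x, nu))   # pairs agree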
The sum of the squares of $\nu$ independent Gaussian random variables with unit variance and means $\mu_k$ has a noncentral chi-squared distribution with $\nu$ degrees of freedom and the noncentrality parameter $\lambda$ defined as

$\displaystyle \lambda = \sum_{k=1}^{\nu}\mu_k^2$  (A.140)

Its probability density function is

$\displaystyle p_{\chi^2;\nu,\lambda}(x) = \frac{1}{2}\left(\frac{x}{\lambda}\right)^{(\nu-2)/4}\exp\left\{-\frac{1}{2}(x+\lambda)\right\}I_{\nu/2-1}\bigl(\sqrt{\lambda x}\bigr)$  (A.141)

for $x \geq 0$, and it vanishes for negative values of $x$. The symbol $I_r$ denotes the modified Bessel function of the first kind and order $r$. The mean and variance are

$\displaystyle \langle x\rangle = \nu + \lambda, \qquad \operatorname{var} x = \bigl\langle(x-\langle x\rangle)^2\bigr\rangle = 2\nu + 4\lambda$  (A.142)

For large $\nu$ or $\lambda$, the right-tail probability is approximated by the Gaussian expression

$\displaystyle Q_{\chi^2;\nu,\lambda}(x) \approx Q\left(\frac{x-\nu-\lambda}{\sqrt{2\nu+4\lambda}}\right)$  (A.143)
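The Gaussian approximation (A.143) can be compared against the exact noncentral chi-squared tail from scipy (a sketch with illustrative parameters):

# Sketch: Gaussian approximation (A.143) of the noncentral chi-squared tail
# probability versus the exact value from scipy.stats.ncx2.
import numpy as np
from scipy.stats import ncx2, norm

nu, lam = 4, 30.0                  # degrees of freedom, noncentrality
for x in (20.0, 34.0, 50.0):
    exact = ncx2.sf(x, nu, lam)
    approx = norm.sf((x - nu - lam) / np.sqrt(2 * nu + 4 * lam))
    print(x, round(exact, 4), round(approx, 4))   # approximation is reasonable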
The sum of the squared magnitudes of $\nu$ independent complex-valued Gaussian random variables with means $\mu_k$, whose real and imaginary parts each have unit variance, has a noncentral chi-squared distribution with $2\nu$ degrees of freedom and the noncentrality parameter $\lambda$ given as

$\displaystyle \lambda = \sum_{k=1}^{\nu}\left|\mu_k\right|^2$  (A.145)

The same formulas for the density function and moments as in Section A.3.6.3 apply.
References
[1] Leick, A., GPS Satellite Surveying, New York: Wiley, 2004.
[2] Kay, S. M., Fundamentals of Statistical Signal Processing: Estimation Theory, Englewood Cliffs, NJ: Prentice-Hall, 1993.
[3] Diniz, P. S. R., E. A. B. da Silva, and S. L. Netto, Digital Signal Processing: System Analysis and Design, Cambridge, U.K.: Cambridge University Press, 2002.
[4] Porat, B., Digital Processing of Random Signals: Theory & Methods, Englewood Cliffs, NJ: Prentice-Hall, 1994.
[5] Ávila Rodríguez, J. Á., On Generalized Signal Waveforms for Satellite Navigation, Ph.D. thesis, University of Federal Armed Forces Munich, Neubiberg, Germany, http://www.unibw.de/unibib/digibib/ediss/bauv, 2008.
[6] Kay, S. M., Fundamentals of Statistical Signal Processing: Detection Theory, Englewood Cliffs, NJ: Prentice-Hall, 1998.
Abbreviations
OS Open Service
OS Operating System
PaC-SDR Parameter-Controlled SDR
PC Personal Computer
PCI Peripheral Component Interconnect
PDA Personal Digital Assistant
PL Pseudolite
PLD Programmable Logic Device
PLL Phase Lock Loop
PND Personal Navigation Device
POSIX Portable Operating System Interface
PPE Power Processor Element
PRN Pseudorandom noise
PSD Power Spectral Density
PVT Position, Velocity and Time
QPSK Quadrature Phase Shift Keying
R&D Research and Development
RF Radio Frequency
RINEX Receiver INdependent EXchange Format
RMS Root Mean Square
RTCM Radio Technical Commission for Maritime services
RTK Real-Time Kinematic
SATCOM SATellite COMmunications
SCA Software Communications Architecture
SDR Software-Defined Radio
SI Système International d’Unités
SINCGARS SINgle Channel Ground and Airborne Radio System
SIS Signal In Space
SISNET Signal-In-Space NETwork
SMA Sub-Miniature-A
SNR Signal-to-Noise Ratio
SPE Synergistic-Processor Element
SPI Serial Peripheral Interface
SPS Standard Positioning Service
SRW Soldier Radio Waveform
SSP Synchronous Serial Port
TCP Transmission Control Protocol
TCRLB True CRLB
TCXO Temperature-Compensated Crystal Oscillator
TDMA Time Division Multiple Access
TOA Time of Arrival
TTFF Time to First Fix
UHF Ultra-High Frequency
UMP Uniformly Most Powerful
UMPC Ultra-Mobile PC
UMPUT UMP Unbiased Test
UMTS Universal Mobile Telecommunications System
List of Symbols
The following conventions are generally used for mathematical symbols: lowercase variables denote deterministic quantities (e.g., r, s) and uppercase symbols denote random variables (e.g., R, S). Estimates of deterministic but unknown quantities like τ̂, p̂ are themselves random variables but retain lowercase symbols. A bold symbol like n denotes a vector having the elements nµ. Matrices use italic uppercase letters, like A or I.
Unless otherwise noted, SI units are used. Code phases, delays, or pseudoranges are expressed in [s], distances in [m], angles and carrier phases in [rad], frequencies in [Hz], and angular frequencies in [rad/s].
Symbol Description
n Noise (part of received signal samples)
p Low rate pseudorange parameters
q High rate pseudorange parameters
r Deterministic signal (part of received signal samples)
s Received signal samples
x Position parameters
ξ Nuisance parameters
a Signal amplitude
c Speed of light [299,792,458 m/s]
c(t) Baseband representation of the navigation signal
C/N0 Carrier to noise power density ratio [Hz]
fs Sample rate in [Samples/s]
i Imaginary unit
I Fisher information matrix
L Number of samples used for coherent integration
L(s) Likelihood ratio
N(q, I) Normally distributed random variables, with mean q and covariance
matrix I
P(X > γ) Probability that the random variable X is larger than γ
p(n) Probability density function for the random variables N
Rx,y(τ) Cross-correlation function of the signals x and y
tµ Time of reception for sample sµ in [s]
Tcoh Coherent integration time in [s]
τ Code phase in [s]
Operations
Symbol Description
$\langle f(N)\rangle_N = \int f(n)\,d\mu(n) = \int f(n)\,p(n)\prod_\mu dn_\mu$  Expected value of the function f with respect to the random variables N with the probability density function p(n)
⇔  Equivalence
About the Author
Index

O
Observation model, 313
Open Service of Galileo, 13
Optimal estimator, 118
Overlapping pulses, 302
Oversampling, 181

P
P(Y)-code of GPS, 197, 200, 204
Parameter-controlled SDR, 17
Particle filter, 113, 238
PC clock, 35, 41
P-correlator, 10, 108, 110, 116, 192, 217, 221, 223
Penalty factor, 156
People tracking, 280
Personal navigation device, 25

Q
Quadrature phase shift keying, 149
Quantization, 163, 173

R
Radar, 5
Radio navigation, 1
Random sample access, 38
Real valued signal, 189, 321
Real time, 34, 37
Real-time kinematic positioning, 277, 304
Receiver autonomous integrity monitoring, 43
Reconfigurable ASIC, 18
Recovery time, 175
Reference station, 29, 33, 50, 279, 289
Reflectometry, 27
Relative signal-to-noise ratio, 166, 169

V
Variance of least-squares estimates, 79
Vector instructions, 249
Vector tracking, 95, 238, 291
Vector-hold tracking, 290
Very long baseline interferometry, 200

X
XOR bit operation, 242, 250

Z
Zero padding, 254