K. Deergha Rao

Channel Coding Techniques for Wireless Communications
K. Deergha Rao
Research and Training Unit
for Navigational Electronics,
College of Engineering
Osmania University
Hyderabad, Telangana
India

ISBN 978-81-322-2291-0 ISBN 978-81-322-2292-7 (eBook)


DOI 10.1007/978-81-322-2292-7

Library of Congress Control Number: 2015930820

Springer New Delhi Heidelberg New York Dordrecht London


© Springer India 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.

Printed on acid-free paper

Springer (India) Pvt. Ltd. is part of Springer Science+Business Media (www.springer.com)


Consulting Editor
M.N.S. Swamy, Concordia University

To
My parents Boddu and Dalamma,
My beloved wife Sarojini,
and My mentor Prof. M.N.S. Swamy
Preface

People's lives have changed tremendously with the rapid growth of mobile
and wireless communication. Channel coding is the heart of digital communication
and data storage. Traditional block codes and convolutional codes are commonly
used in digital communications. To approach the theoretical limit of Shannon's
channel capacity, the length of a linear block code or the constraint length of a
convolutional code has to be increased, which in turn makes the decoder complexity
so high that the decoder may become physically unrealizable. The powerful turbo and
LDPC codes approach the theoretical limit of Shannon's channel capacity with
feasible decoding complexity. MIMO communication is a multiple-antenna
technology that is an effective way to achieve high-speed or high-reliability
communication, and it can be implemented by space-time coding. However, a single
book that can serve as a textbook for bachelor's and master's students on this topic
is lacking in the market.
In this book, many illustrative examples are included in each chapter for easy
understanding of the coding techniques. An attractive feature of this book is the
inclusion of MATLAB-based examples with codes to encourage readers to
implement them on their personal computers, become confident of the fundamentals,
and gain more insight into coding theory. In addition to problems that require
analytical solutions, MATLAB exercises are introduced to the reader at the end of
each chapter.
The book is divided into 11 chapters. Chapter 1 introduces the basic elements of
a digital communication system, statistical models for wireless channels, capacity
of a fading channel, Shannon’s noisy channel coding theorem and the basic idea of
coding gain. Chapter 2 gives an overview of the performance analysis of different
modulation techniques, and also deals with the performance of different diversity
combining techniques in a multi-channel receiver. Chapter 3 introduces Galois
fields and polynomials over Galois fields. Chapter 4 covers linear block codes
including RS codes because of their popularity in burst error correction in wireless
networks. Chapter 5 discusses the design of a convolutional encoder and Viterbi
decoding algorithm for the decoding of convolutional codes, as well as the
performance analysis of convolutional codes over AWGN and Rayleigh fading
channels. In this chapter, punctured convolutional codes are also discussed.


Chapter 6 provides a treatment of the design of turbo codes, BCJR algorithm for
iterative decoding of turbo codes, and performance analysis of turbo codes. Chapter
7 focuses on the design and analysis of Trellis-coded modulation schemes using
both the conventional and turbo codes. Chapter 8 describes the design of low-density
parity-check (LDPC) codes, decoding algorithms, and performance analysis of LDPC
codes. The erasure correcting codes like Luby transform (LT) codes and Raptor
codes are described in Chap. 9. Chapter 10 provides an in-depth study of multiple-
input multiple-output (MIMO) systems in which multiple antennas are used both at
the transmitter and at the receiver. The design of space-time codes and imple-
mentations of MIMO systems are discussed in Chap. 11.
Salient features of this book are as follows:
• Provides comprehensive exposure to all aspects of coding theory for wireless
channels, with clarity and in an easy-to-understand way
• Provides an understanding of the fundamentals, design, implementation and
applications of coding for wireless channels
• Presents illustration of coding techniques and concepts with several fully worked
numerical examples
• Provides complete design examples and implementation
• Includes PC-based MATLAB m-files for the illustrative examples in the book.
The motivation in writing this book is to include modern topics of increasing
importance such as turbo codes, LDPC codes and space-time coding in detail, in
addition to the traditional RS codes and convolutional codes, and also to provide a
comprehensive exposition of all aspects of coding for wireless channels. The text is
integrated with MATLAB-based programs to enhance the understanding of the
underlying theories of the subject. These MATLAB codes are free to download
from the book’s page on Springer.com.
This book is written at a level suitable for undergraduate and master students in
electronics and communication engineering, electrical and computer engineering,
computer science, and applied physics as well as for self-study by researchers,
practicing engineers and scientists. Depending on the chapters chosen, this text can
be used for teaching a one or two semester course on coding for wireless channels.
Readers are expected to have prerequisite knowledge of the principles of digital
communication.

K. Deergha Rao
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Digital Communication System. . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Wireless Communication Channels . . . . . . . . . . . . . . . . . . . . 1
1.2.1 Binary Erasure Channel (BEC) . . . . . . . . . . . . . . . . . 1
1.2.2 Binary Symmetric Channel (BSC). . . . . . . . . . . . . . . 2
1.2.3 Additive White Gaussian Noise Channel . . . . . . . . . . 3
1.2.4 Gilbert–Elliott Channel . . . . . . . . . . . . . . . . . . . . . . 3
1.2.5 Fading Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.6 Fading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Statistical Models for Fading Channels . . . . . . . . . . . . . . . . . 6
1.3.1 Probability Density Function of Rician
Fading Channel . . . . . . . . . . . . . . . . . . . . . . . .... 6
1.3.2 Probability Density Function of Rayleigh
Fading Channel . . . . . . . . . . . . . . . . . . . . . . . .... 6
1.3.3 Probability Density Function of Nakagami
Fading Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Channel Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.1 Channel Capacity of Binary Erasure Channel . . . . . . . 9
1.4.2 Channel Capacity of Binary Symmetric Channel. . . . . 9
1.4.3 Capacity of AWGN Channel . . . . . . . . . . . . . . . . . . 9
1.4.4 Channel Capacity of Gilbert–Elliott Channels . . . . . . . 11
1.4.5 Ergodic Capacity of Fading Channels . . . . . . . . . . . . 11
1.4.6 Outage Probability of a Fading Channel . . . . . . . . . . 13
1.4.7 Outage Capacity of Fading Channels. . . . . . . . . . . . . 14
1.4.8 Capacity of Fading Channels with CSI
at the Transmitter and Receiver . . . . . . . . . . . . .... 15
1.5 Channel Coding for Improving the Performance
of Communication System . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5.1 Shannon’s Noisy Channel Coding Theorem . . . . . . . . 16
1.5.2 Channel Coding Principle . . . . . . . . . . . . . . . . . . . . 16
1.5.3 Channel Coding Gain . . . . . . . . . . . . . . . . . . . . . . . 16


1.6 Some Application Examples of Channel Coding . . . . . . . . . . . 17


1.6.1 Error Correction Coding in GSM . . . . . . . . . . . . . . . 17
1.6.2 Error Correction Coding in W-CDMA. . . . . . . . . . . . 18
1.6.3 Digital Video Broadcasting Channel Coding. . . . . . . . 18
1.6.4 Error Correction Coding in GPS L5 Signal . . . . . . . . 19
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2 Performance of Digital Communication Over


Fading Channels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 21
2.1 BER Performance of Different Modulation Schemes
in AWGN, Rayleigh, and Rician Fading Channels . . . . . . ... 21
2.1.1 BER of BPSK Modulation in AWGN Channel. . . ... 22
2.1.2 BER of BPSK Modulation in Rayleigh
Fading Channel . . . . . . . . . . . . . . . . . . . . . . . . ... 22
2.1.3 BER of BPSK Modulation in Rician
Fading Channel . . . . . . . . . . . . . . . . . . . . . . . . ... 23
2.1.4 BER Performance of BFSK in AWGN, Rayleigh,
and Rician Fading Channels . . . . . . . . . . . . . . . . ... 24
2.1.5 Comparison of BER Performance of BPSK,
QPSK, and 16-QAM in AWGN and Rayleigh
Fading Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 Wireless Communication Techniques . . . . . . . . . . . . . . . . . . 28
2.2.1 DS-CDMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.2 FH-CDMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.3 OFDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.4 MC-CDMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3 Diversity Reception. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.3.1 Receive Diversity with N Receive Antennas
in AWGN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.4 Diversity Combining Techniques . . . . . . . . . . . . . . . . . . . . . 40
2.4.1 Selection Diversity . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4.2 Equal Gain Combining (EGC) . . . . . . . . . . . . . . . . . 42
2.4.3 Maximum Ratio Combining (MRC) . . . . . . . . . . . . . 42
2.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.6 MATLAB Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3 Galois Field Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


3.1 Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.2 Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3 Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.4 Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.5 Elementary Properties of Galois Fields . . . . . . . . . . . . . . . . . 52
3.6 Galois Field Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3.6.1 Addition and Subtraction of Polynomials . . . . . . . . . . 52


3.6.2 Multiplication of Polynomials. . . . . . . . . . . . . . . . . . 53
3.6.3 Multiplication of Polynomials Using MATLAB . . . . . 53
3.6.4 Division of Polynomials . . . . . . . . . . . . . . . . . . . . . 54
3.6.5 Division of Polynomials Using MATLAB . . . . . . . . . 55
3.7 Polynomials Over Galois Fields . . . . . . . . . . . . . . . . . . . . . . 55
3.7.1 Irreducible Polynomial. . . . . . . . . . . . . . . . . . . . . . . 56
3.7.2 Primitive Polynomials . . . . . . . . . . . . . . . . . . . . . . . 56
3.7.3 Checking of Polynomials for Primitiveness
Using MATLAB. . . . . . . . . . . . . . . . . . . . . . ..... 56
3.7.4 Generation of Primitive Polynomials
Using MATLAB. . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.8 Construction of Galois Field GF(2m) from GF(2) . . . . . . . . . . 58
3.8.1 Construction of GF(2m), Using MATLAB . . . . . . . . . 63
3.9 Minimal Polynomials and Conjugacy Classes of GF(2m) . . . . . 65
3.9.1 Minimal Polynomials . . . . . . . . . . . . . . . . . . . . . . . 65
3.9.2 Conjugates of GF Elements . . . . . . . . . . . . . . . . . . . 65
3.9.3 Properties of Minimal Polynomial . . . . . . . . . . . . . . . 66
3.9.4 Construction of Minimal Polynomials . . . . . . . . . . . . 67
3.9.5 Construction of Conjugacy Classes
Using MATLAB. . . . . . . . . . . . . . . . . . . . . . ..... 69
3.9.6 Construction of Minimal Polynomials
Using MATLAB. . . . . . . . . . . . . . . . . . . . . . ..... 69
3.10 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..... 70

4 Linear Block Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73


4.1 Block Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.2 Linear Block Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.2.1 Linear Block Code Properties . . . . . . . . . . . . . . . . . . 75
4.2.2 Generator and Parity Check Matrices. . . . . . . . . . . . . 76
4.2.3 Weight Distribution of Linear Block Codes . . . . . . . . 78
4.2.4 Hamming Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.2.5 Syndrome Table Decoding . . . . . . . . . . . . . . . . . . . . 81
4.3 Cyclic Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.3.1 The Basic Properties of Cyclic Codes . . . . . . . . . . . . 83
4.3.2 Encoding Algorithm for an (n, k) Cyclic Codes . . . . . 84
4.3.3 Encoder for Cyclic Codes Using Shift Registers . . . . . 87
4.3.4 Shift Register Encoders for Cyclic Codes. . . . . . . . . . 89
4.3.5 Cyclic Redundancy Check Codes . . . . . . . . . . . . . . . 90
4.4 BCH Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.4.1 BCH Code Design . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.4.2 Berlekamp’s Algorithm for Binary BCH
Codes Decoding . . . . . . . . . . . . . . . . . . . . . . . .... 96

4.4.3 Chien Search Algorithm . . . . . . . . . . . . . . . . . . . . . 97


4.5 Reed–Solomon Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.5.1 Reed–Solomon Encoder. . . . . . . . . . . . . . . . . . . . . . 102
4.5.2 Decoding of Reed–Solomon Codes . . . . . . . . . . . . . . 105
4.5.3 Binary Erasure Decoding . . . . . . . . . . . . . . . . . . . . . 114
4.5.4 Non-binary Erasure Decoding. . . . . . . . . . . . . . . . . . 115
4.6 Performance Analysis of RS Codes . . . . . . . . . . . . . . . . . . . . 118
4.6.1 BER Performance of RS Codes for BPSK
Modulation in AWGN and Rayleigh
Fading Channels . . . . . . . . . . . . . . . . . . . . . . . . ... 118
4.6.2 BER Performance of RS Codes for Non-coherent
BFSK Modulation in AWGN and Rayleigh
Fading Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.8 MATLAB Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

5 Convolutional Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


5.1 Structure of Non-systematic Convolutional Encoder . . . . . . . . 127
5.1.1 Impulse Response of Convolutional Codes. . . . . . . . . 129
5.1.2 Constraint Length . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.1.3 Convolutional Encoding Using MATLAB . . . . . . . . . 131
5.2 Structure of Systematic Convolutional Encoder. . . . . . . . . . . . 132
5.3 The Structural Properties of Convolutional Codes . . . . . . . . . . 132
5.3.1 State Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3.2 Catastrophic Convolutional Codes. . . . . . . . . . . . . . . 133
5.3.3 Transfer Function of a Convolutional Encoder . . . . . . 134
5.3.4 Distance Properties of Convolutional Codes . . . . . . . . 139
5.3.5 Trellis Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.4 Punctured Convolutional Codes . . . . . . . . . . . . . . . . . . . . . . 143
5.5 The Viterbi Decoding Algorithm. . . . . . . . . . . . . . . . . . . . . . 145
5.5.1 Hard-decision Decoding. . . . . . . . . . . . . . . . . . . . . . 147
5.5.2 Soft-decision Decoding . . . . . . . . . . . . . . . . . . . . . . 147
5.6 Performance Analysis of Convolutional Codes . . . . . . . . . . . . 151
5.6.1 Binary Symmetric Channel . . . . . . . . . . . . . . . . . . . 151
5.6.2 AWGN Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.6.3 Rayleigh Fading Channel. . . . . . . . . . . . . . . . . . . . . 155
5.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.8 MATLAB Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

6 Turbo Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 161


6.1 Non-recursive and Recursive Systematic
Convolutional Encoders . . . . . . . . . . . . . . . . . . . . . . . . . ... 161
6.1.1 Recursive Systematic Convolutional
(RSC) Encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.2 Turbo Encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6.2.1 Different Types of Interleavers . . . . . . . . . . . . . . . . . 164
6.2.2 Turbo Coding Illustration. . . . . . . . . . . . . . . . . . . . . 165
6.2.3 Turbo Coding Using MATLAB . . . . . . . . . . . . . . . . 168
6.3 Turbo Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.3.1 The BCJR Algorithm . . . . . . . . . . . . . . . . . . . . . . . 178
6.3.2 Turbo Decoding Illustration . . . . . . . . . . . . . . . . . . . 182
6.3.3 Convergence Behavior of the Turbo Codes . . . . . . . . 192
6.3.4 EXIT Analysis of Turbo Codes . . . . . . . . . . . . . . . . 192
6.4 Performance Analysis of the Turbo Codes . . . . . . . . . . . . . . . 195
6.4.1 Upper Bound for the Turbo Codes
in AWGN Channel . . . . . . . . . . . . . . . . . . . . . . ... 195
6.4.2 Upper Bound for Turbo Codes in Rayleigh
Fading Channel . . . . . . . . . . . . . . . . . . . . . . . . ... 197
6.4.3 Effect of Free Distance on the Performance
of the Turbo Codes . . . . . . . . . . . . . . . . . . . . . . ... 200
6.4.4 Effect of Number of Iterations on the Performance
of the Turbo Codes . . . . . . . . . . . . . . . . . . . . . . ... 203
6.4.5 Effect of Puncturing on the Performance
of the Turbo Codes . . . . . . . . . . . . . . . . . . . . . . . . . 204
6.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
6.6 MATLAB Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

7 Bandwidth Efficient Coded Modulation . . . . . . . . . . . . . . . . . . . . 209


7.1 Set Partitioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
7.2 Design of the TCM Scheme . . . . . . . . . . . . . . . . . . . . . . . . . 211
7.3 Decoding TCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
7.4 TCM Performance Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 219
7.4.1 Asymptotic Coding Gain . . . . . . . . . . . . . . . . . . . . . 219
7.4.2 Bit Error Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
7.4.3 Simulation of the BER Performance of a 8-State
8-PSK TCM in the AWGN and Rayleigh
Fading Channels Using MATLAB . . . . . . . . . . . . . . 230
7.5 Turbo Trellis Coded Modulation (TTCM) . . . . . . . . . . . . . . . 232
7.5.1 TTCM Encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7.5.2 TTCM Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

7.5.3 Simulation of the BER Performance


of the 8-State 8-PSK TTCM in AWGN
and Rayleigh Fading Channels . . . . . . . . . . . . . . . . . 234
7.6 Bit-interleaved Coded Modulation. . . . . . . . . . . . . . . . . . . . . 237
7.6.1 BICM Encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
7.6.2 BICM Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
7.7 Bit-interleaved Coded Modulation Using Iterative Decoding. . . 239
7.7.1 BICM-ID Encoder and Decoder . . . . . . . . . . . . . . . . 240
7.7.2 Simulation of the BER Performance of 8-State
8-PSK BICM and BICM-ID in AWGN
and Rayleigh Fading Channels . . . . . . . . . . . . . . . . . 242
7.8 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Appendix A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250

8 Low Density Parity Check Codes . . . . . . . . . . . . . . . . . . . ..... 251


8.1 LDPC Code Properties . . . . . . . . . . . . . . . . . . . . . . . ..... 251
8.2 Construction of Parity Check Matrix H . . . . . . . . . . . . ..... 252
8.2.1 Gallager Method for Random Construction
of H for Regular Codes . . . . . . . . . . . . . . . . . ..... 252
8.2.2 Algebraic Construction of H for Regular Codes ..... 253
8.2.3 Random Construction of H for Irregular Codes. ..... 254
8.3 Representation of Parity Check Matrix Using
Tanner Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..... 255
8.3.1 Cycles of Tanner Graph. . . . . . . . . . . . . . . . . ..... 256
8.3.2 Detection and Removal of Girth 4 of a Parity
Check Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
8.4 LDPC Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
8.4.1 Preprocessing Method . . . . . . . . . . . . . . . . . . . . . . . 260
8.5 Efficient Encoding of LDPC Codes. . . . . . . . . . . . . . . . . . . . 266
8.5.1 Efficient Encoding of LDPC Codes
Using MATLAB. . . . . . . . . . . . . . . . . . . . . . ..... 269
8.6 LDPC Decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..... 270
8.6.1 LDPC Decoding on Binary Erasure Channel
Using Message-Passing Algorithm . . . . . . . . . ..... 271
8.6.2 LDPC Decoding on Binary Erasure Channel
Using MATLAB. . . . . . . . . . . . . . . . . . . . . . . . . . . 274
8.6.3 Bit-Flipping Decoding Algorithm . . . . . . . . . . . . . . . 275
8.6.4 Bit-Flipping Decoding Using MATLAB . . . . . . . . . . 278
8.7 Sum–Product Decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
8.7.1 Log Domain Sum–Product Algorithm (SPA) . . . . . . . 284
8.7.2 The Min-Sum Algorithm . . . . . . . . . . . . . . . . . . . . . 285
8.7.3 Sum–Product and Min-Sum Algorithms
for Decoding of Rate 1/2 LDPC Codes
Using MATLAB. . . . . . . . . . . . . . . . . . . . . . ..... 289

8.8 EXIT Analysis of LDPC Codes . . . . . . . . . . . . . . . . . . . ... 291


8.8.1 Degree Distribution . . . . . . . . . . . . . . . . . . . . . . ... 291
8.8.2 Ensemble Decoding Thresholds . . . . . . . . . . . . . ... 293
8.8.3 EXIT Charts for Irregular LDPC Codes
in Binary Input AWGN Channels . . . . . . . . . . . . ... 294
8.9 Performance Analysis of LDPC Codes . . . . . . . . . . . . . . ... 296
8.9.1 Performance Comparison of Sum–Product
and Min-Sum Algorithms for Decoding
of Regular LDPC Codes in AWGN Channel . . . . ... 296
8.9.2 BER Performance Comparison of Regular
and Irregular LDPC Codes in AWGN Channel. . . ... 296
8.9.3 Effect of Block Length on the BER Performance
of LDPC Codes in AWGN Channel . . . . . . . . . . ... 297
8.9.4 Error Floor Comparison of Irregular LDPC Codes
of Different Degree Distribution
in AWGN Channel . . . . . . . . . . . . . . . . . . . . . . . . . 298
8.10 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
8.11 MATLAB Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302

9 LT and Raptor Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . ...... 305


9.1 LT Codes Design . . . . . . . . . . . . . . . . . . . . . . . . . . ...... 305
9.1.1 LT Degree Distributions . . . . . . . . . . . . . . . ...... 306
9.1.2 Important Properties of the Robust Soliton
Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
9.1.3 LT Encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
9.1.4 Tanner Graph of LT Codes . . . . . . . . . . . . . . . . . . . 310
9.1.5 LT Decoding with Hard Decision . . . . . . . . . . . . . . . 310
9.1.6 Hard-Decision LT Decoding Using MATLAB . . . . . . 312
9.1.7 BER Performance of LT Decoding
over BEC Using MATLAB . . . . . . . . . . . . . ...... 314
9.2 Systematic LT Codes . . . . . . . . . . . . . . . . . . . . . . . ...... 315
9.2.1 Systematic LT Codes Decoding . . . . . . . . . . ...... 316
9.2.2 BER Performance Analysis of Systematic
LT Codes Using MATLAB . . . . . . . . . . . . . . . . . . . 316
9.3 Raptor Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
9.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
9.5 MATLAB Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323

10 MIMO System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325


10.1 What Is MIMO? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
10.2 MIMO Channel Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
10.2.1 The Frequency Flat MIMO Channel . . . . . . . . . . . . . 326

10.2.2 The Frequency-Selective MIMO Channel. . . . . . . . . . 327


10.2.3 MIMO-OFDM System . . . . . . . . . . . . . . . . . . . . . . 327
10.3 Channel Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
10.3.1 LS Channel Estimation . . . . . . . . . . . . . . . . . . . . . . 329
10.3.2 DFT-Based Channel Estimation . . . . . . . . . . . . . . . . 330
10.3.3 MIMO-OFDM Channel Estimation . . . . . . . . . . . . . . 330
10.3.4 Channel Estimation Using MATLAB . . . . . . . . . . . . 331
10.4 MIMO Channel Decomposition . . . . . . . . . . . . . . . . . . . . . . 333
10.5 MIMO Channel Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
10.5.1 Capacity of Deterministic MIMO Channel
When CSI Is Known to the Transmitter . . . ........ 335
10.5.2 Deterministic MIMO Channel Capacity
When CSI Is Unknown at the Transmitter . . . . . . . . . 337
10.5.3 Random MIMO Channel Capacity . . . . . . . . . . . . . . 338
10.6 MIMO Channel Equalization . . . . . . . . . . . . . . . . . . . . . . . . 348
10.6.1 Zero Forcing (ZF) Equalization . . . . . . . . . . . . . . . . 350
10.6.2 Minimum Mean Square Error (MMSE)
Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
10.6.3 Maximum Likelihood Equalization . . . . . . . . . . . . . . 350
10.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
10.8 MATLAB Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353

11 Space–Time Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355


11.1 Space–Time-Coded MIMO System . . . . . . . . . . . . . . . . . . . . 355
11.2 Space–Time Block Code (STBC) . . . . . . . . . . . . . . . . . . . . . 356
11.2.1 Rate Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
11.2.2 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
11.2.3 Diversity Criterion . . . . . . . . . . . . . . . . . . . . . . . . . 357
11.2.4 Performance Criteria . . . . . . . . . . . . . . . . . . . . . . . . 358
11.2.5 Decoding STBCs . . . . . . . . . . . . . . . . . . . . . . . . . . 359
11.3 Alamouti Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
11.3.1 2-Transmit, 1-Receive Alamouti STBC Coding. . . . . . 360
11.3.2 2-Transmit, 2-Receive Alamouti STBC Coding. . . . . . 361
11.3.3 Theoretical BER Performance of BPSK
Alamouti Codes Using MATLAB . . . . . . . . . . ..... 363
11.4 Higher-Order STBCs. . . . . . . . . . . . . . . . . . . . . . . . . ..... 364
11.4.1 3-Transmit, 4-Receive STBC Coding. . . . . . . . ..... 365
11.4.2 Simulation of BER Performance of STBCs
Using MATLAB. . . . . . . . . . . . . . . . . . . . . . ..... 369
11.5 Space–Time Trellis Coding . . . . . . . . . . . . . . . . . . . . ..... 372
11.5.1 Space–Time Trellis Encoder . . . . . . . . . . . . . . ..... 373
11.5.2 Simulation of BER Performance of 4-State
QPSK STTC Using MATLAB . . . . . . . . . . . . ..... 381

11.6 MIMO-OFDM Implementation . . . . . . . . . . . . . . . . . . . . . . . 387


11.6.1 Space–Time-Coded OFDM . . . . . . . . . . . . . . . . . . . 389
11.6.2 Space–Frequency-Coded OFDM . . . . . . . . . . . . . . . . 390
11.6.3 Space–Time–Frequency-Coded OFDM . . . . . . . . . . . 390
11.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
11.8 MATLAB Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
About the Author

K. Deergha Rao is director and professor in the Navigational Electronics Research


and Training Unit (NERTU), University College of Engineering, Osmania
University, Hyderabad, India. Earlier, he was a postdoctoral fellow and part-time
professor at the Department of Electronics and Communication Engineering,
Concordia University, Montreal, Canada. He has executed several research projects
for premium Indian organizations such as Defence Research and Development
Organization (DRDO), Hindustan Aeronautics Limited (HAL) and Bharat Elec-
tronics Limited (BEL). His teaching areas are digital signal processing, digital
image processing, coding theory for wireless channels and MIMO wireless com-
munications, whereas his research interests include GPS signal processing, wireless
channel coding, blind equalization, robust multiuser detection, OFDM UWB signal
processing, MIMO SFBC OFDM, image processing, cryptosystems and VLSI
signal processing. Professor Rao has presented papers at IEEE international con-
ferences several times in the U.S.A., Switzerland and Russia. He has more than 100
publications to his credit, including more than 60 publications in IEEE journals and
conference proceedings. He is a senior member of IEEE and has served as chairman
of communications and signal processing societies joint chapter of IEEE Hyderabad
section. He is currently a member of the IEEE SPS chapters committee. He was
awarded 2013 IETE K.S. Krishnan Memorial Award for the best system-oriented
paper. He has served as Communications Track Chair for IEEE INDICON 2011
held at Hyderabad. He is an editorial board member of the International Journal of
Sustainable Aviation (Inderscience Publishers, U.K.). He has coauthored a book,
Digital Signal Processing (Jaico Publishing House, India).

Chapter 1
Introduction

In this chapter, a digital communication system with coding is first described.


Second, various wireless communication channels, their probability density func-
tions, and capacities are discussed. Further, Shannon’s noisy channel coding the-
orem, channel coding principle, and channel coding gain are explained. Finally,
some application examples of channel coding are included.

1.1 Digital Communication System

A communication system is a means of conveying information from one user to


another user. The digital communication system is one in which the data are
transmitted in digital form. A digital communication system schematic diagram is
shown in Fig. 1.1. The source coding is used to remove redundancy from source
information for efficient transmission. The transmitted signal power and channel
bandwidth are the key parameters in the design of a digital communication system.
Using these parameters, the ratio of signal energy per bit (E_b) to noise power
spectral density (N_0) is determined. This ratio is unique in determining the
probability of bit error, often referred to as the bit error rate (BER). In practice,
for a fixed E_b/N_0, an acceptable BER is possible with channel coding. This can be
achieved by adding additional digits to the transmitted information stream. These
additional digits do not carry any new information, but they make it possible for the
receiver to detect and correct errors, thereby reducing the overall probability of error.

1.2 Wireless Communication Channels

1.2.1 Binary Erasure Channel (BEC)

Erasure is a special type of error with known location. The BEC transmits one of
the two binary bits 0 and 1. However, an erasure ‘e’ is produced when the receiver
receives an unreliable bit. The BEC channel output consists of 0, 1, and e, as shown in Fig. 1.2.


[Fig. 1.1 Digital communication system with coding: source symbols → source encoder → channel encoder → modulator/transmitter → physical channel → receiver/demodulator → channel decoder → source decoder → sink]

[Fig. 1.2 Binary erasure channel: inputs 0 and 1 map to outputs 0, e, and 1]

The BEC erases a bit with probability ε, called the erasure probability of
the channel. Thus, the channel transition probabilities for the BEC are

P(y = 0 | x = 0) = 1 − ε
P(y = e | x = 0) = ε
P(y = 1 | x = 0) = 0
P(y = 0 | x = 1) = 0
P(y = e | x = 1) = ε
P(y = 1 | x = 1) = 1 − ε    (1.1)
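The transition probabilities in Eq. (1.1) are easy to simulate. The following minimal MATLAB sketch (not one of the book's numbered programs) passes random bits through a BEC, marking erased positions with NaN; the variable names are illustrative only.

% Sketch: pass bits through a BEC with erasure probability epsilon
N = 1e6; epsilon = 0.1;
x = randi([0 1], 1, N);               % transmitted bits
y = x;
y(rand(1, N) < epsilon) = NaN;        % erase each bit independently
observedErasureRate = mean(isnan(y))  % should be close to epsilon

Counting the NaN positions recovers the erasure probability, illustrating that erasure locations, unlike error locations, are known to the receiver.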

1.2.2 Binary Symmetric Channel (BSC)

The BSC is a discrete memoryless channel that has binary symbols at both the input
and the output. It is symmetric because the probability of receiving 0 when 1 is
transmitted is the same as the probability of receiving 1 when 0 is transmitted.

[Fig. 1.3 Binary symmetric channel: each input bit is received correctly with probability 1 − P and flipped with probability P]

This probability is called the crossover probability of the channel, denoted by P,
as shown in Fig. 1.3. The probability of no error, i.e., of receiving the same bit
as transmitted, is 1 − P. Hence, the channel transition probabilities for the BSC are

P(y = 0 | x = 0) = 1 − P
P(y = 0 | x = 1) = P
P(y = 1 | x = 0) = P
P(y = 1 | x = 1) = 1 − P    (1.2)
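A BSC can be simulated in the same spirit; the sketch below (illustrative, not from the book) flips each transmitted bit independently with the crossover probability P.

% Sketch: pass bits through a BSC with crossover probability P
N = 1e6; P = 0.05;
x = randi([0 1], 1, N);           % transmitted bits
flips = rand(1, N) < P;           % error locations, unknown to the receiver
y = xor(x, flips);                % received bits
observedErrorRate = mean(y ~= x)  % should be close to P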

1.2.3 Additive White Gaussian Noise Channel

In an AWGN channel, the signal is degraded by white noise η, which has a constant
spectral density and a Gaussian amplitude distribution. The Gaussian distribution
has a probability density function (pdf) given by

Pdf(η) = (1/√(2πσ²)) exp(−η²/(2σ²))    (1.3)

where σ² is the variance of the Gaussian random process.

1.2.4 Gilbert–Elliott Channel

For bursty wireless channels, the Gilbert–Elliott (GE) channel [1, 2] is one of the
simplest and most practical models. The GE channel is a discrete-time stationary
model, as shown in Fig. 1.4, with two states: a bad state or burst state '2' wherein a
BSC resides with a high error probability (1 − P₂), and a good state '1' wherein a
BSC resides with a low error probability (1 − P₁).
[Fig. 1.4 A two-state channel: a good-state BSC (state 1) and a bad-state BSC (state 2) with transitions between the states]

Another common GE example is one in which a BEC resides in the bad state with ε
close to unity, assigning erasures to all of the bits transmitted during the
high-error-rate (bad) state.

1.2.5 Fading Channel

In the radio channel, the received power is affected by attenuations due to a
combination of the following effects:
1. Path loss: This is the signal attenuation. The power received by the receiving
antenna decreases as the distance between transmitter and receiver increases.
The power attenuation is proportional to (distance)^α, where α values range from
2 to 4. When the distance varies with time, the path loss also varies.
2. Shadowing loss: This is due to the absorption of the radiated signal by
scattering structures. It is modeled as a random variable with a lognormal
distribution.
3. Fading loss: The combination of multipath propagation and the Doppler
frequency shift produces random fluctuations in the received power, which give
the fading losses.

1.2.6 Fading

Fading refers to the variation of the received power with time. It is due to
the combination of multipath propagation and the Doppler frequency shift, which
produce time-varying attenuations and delays that may degrade the communication
system performance. The received signal is a distorted version of the transmitted
signal: a sum of signal components from the various paths with different delays
due to multipath and motion.
Let T_s be the duration of a transmitted signal and B_x be the signal bandwidth.
The fading channel can be classified based on the coherence time and the coherence
bandwidth of the channel, which are defined as follows:
Doppler spread: Significant changes in the channel occur over a time T_c whose
order of magnitude is the inverse of the maximum Doppler shift B_D among the
various paths, called the Doppler spread of the channel.
The coherence time of the channel is

T_c ≜ 1/B_D    (1.4)

Delay spread: With T_D denoting the maximum among the path delay differences,
called the delay spread of the channel, a significant change in the channel occurs
when the frequency change exceeds the inverse of T_D.
The coherence bandwidth of the channel is

B_c ≜ 1/T_D    (1.5)
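As a quick worked example with hypothetical numbers (not taken from the text), a maximum Doppler shift of 100 Hz and a delay spread of 2 μs give a coherence time of 10 ms and a coherence bandwidth of 500 kHz by Eqs. (1.4) and (1.5):

B_D = 100;     % assumed Doppler spread (Hz)
T_D = 2e-6;    % assumed delay spread (s)
T_c = 1/B_D    % coherence time, Eq. (1.4): 0.01 s
B_c = 1/T_D    % coherence bandwidth, Eq. (1.5): 5e5 Hz

A signal whose symbol duration is much shorter than T_c then sees slow fading, and one whose bandwidth is much smaller than B_c sees frequency-flat fading.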

The classification of fading channels is shown in Fig. 1.5.


Fast fading causes short burst errors, which are easy to correct. Slow fading
affects many successive symbols, leading to long burst errors. Due to energy
absorption and scattering in physical channel propagation media, the transmitted
signal is attenuated and becomes noisy. In mobile communications, the attenuation
varies with the vehicle speed, surrounding trees, buildings, mountains, and terrain.

[Fig. 1.5 Classification of fading channels: slow fading (T_s ≪ T_c), fast fading (T_s ≫ T_c), frequency flat (B_x ≪ B_c), and frequency selective (B_x ≫ B_c)]



Depending on the receiver location and motion, the received signals take several
different paths and interfere with one another. As such, wireless channels are called
multipath fading channels. Hence, the additive white Gaussian noise (AWGN)
assumption for wireless channels is not realistic, and the amplitudes in a wireless
channel are often modeled using a Rayleigh or Rician probability density function.
The most common fading channel models are as follows:
1. Flat independent fading channel
2. Block fading channel
In a flat independent fading channel, the attenuation remains constant for one
symbol period and varies from symbol to symbol. In a block fading channel, by
contrast, the attenuation is constant over a block of symbols and varies from block
to block.

1.3 Statistical Models for Fading Channels

1.3.1 Probability Density Function of Rician Fading Channel

When the received signal is made up of multiple reflective rays plus a significant
line-of-sight (non-faded) component, the received envelope amplitude has a Rician
probability density function (PDF) as given in Eq. (1.6), and the fading is referred
to as Rician fading.

Pdf(x) = (x/σ²) exp(−(x² + A²)/(2σ²)) I₀(xA/σ²)   for x ≥ 0, A ≥ 0
       = 0   otherwise    (1.6)

where x is the amplitude of the received faded signal, I₀ is the zero-order modified
Bessel function of the first kind, and A denotes the peak magnitude of the non-faded
signal component, called the specular component. The Rician PDF for different
values of sigma and A = 1 is shown in Fig. 1.6.

1.3.2 Probability Density Function of Rayleigh Fading Channel

Rayleigh fading occurs when there are multiple indirect paths between the transmitter
and the receiver and no direct non-fading or line of sight (LOS) path. It represents the
worst case scenario for the transmission channel. Rayleigh fading assumes that a
received multipath signal consists of a large number of reflected waves with
independent and identically distributed phase and amplitude. The envelope of the
received carrier signal is Rayleigh distributed in wireless communications [3].

[Fig. 1.6 Probability density of Rician fading channel for sigma = 0.25, 0.6, and 1]
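A plot like Fig. 1.6 can be reproduced directly from Eq. (1.6); the following minimal MATLAB sketch (not one of the book's numbered programs) does so for A = 1 and the three sigma values above. Setting A = 0 in the same expression yields the Rayleigh PDF of Eq. (1.7).

x = 0:0.01:8; A = 1;
for sigma = [0.25 0.6 1]
    pdfRice = (x/sigma^2) .* exp(-(x.^2 + A^2)/(2*sigma^2)) ...
        .* besseli(0, x*A/sigma^2);    % Eq. (1.6)
    plot(x, pdfRice); hold on;
end
xlabel('x'); ylabel('probability density');
legend('sigma=0.25', 'sigma=0.6', 'sigma=1');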

As the magnitude of the specular component approaches zero, the Rician PDF
approaches a Rayleigh PDF, expressed as follows:

Pdf(x) = (x/σ²) exp(−x²/(2σ²))   for x ≥ 0
       = 0   otherwise    (1.7)

The Rayleigh PDF for different values of sigma is shown in Fig. 1.7.
Additive white Gaussian noise and Rician channels provide fairly good
performance, corresponding to an open-country environment, while the Rayleigh
channel, which best describes urban environment fading, provides relatively worse
performance.

1.3.3 Probability Density Function of Nakagami Fading Channel

The Nakagami model is another very popular empirical fading model [4]:

Pdf(r) = (2/Γ(m)) (m/(2σ²))^m r^(2m−1) exp(−m r²/(2σ²))    (1.8)

[Fig. 1.7 Probability density of Rayleigh fading channel for sigma = 0.2, 0.6, and 1.0]

where σ² = ½E[r²], Γ(·) is the gamma function, and m ≥ ½ is the fading figure.
The received instantaneous power r² satisfies a gamma distribution. The phase
of the signal is uniformly distributed in [0, 2π). The Nakagami distribution is a
general model obtained from experimental data fitting, and its shape is very similar
to that of the Rice distribution. The shape parameter m measures the severity of
fading.
When
m = 1, it is Rayleigh fading.
m → ∞, it is the AWGN channel; that is, there is no fading.
m > 1, it is close to Rician fading.
However, due to its lack of a physical basis, the Nakagami distribution is not as
popular as the Rician and Rayleigh fading models in mobile communications.
Many other fading channel models are discussed in Kuhn [5].
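Although the book gives no figure for the Nakagami PDF, Eq. (1.8) can be plotted in the same way as Figs. 1.6 and 1.7; the sketch below (illustrative, with assumed parameter values) shows how the shape parameter m controls the severity of fading, the m = 1 curve coinciding with the Rayleigh PDF.

r = 0.01:0.01:3;
sigma = 1/sqrt(2);              % assumed so that E[r^2] = 2*sigma^2 = 1
for m = [0.5 1 3]
    pdfNak = (2/gamma(m)) * (m/(2*sigma^2))^m ...
        .* r.^(2*m-1) .* exp(-m*r.^2/(2*sigma^2));   % Eq. (1.8)
    plot(r, pdfNak); hold on;
end
xlabel('r'); ylabel('probability density');
legend('m=0.5', 'm=1 (Rayleigh)', 'm=3');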

1.4 Channel Capacity

Channel capacity can be defined as the maximum rate at which information can
be transmitted reliably over a channel.

Spectral or bandwidth efficiency = Transmission rate / Channel bandwidth
                                 = R_s H / B bits/s/Hz    (1.9)

where R_s is the symbol rate, and H is the entropy.


The channel capacity, also known as Shannon's capacity, can be defined as the
maximum average mutual information for a channel with an energy constraint.

1.4.1 Channel Capacity of Binary Erasure Channel

The channel capacity of the BEC is

C_BEC = 1 − ε    (1.10)

where ε is the probability of a bit erasure (the erasure being represented by the
symbol e).

1.4.2 Channel Capacity of Binary Symmetric Channel

The channel capacity of the BSC is as follows:

C_BSC = 1 − H(P)    (1.11)

where H(P) is the binary entropy function given by Ryan and Lin [6]

H(P) = −P log₂(P) − (1 − P) log₂(1 − P)    (1.12)

and P is the probability of a bit error.
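Equations (1.11) and (1.12) are readily evaluated numerically; the short sketch below (not one of the book's numbered programs) plots the BSC capacity, which is 1 at P = 0 or P = 1 and falls to 0 at P = 0.5.

P = 0.001:0.001:0.999;
HP = -P.*log2(P) - (1-P).*log2(1-P);   % binary entropy, Eq. (1.12)
C_BSC = 1 - HP;                        % Eq. (1.11)
plot(P, C_BSC);
xlabel('Crossover probability P'); ylabel('C_{BSC} (bits/channel use)');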

1.4.3 Capacity of AWGN Channel

An AWGN channel can be expressed by the following input–output relationship:

y = x + η    (1.13)

where x is the transmitted source signal, y denotes the output of the channel, and η
is a real Gaussian process with zero mean, variance σ_η² = E[η²], and two-sided
power spectral density N₀/2. The mutual information I(x; y) with a constraint on the
energy of the input signal can be expressed as follows:
I(x; y) = H(y) − H(η)    (1.14)

where H(y) is the entropy of the channel output, and H(η) is the entropy of the
AWGN. Since σ_y² = σ_x² + σ_η², the entropy H(y) is bounded by
½ log₂ 2πe(σ_x² + σ_η²), and thus

I(x; y) ≤ ½ log₂ 2πe(σ_x² + σ_η²) − ½ log₂ 2πeσ_η²
        = ½ log₂(1 + σ_x²/σ_η²)    (1.15)

The mutual information I(x; y) is maximum when the input x is a real Gaussian
process with zero mean and variance σ_x². The capacity of the channel is the
maximum information that can be transmitted from x to y by varying the PDF of the
transmit signal x. The signal-to-noise ratio (SNR) is defined by

SNR ≜ σ_x²/σ_η²    (1.16)

Thus, the capacity of an AWGN channel is given by

C = ½ log₂(1 + SNR) bits/s/Hz    (1.17)

Since σ_x² = B·E_s and σ_η² = B·N₀/2, Eq. (1.17) can be rewritten as follows:

C = ½ log₂(1 + 2E_s/N₀) bits/s/Hz    (1.18)

where B is the bandwidth, E_s denotes the symbol energy, and N₀ represents the
noise spectral density.
If x and η are independent complex Gaussian processes, the channel capacity can
be expressed as follows:

C = log₂(1 + SNR) bits/s/Hz    (1.19)

Since σ_x² = B·E_s and σ_η² = B·N₀ for complex Gaussian processes, Eq. (1.19) can
be rewritten as follows:

C = log₂(1 + E_s/N₀) bits/s/Hz    (1.20)

Example 1.1 What is the capacity of a channel with an SNR of 20 dB?

Solution An SNR of 20 dB corresponds to 10^(20/10) = 100 in linear scale; hence
C = log₂(1 + 100) = 6.65 bits/s/Hz.

The capacity increases as a log function of the SNR, which is a slow increase.
Clearly, increasing the capacity by any significant factor takes an enormous amount
of power.

1.4.4 Channel Capacity of Gilbert–Elliott Channels

The channel capacity of the GE channel is given by Ryan and Lin [6]

C_GE = Σ_{s=1}^{S} P_s C_s    (1.21)

where P_s is the probability of being in state s, and C_s is the capacity of the
channel in state s.

1.4.5 Ergodic Capacity of Fading Channels

A slow flat fading channel with AWGN can be expressed by the following input–
output relationship:

y = hx + η    (1.22)

where x is the transmitted source signal, y denotes the output of the channel, η is the
AWGN, and h is a random channel gain whose envelope has a Rician or Rayleigh PDF.
The fading channel model given in Eq. (1.22) can be seen as a Gaussian channel
with attenuation h. If h is assumed to be an ergodic process, the capacity of the
fading channel is the ergodic capacity computed by the following expression:

C = E[log₂(1 + h²·SNR)] bits/s/Hz    (1.23)

where the expectation E[·] is with respect to the random variable h. If E[h²] = 1,
Eq. (1.23) is always less than the AWGN channel capacity, since E[f(X)] ≤ f(E[X])
for a concave function f according to Jensen's inequality. If h has a Rayleigh PDF,
computation of Eq. (1.23) yields [5]

C = log₂(e) · exp(1/SNR) · expint(1/SNR) bits/s/Hz    (1.24)

where

expint(x) ≜ ∫_x^∞ (e^(−t)/t) dt

which is the capacity of the independent Rayleigh fading channel with no constraint
on the constellation of the input signal. The following MATLAB program computes
the capacity of the AWGN channel and the ergodic capacity of a Rayleigh fading
channel.
Program 1.1: MATLAB program to compute capacity of AWGN channel and
ergodic capacity of Rayleigh fading channel with channel state information (CSI).

% capacity of AWGN channel and ergodic capacity of Rayleigh fading
% channel with channel state information (CSI)
clear all
close all
SNRdB = [-10:0.1:30];
SNRlin = 10.^(SNRdB/10);
C_AWGN = log2 (1 + SNRlin);% AWGN
C_Rayleigh = log2(exp(1)) * exp(1./SNRlin) .* expint(1./SNRlin); % Rayleigh
plot(SNRdB, C_AWGN, '-', SNRdB, C_Rayleigh, '--');
xlabel(' SNR(dB)'), ylabel('{\it Capacity} (bit/s/Hz)');
legend('AWGN', 'Rayleigh fading' );

The SNR versus capacity plot obtained from the above MATLAB program is
shown in Fig. 1.8. From Fig. 1.8, it can be observed that the difference between the
capacities of the AWGN and Rayleigh channels is rather small. This strongly
indicates that coding for fading channels will yield considerable coding gain for
large SNR.
Example 1.2 For large SNRs, verify that the SNR required to obtain the same
ergodic capacity for the AWGN channel and the independent Rayleigh fading
channel differs by 2.5 dB.
Solution The AWGN channel capacity is given by C = log₂(1 + SNR) bits/s/Hz.
For large SNRs, this can be approximated as follows:

C = log₂(SNR) bits/s/Hz

The ergodic capacity in the Rayleigh fading channel is given by

C = log₂(e) · exp(1/SNR) · expint(1/SNR)

[Fig. 1.8 Capacity of AWGN channel and ergodic capacity of independent Rayleigh fading channel versus SNR (dB)]

For large SNRs, the above equation can be rewritten as follows:

C_Rayleigh = log₂(SNR) − 0.8327

Since the capacity of an AWGN channel for large SNRs can be approximated as
log₂(SNR), the above relation can be rewritten as follows:

C_Rayleigh = C_AWGN − 0.8327

Thus, the capacities of the AWGN channel and the Rayleigh fading channel differ
by 0.8327 bits/s/Hz. The difference in dB can be expressed as follows:

10 log₁₀(2^0.8327) = 2.5 dB
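The result of Example 1.2 can be checked numerically against the exact ergodic capacity of Eq. (1.24); the sketch below (illustrative, not one of the book's numbered programs) evaluates the gap at an SNR of 30 dB.

SNR = 10^(30/10);                   % 30 dB in linear scale
C_AWGN = log2(1 + SNR);
C_Ray = log2(exp(1)) * exp(1/SNR) * expint(1/SNR);   % Eq. (1.24)
gap_bits = C_AWGN - C_Ray           % approaches 0.8327 bits/s/Hz
gap_dB = 10*log10(2^gap_bits)       % approaches 2.5 dB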

1.4.6 Outage Probability of a Fading Channel

A mobile user will experience rapid changes in SNR, as fading channels lead to an
oscillating SNR at different locations. As such, the channel can be characterized by
an average SNR, and the BER can be computed using it. If the BER is below a
threshold, then it is not the primary concern for many applications. A more
meaningful measure is the outage probability, which is the percentage of time that an
acceptable quality of communication is not available.
The outage probability of a fading channel is the probability with which an
information outage occurs, i.e., the transmission rate exceeds the capacity.
The outage probability for a Rayleigh fading channel with the same SNR as that
of AWGN is given by Du and Swamy [3]

P_out = 1 − exp((1 − 2^C_out)/SNR)    (1.25)

1.4.7 Outage Capacity of Fading Channels

The outage capacity of a fading channel is the maximum rate supported by the
channel for a given outage probability of the channel. C_out can be expressed as
follows:

C_out = log₂(1 − SNR·log(1 − P_out))    (1.26)

The following MATLAB program computes the outage capacities of the Rayleigh
fading channel for different outage probabilities.
Program 1.2: MATLAB program to compute outage capacities of the Rayleigh
fading channel

% outage capacities of Rayleigh fading channel


clear all
close all
SNRdB = [-10:0.1:30];
SNRlin = 10.^(SNRdB/10);
C_AWGN = log2 (1 + SNRlin );% AWGN
C_Rayleigh = log2(exp(1)) * exp(1./SNRlin) .* expint(1./SNRlin); % Rayleigh
P_out = 25e-2;
C_out_25 = log2(1 - SNRlin * log(1-P_out) );
P_out = 68e-2;
C_out_68 = log2(1 - SNRlin * log(1-P_out) );
plot(SNRdB, C_AWGN, '-', SNRdB, C_Rayleigh, '--', SNRdB, C_out_68, ':', SNRdB, C_out_25, ':');
xlabel('{\itE}_s/{\itN}_0 (dB)'), ylabel('{\itC} (bit/s/Hz)');
legend('AWGN', 'Rayleigh fading', '{\itC}_{out}');

[Fig. 1.9 Outage capacities of Rayleigh fading channel for P_out = 25 % and P_out = 68 %, together with the AWGN capacity and the Rayleigh ergodic capacity]

The outage capacities of the Rayleigh fading channel for different outage
probabilities obtained from the above program are shown in Fig. 1.9.
It is observed from Fig. 1.9 that at P_out = 68 %, C_out is greater than the capacity
of the AWGN channel.

1.4.8 Capacity of Fading Channels with CSI at the Transmitter and Receiver

The ergodic capacity of a Rayleigh fading channel with channel state information
(CSI) at the transmitter and receiver is given by Goldsmith [7]

C = ∫_{γ₀}^∞ B log₂(γ/γ₀) Pdf(γ) dγ bits/s/Hz    (1.27)

where γ is the signal-to-noise ratio (SNR), γ₀ is the cutoff SNR, and Pdf(γ) is the
PDF of γ due to the fading channel.
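Equation (1.27) can be evaluated numerically for Rayleigh fading, where γ is exponentially distributed with mean SNR. The sketch below is illustrative only: it assumes the cutoff γ₀ is chosen to satisfy the average power constraint ∫_{γ₀}^∞ (1/γ₀ − 1/γ) Pdf(γ) dγ = 1 used in [7], and the parameter values are hypothetical.

gbar = 10; B = 1;                   % assumed mean SNR (10 dB), unit bandwidth
pdfg = @(g) exp(-g/gbar)/gbar;      % exponential PDF of the SNR gamma
pc = @(g0) integral(@(g) (1/g0 - 1./g).*pdfg(g), g0, Inf) - 1;
g0 = fzero(pc, 0.5);                % cutoff SNR from the power constraint
C = integral(@(g) B*log2(g/g0).*pdfg(g), g0, Inf)   % Eq. (1.27), bits/s/Hz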
1.5 Channel Coding for Improving the Performance of Communication System

1.5.1 Shannon’s Noisy Channel Coding Theorem

Any channel affected by noise possesses a specific ‘channel capacity’ C, a rate of


conveying information that can never be exceeded without error, but in principle, an
error-correcting code always exists such that information can be transmitted at rates
less than C with an arbitrarily low BER.

1.5.2 Channel Coding Principle

The channel coding principle is to add redundancy to minimize error rate as


illustrated in Fig. 1.10.

1.5.3 Channel Coding Gain

The BER is the probability that a binary digit transmitted from the source is received
erroneously by the user. For a required BER, the difference between the powers
required without and with coding is called the coding gain. A typical plot of
BER versus E_b/N₀ (bit energy to noise spectral density ratio) with and without
channel coding is shown in Fig. 1.11. It can be seen that coding can arrive at the
same value of BER at a lower E_b/N₀ than without coding. Thus, channel
coding yields a coding gain, which is usually measured in dB. Also, the coding gain
usually increases with a decrease in BER.

[Fig. 1.10 Illustration of channel coding principle: the channel encoder adds redundant bits to the source bits, and the channel decoder uses them to correct errors introduced by the channel]



[Fig. 1.11 Illustration of coding gain: BER versus E_b/N_0 for coded and uncoded transmission; the horizontal gap between the curves at BER = 10^−4 is the coding gain]

1.6 Some Application Examples of Channel Coding

1.6.1 Error Correction Coding in GSM

Each speech sample of 20 ms duration is encoded by RPE-LPC as 260 bits, giving
a total bit rate of 13 kbps. The 260 bits are classified into three types based on their
sensitivity, as shown in Fig. 1.12.

[Fig. 1.12 Classification of speech sample bits in GSM: Type Ia (50 bits, protected by a 3-bit parity), Type Ib (132 bits), and Type II (78 bits, unprotected); the 182 class-1 bits plus parity undergo convolutional coding; 260 bits per frame in total]

The 50 bits in Type Ia are the most sensitive to bit errors, the next 132 bits in
Type Ib are moderately sensitive to bit errors, and the other 78 bits in Type II do not
need any protection. The Type Ia bits are encoded using a cyclic encoder. The Type
Ib bits and the encoded Type Ia bits are then encoded using a convolutional encoder.
The Type II bits are finally appended to the convolutional encoder output bits, as in
the sketch below.
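The following MATLAB sketch (illustrative, not one of the book's numbered programs) mimics this chain, assuming GSM full-rate parameters not stated in the text: a 3-bit cyclic parity with generator g(D) = D³ + D + 1 on the Type Ia bits, four tail bits, and the rate-1/2, constraint-length-5 convolutional code with octal generators [23 33]; convenc and poly2trellis are from the Communications Toolbox.

Ia = randi([0 1], 1, 50); Ib = randi([0 1], 1, 132); II = randi([0 1], 1, 78);
g = [1 0 1 1];                       % assumed parity polynomial D^3 + D + 1
reg = [Ia zeros(1, 3)];              % polynomial division modulo 2
for k = 1:50
    if reg(k), reg(k:k+3) = xor(reg(k:k+3), g); end
end
parity = reg(end-2:end);             % 3 parity bits on Type Ia
class1 = [Ia parity Ib zeros(1, 4)]; % 189 bits including 4 tail bits
coded = convenc(class1, poly2trellis(5, [23 33]));   % 378 coded bits
frame = [coded II];                  % 456 bits per 20 ms, i.e., 22.8 kbps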

1.6.2 Error Correction Coding in W-CDMA

The W-CDMA standard has defined two error correction coding schemes, as shown
in Fig. 1.13, for different qualities of service. The W-CDMA standard uses convo-
lutional encoding for voice and MPEG4 applications and turbo encoding for
data applications with longer time delays. Convolutional encoding gives a BER
of up to 10^−3, while turbo encoding yields a BER of up to 10^−6 at the cost of
higher computational complexity. In Fig. 1.13:
CRC = cyclic redundancy check
DAC = digital-to-analog convertor
NCO = numerically controlled oscillator
OVSF = orthogonal variable spreading factor
RRC = root raised cosine

1.6.3 Digital Video Broadcasting Channel Coding

Convolutional codes concatenated with a Reed-Solomon (RS) code are adopted as


physical layer FEC codes in digital video broadcast terrestrial/handheld (DVB-T/H).

[Fig. 1.13 Error correction coding in W-CDMA: data with CRC is block interleaved and passed through either a convolutional encoder or a turbo encoder, then scrambled, spread with an OVSF code, RRC filtered, interpolated, baseband modulated with an NCO, and converted by a DAC]



Turbo codes are used in digital video broadcast satellite services to handhelds/
terrestrial (DVB-SH). Low-density parity-check (LDPC) codes concatenated with a
Bose–Chaudhuri–Hocquenghem (BCH) code are adopted as physical layer FEC in
digital video broadcast second generation satellite (DVB-S2) and digital video
broadcast second generation terrestrial (DVB-T2).

1.6.4 Error Correction Coding in GPS L5 Signal

A block diagram of the simplified GPS satellite L5 signal generator [8] is shown in
Fig. 1.14.

[Fig. 1.14 Error correction coding in GPS L5 signal: 50 bps navigation data is CRC block coded (150, 126) with 24 parity bits, rate-1/2 convolutionally coded with K = 7 to 100 bps, combined with the 10.23 MHz chip-rate SV codes and a 10-bit Neuman–Hoffman code at 1 kcps, and quadriphase modulated (copyright: 1999 ION)]

The navigation data is coded in a CRC block coder with a long block of 150 bits,
or 3 s at 50 bps, and provides a 24-bit parity check word for a low probability of
undetected error. This bit stream is then rate-1/2 coded using a K = 7 convolutional
coder for error correction, with a soft-decision Viterbi decoder in the receiver. This
FEC decoding provides approximately 5 dB of coding gain.

References

1. Gilbert, E.N.: Capacity of a burst noise channel. Bell Syst. Tech. J. 39, 1253–1266 (1960)
2. Elliott, E.O.: Estimates for error rates for codes on burst noise channels. Bell Syst. Tech. J. 42, 1977–1997 (1963)
3. Du, K.L., Swamy, M.N.S.: Wireless Communications: Communication Systems from RF Subsystems to 4G Enabling Technologies. Cambridge University Press, Cambridge (2010)
4. Nakagami, M.: The m-distribution: a general formula of intensity distribution of rapid fading. In: Hoffman, W.C. (ed.) Statistical Methods in Radio Wave Propagation, pp. 3–36. Pergamon Press, Oxford (1960)
5. Kuhn, V.: Wireless Communications over MIMO Channels: Applications to CDMA and Multiple Antenna Systems. Wiley, Chichester (2006)
6. Ryan, W.E., Lin, S.: Channel Codes: Classical and Modern. Cambridge University Press, New York (2009)
7. Goldsmith, A.: Wireless Communications. Cambridge University Press, Cambridge (2005)
8. Spilker, J.J., Van Dierendonck, A.J.: Proposed new civil GPS signal at 1176.45 MHz. In: ION GPS'99, 14–17 Sept 1999, Nashville, TN
Chapter 2
Performance of Digital Communication
Over Fading Channels

In this chapter, the bit error rate (BER) performance of some digital modulation schemes and different wireless communication techniques is evaluated in additive white Gaussian noise (AWGN) and fading channels. Further, the BER performance of different diversity techniques such as selection diversity, EGC, and MRC is also evaluated in the Rayleigh fading channel.

2.1 BER Performance of Different Modulation Schemes


in AWGN, Rayleigh, and Rician Fading Channels

In this section, the effect of fading on different modulation schemes is evaluated. The bit error probability $P_b$, often referred to as BER, is a better performance measure to evaluate a modulation scheme. The BER performance of any digital modulation scheme in a slow flat fading channel can be evaluated by the following integral

$$P_b = \int_0^{\infty} P_{b,\mathrm{AWGN}}(\gamma)\, P_{df}(\gamma)\, d\gamma \qquad (2.1)$$

where $P_{b,\mathrm{AWGN}}(\gamma)$ is the probability of error of a particular modulation scheme in the AWGN channel at a specific signal-to-noise ratio $\gamma = h^2 \frac{E_b}{N_0}$. Here, the random variable $h$ is the channel gain, $\frac{E_b}{N_0}$ is the ratio of bit energy to noise power density in a non-fading AWGN channel, the random variable $h^2$ represents the instantaneous power of the fading channel, and $P_{df}(\gamma)$ is the probability density function of $\gamma$ due to the fading channel.

Electronic supplementary material The online version of this chapter (doi:10.1007/978-81-322-2292-7_2) contains supplementary material, which is available to authorized users.


2.1.1 BER of BPSK Modulation in AWGN Channel

It is known that the BER for M-PSK in the AWGN channel is given by [1]

$$\mathrm{BER}_{M\text{-PSK}} = \frac{2}{\max(\log_2 M,\,2)} \sum_{k=1}^{\max(M/4,\,1)} Q\!\left(\sqrt{\frac{2E_b \log_2 M}{N_0}}\,\sin\frac{(2k-1)\pi}{M}\right) \qquad (2.2)$$

For coherent detection of BPSK, Eq. (2.2) with $M = 2$ reduces to

$$\mathrm{BER}_{\mathrm{BPSK}} = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right) \qquad (2.3)$$

where

$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp\!\left(-\frac{y^2}{2}\right) dy$$

Equation (2.3) can be rewritten as

$$\mathrm{BER}_{\mathrm{BPSK,\,AWGN}} = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right) \qquad (2.4)$$

where erfc is the complementary error function and $\frac{E_b}{N_0}$ is the bit energy-to-noise ratio. The erfc can be related to the Q function as

$$Q(x) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{2}}\right) \qquad (2.5)$$

For large $\frac{E_b}{N_0}$ and $M > 4$, the BER expression can be simplified as

$$\mathrm{BER}_{M\text{-PSK}} = \frac{2}{\log_2 M}\, Q\!\left(\sqrt{\frac{2E_b \log_2 M}{N_0}}\,\sin\frac{\pi}{M}\right) \qquad (2.6)$$

2.1.2 BER of BPSK Modulation in Rayleigh Fading Channel

For Rayleigh fading channels, $h$ is Rayleigh distributed, and $h^2$ has a chi-square distribution with two degrees of freedom. Hence,

$$P_{df}(\gamma) = \frac{1}{\bar{\gamma}} \exp\!\left(-\frac{\gamma}{\bar{\gamma}}\right) \qquad (2.7)$$

where $\bar{\gamma} = \frac{E_b}{N_0} E[h^2]$ is the average signal-to-noise ratio. For $E[h^2] = 1$, $\bar{\gamma}$ corresponds to the average $\frac{E_b}{N_0}$ for the fading channel.

By using Eqs. (2.1) and (2.3), the BER for a slowly Rayleigh fading channel with BPSK modulation can be expressed as [2, 3]

$$\mathrm{BER}_{\mathrm{BPSK,\,Rayleigh}} = \frac{1}{2}\left(1 - \sqrt{\frac{\bar{\gamma}}{1+\bar{\gamma}}}\right) \qquad (2.8)$$

For $E[h^2] = 1$, Eq. (2.8) can be rewritten as

$$\mathrm{BER}_{\mathrm{BPSK,\,Rayleigh}} = \frac{1}{2}\left(1 - \sqrt{\frac{E_b/N_0}{1 + E_b/N_0}}\right) \qquad (2.9)$$
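As a quick check of Eq. (2.1), the integral can also be evaluated numerically and compared with the closed form of Eq. (2.9). The following short MATLAB sketch (not one of the book's printed programs; all variable names are illustrative) does this for a single average Eb/N0:

% Numerical evaluation of Eq. (2.1) for BPSK in Rayleigh fading (sketch)
EbN0 = 10^(10/10);                          % average Eb/N0 of 10 dB, E[h^2] = 1
PbAWGN = @(g) 0.5*erfc(sqrt(g));            % Eq. (2.4) at instantaneous SNR g
pdfRay = @(g) (1/EbN0)*exp(-g/EbN0);        % Eq. (2.7) with gamma_bar = Eb/N0
PbNum = integral(@(g) PbAWGN(g).*pdfRay(g), 0, Inf);  % Eq. (2.1)
PbClosed = 0.5*(1 - sqrt(EbN0/(1+EbN0)));   % Eq. (2.9)
% PbNum and PbClosed both evaluate to about 0.0233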

2.1.3 BER of BPSK Modulation in Rician Fading Channel

The error probability estimates for linear BPSK signaling in Rician fading channels are well documented in [4] and given as

$$P_{b,\,\mathrm{Rician}} = Q_1(a,b) - \frac{1}{2}\left[1 + \sqrt{\frac{d}{d+1}}\right] \exp\!\left(-\frac{a^2+b^2}{2}\right) I_0(ab) \qquad (2.10)$$

where

$$a = \sqrt{\frac{K_r\left(1+2d-2\sqrt{d(d+1)}\right)}{2(d+1)}}, \qquad b = \sqrt{\frac{K_r\left(1+2d+2\sqrt{d(d+1)}\right)}{2(d+1)}}$$

$$K_r = \frac{a^2}{2\sigma^2}, \qquad d = \sigma^2 \frac{E_b}{N_0}$$

The parameter $K_r$ is the Rician factor. $Q_1(a,b)$ is the Marcum Q function, defined [2] as

$$Q_1(a,b) = \exp\!\left(-\frac{a^2+b^2}{2}\right) \sum_{l=0}^{\infty} \left(\frac{a}{b}\right)^{l} I_l(ab), \qquad b > a > 0$$
$$Q_1(a,b) \approx Q(b-a), \qquad b \gg 1 \ \text{and} \ b \gg b-a \qquad (2.11)$$

The following MATLAB program is used to illustrate the BER performance of


BPSK in AWGN, Rayleigh, and Rician fading channels.

Program 2.1 Program for computing the BER for BPSK modulation in AWGN,
Rayleigh, and Rician fading channels
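The printed listing of Program 2.1 is not reproduced here. The following is a minimal sketch of how such a program could look, directly implementing Eqs. (2.4), (2.9), and (2.10) with an assumed Rician factor Kr = 5 and σ² = 1; marcumq requires the Communications System Toolbox:

% Sketch of Program 2.1: theoretical BER of BPSK in AWGN, Rayleigh,
% and Rician fading channels
EbN0dB = 0:35; EbN0 = 10.^(EbN0dB/10);
berAWGN = 0.5*erfc(sqrt(EbN0));                    % Eq. (2.4)
berRay = 0.5*(1 - sqrt(EbN0./(1+EbN0)));           % Eq. (2.9)
Kr = 5; d = EbN0;                                  % d = sigma^2*Eb/N0, sigma^2 = 1
a = sqrt(max(Kr*(1+2*d-2*sqrt(d.*(d+1))),0)./(2*(d+1)));
b = sqrt(Kr*(1+2*d+2*sqrt(d.*(d+1)))./(2*(d+1)));
berRic = marcumq(a,b) ...
    - 0.5*(1+sqrt(d./(d+1))).*exp(-(a.^2+b.^2)/2).*besseli(0,a.*b);  % Eq. (2.10)
semilogy(EbN0dB,berAWGN,EbN0dB,berRay,EbN0dB,berRic); grid on;
xlabel('Eb/No, dB'); ylabel('Bit Error Rate');
legend('AWGN channel','Rayleigh channel','Rician channel');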

The BER performance obtained from the above MATLAB program for BPSK in the AWGN, Rayleigh, and Rician (K = 5) channels is depicted in Fig. 2.1. From Fig. 2.1, for instance, we can see that to obtain a BER of 10^-4 using BPSK, an AWGN channel requires an $\frac{E_b}{N_0}$ of 8.35 dB, a Rician channel requires an $\frac{E_b}{N_0}$ of 20.5 dB, and a Rayleigh channel requires an $\frac{E_b}{N_0}$ of 34 dB. This is clearly indicative of the large performance difference between the AWGN channel and fading channels.

2.1.4 BER Performance of BFSK in AWGN, Rayleigh,


and Rician Fading Channels

In BPSK, the receiver provides a coherent phase reference to demodulate the received signal, whereas certain applications use non-coherent formats that avoid a phase reference. One such non-coherent format is binary frequency-shift keying (BFSK).
The BER for non-coherent BFSK in slow flat fading Rician channel is expressed
as [3]

Fig. 2.1 BER performance of BPSK in AWGN, Rayleigh, and Rician fading channels (BER versus Eb/N0 in dB)
$$P_{b,\,\mathrm{BFSK(Ric)}} = \frac{1+K_r}{2+2K_r+\bar{\gamma}} \exp\!\left(-\frac{K_r\bar{\gamma}}{2+2K_r+\bar{\gamma}}\right) \qquad (2.12)$$

where $K_r$ is the power ratio between the LOS path and the non-LOS paths in the Rician fading channel.

Substituting $K_r = \infty$ in Eq. (2.12), the BER in the AWGN channel for non-coherent BFSK can be expressed as

$$P_{b,\,\mathrm{AWGN}} = \frac{1}{2}\exp\!\left(-\frac{E_b}{2N_0}\right) \qquad (2.13)$$

whereas substitution of $K_r = 0$ leads to the following BER expression for slow flat Rayleigh fading channels using non-coherent BFSK modulation

$$P_{b,\,\mathrm{BFSK(Ray)}} = \frac{1}{2+\bar{\gamma}} \qquad (2.14)$$

The following MATLAB program is used to illustrate the BER performance of


non-coherent BFSK modulation in AWGN, Rayleigh, and Rician fading channels.

Program 2.2 Program for computing the BER for BFSK modulation in AWGN,
Rayleigh and Rician fading channels
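Again, the printed listing is not reproduced; a minimal sketch implementing Eqs. (2.12)-(2.14) with an assumed Rician factor Kr = 5 could look as follows:

% Sketch of Program 2.2: theoretical BER of non-coherent BFSK
EbN0dB = 0:35; g = 10.^(EbN0dB/10);     % average Eb/N0 (linear)
Kr = 5;
berAWGN = 0.5*exp(-g/2);                                % Eq. (2.13)
berRay = 1./(2+g);                                      % Eq. (2.14)
berRic = (1+Kr)./(2+2*Kr+g).*exp(-Kr*g./(2+2*Kr+g));    % Eq. (2.12)
semilogy(EbN0dB,berAWGN,EbN0dB,berRay,EbN0dB,berRic); grid on;
xlabel('Eb/No, dB'); ylabel('Bit Error Rate');
legend('AWGN channel','Rayleigh channel','Rician channel');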

The BER performance obtained from MATLAB Program 2.2 for non-coherent BFSK in the AWGN, Rayleigh, and Rician (K = 5) channels is depicted in Fig. 2.2.

2.1.5 Comparison of BER Performance of BPSK, QPSK,


and 16-QAM in AWGN and Rayleigh Fading Channels

The BER of gray-coded M-QAM in the AWGN channel can be more accurately computed by [5]

$$\mathrm{BER}_{M\text{-QAM,\,AWGN}} \approx \frac{4}{\log_2 M}\left(1-\frac{1}{\sqrt{M}}\right) \sum_{i=1}^{\sqrt{M}/2} Q\!\left((2i-1)\sqrt{\frac{3\log_2 M\, E_b}{(M-1)N_0}}\right) \qquad (2.15)$$

In Rayleigh fading, the average BER for M-QAM is given by [6]

$$\mathrm{BER}_{M\text{-QAM,\,Rayleigh}} \approx \frac{2}{\log_2 M}\left(1-\frac{1}{\sqrt{M}}\right) \sum_{i=1}^{\sqrt{M}/2} \left(1 - \sqrt{\frac{1.5(2i-1)^2\,\bar{\gamma}\log_2 M}{M-1+1.5(2i-1)^2\,\bar{\gamma}\log_2 M}}\right) \qquad (2.16)$$

The following MATLAB program 2.3 is used to compute theoretic BER per-
formance of 4-QAM, 8-QAM, and 16-QAM modulations in AWGN and Rayleigh
fading channels.

Program 2.3 Program for computing theoretic BER for 4-QAM, 8-QAM and 16-
QAM modulations in AWGN and Rayleigh fading channels
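A minimal sketch of such a program, evaluating Eqs. (2.15) and (2.16) for M = 4, 8, and 16, is given below (the printed listing is not reproduced; note that 8-QAM is not a square constellation, so the square-QAM formulas are only an approximation for it):

% Sketch of Program 2.3: theoretical M-QAM BER in AWGN and Rayleigh fading
EbN0dB = 0:35; g = 10.^(EbN0dB/10);
qf = @(x) 0.5*erfc(x/sqrt(2));                 % Q function via erfc
Ms = [4 8 16]; berA = zeros(3,numel(g)); berR = berA;
for m = 1:numel(Ms)
    M = Ms(m); k = log2(M); K = max(floor(sqrt(M)/2),1);
    for i = 1:K
        berA(m,:) = berA(m,:) + (4/k)*(1-1/sqrt(M))*qf((2*i-1)*sqrt(3*k*g/(M-1)));  % Eq. (2.15)
        t = 1.5*(2*i-1)^2*k*g;
        berR(m,:) = berR(m,:) + (2/k)*(1-1/sqrt(M))*(1-sqrt(t./(M-1+t)));           % Eq. (2.16)
    end
end
semilogy(EbN0dB,berA,'--',EbN0dB,berR,'-'); grid on;
xlabel('Eb/No, dB'); ylabel('BER');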

The BER performance obtained from the above program is depicted in Fig. 2.3.

Fig. 2.2 BER performance of BFSK in AWGN, Rayleigh, and Rician fading channels (BER versus Eb/N0 in dB)

2.2 Wireless Communication Techniques

The most known wireless communication techniques are:


Direct sequence code division multiple access (DS-CDMA)
Frequency hopping CDMA (FH-CDMA)
Orthogonal frequency division multiplexing (OFDM)
Multicarrier CDMA (MC-CDMA)

2.2.1 DS-CDMA

In code division multiple access (CDMA) systems, the narrowband message signal is multiplied by a very high bandwidth signal with a high chip rate, i.e., one that accommodates many bits within a single bit of the message signal. The signal with the high chip rate is called the spreading signal. All users in the CDMA system use the same carrier frequency and transmit simultaneously. The spreading signal or pseudo-noise (PN) code must appear random so that it cannot be recognized by unintended users.

Fig. 2.3 BER performances of 4-QAM, 8-QAM, and 16-QAM in AWGN and Rayleigh fading channels (BER versus Eb/N0 in dB)

The intended receiver works with the same PN code used by the corresponding transmitter, and a time-correlation operation detects only the specific desired codeword; all other codewords appear as noise. Each user operates independently with no knowledge of the other users.

The near-far problem occurs due to the sharing of the same channel by many mobile users. At the base station, the demodulator is captured by the strongest received mobile signal, raising the noise floor for the weaker signals and decreasing the probability of weak signal reception. In most CDMA applications, power control is used to combat the near-far problem. In a cellular system, each base station provides power control to assure the same signal level at the base station receiver from each mobile within its coverage area; this prevents a nearby user from overpowering the base station receiver and drowning out the signals of faraway users.
In CDMA, the actual data are mixed with the output of a PN coder to perform
the scrambling process. The scrambled data obtained after scrambling process are
then modulated using BPSK or QPSK modulator as shown in Fig. 2.4. The BPSK
or QPSK modulated data are then transmitted.

2.2.1.1 BER Performance of DS-CDMA in AWGN and Rayleigh


Fading Channels

Let us consider a single cell with K users, each user having a PN sequence length of N chips per message symbol. The received signal consists of the sum of the desired user's signal, the K − 1 undesired users' transmitted signals, and additive noise. Approximating the total multiple access interference caused by the K − 1 users as a Gaussian random variable, the BER for DS-CDMA in the AWGN channel is given [3] by

$$P_{b,\,\mathrm{CDMA(AWGN)}} = Q\!\left(\frac{1}{\sqrt{\dfrac{K-1}{3N} + \dfrac{N_0}{2E_b}}}\right) \qquad (2.17)$$

The BER for DS-CDMA in the Rayleigh fading channel can be expressed [7] as

$$P_{b,\,\mathrm{CDMA(Ray)}} = \frac{1}{2}\left(1 - \frac{1}{\sqrt{1 + \dfrac{N_0}{2E_b\sigma^2} + \dfrac{K-1}{3N}}}\right) \qquad (2.18)$$

where $\sigma^2$ is the variance of the Rayleigh fading random variable.


The following MATLAB program is used to compute theoretic BER of DS-
CDMA in AWGN and Rayleigh fading channels.

Program 2.4 Program to compute BER performance of DS-CDMA in AWGN, and


Rayleigh fading channels
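A minimal sketch of such a program (the printed listing is not reproduced; parameters follow the text: N = 31, σ² = 1, and an assumed Eb/N0 of 10 dB):

% Sketch of Program 2.4: DS-CDMA BER versus number of users
K = 1:30; N = 31; sigma2 = 1; EbN0 = 10^(10/10);
qf = @(x) 0.5*erfc(x/sqrt(2));
berAWGN = qf(1./sqrt((K-1)/(3*N) + 1/(2*EbN0)));                   % Eq. (2.17)
berRay = 0.5*(1 - 1./sqrt(1 + 1/(2*EbN0*sigma2) + (K-1)/(3*N)));   % Eq. (2.18)
semilogy(K,berAWGN,K,berRay); grid on;
xlabel('Number of users'); ylabel('Bit Error Rate');
legend('AWGN channel','Rayleigh channel');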

Fig. 2.4 Scrambler system using BPSK modulation (data bit stream and PN generator output combined in a mod-2 adder, then BPSK-modulated with a local oscillator to produce the RF output)

Fig. 2.5 BER performance of DS-CDMA in AWGN and Rayleigh fading channels for N = 31, σ² = 1, and Eb/N0 = 10 dB (BER versus number of users)

The BER performance from the above program for DS-CDMA in the AWGN and Rayleigh channels for N = 31, σ² = 1, and Eb/N0 = 20 dB is depicted in Fig. 2.5. From Fig. 2.5, it is observed that the BER performance of DS-CDMA is better in the AWGN channel as compared to the Rayleigh fading channel. Further, with an increased number of users, the BER performance degrades in both channels.

2.2.2 FH-CDMA

In FH-CDMA, each data bit is divided over a number of frequency-hop channels


(carrier frequencies). At each frequency-hop channel, a complete PN sequence of
length N is combined with the data signal. Applying fast frequency hopping (FFH)
requires a wider bandwidth than slow frequency hopping (SFH). The difference
between the traditional slow and FFH schemes can be visualized as shown in
Fig. 2.6. A slow hopped system has one or more information symbols per hop or
slot. It is suitable for high-capacity wireless communications. A fast hopped system
has the hopping rate greater than the data rate. During one information symbol, the
system transmits over many bands with short duration. It is more prevalent in
military communications.
In FH-CDMA, modulation by some kind of the phase-shift keying is quite
susceptible to channel distortions due to several frequency hops in each data bit.
Hence, an FSK modulation scheme is to be chosen for FH-CDMA.
The hop set, dwell time, and hop rate with respect to FH-CDMA are defined as follows:
Hop set It is the number of different frequencies used by the system.
Dwell time It is defined as the length of time that the system spent on one
frequency for transmission.
Hop rate It is the rate at which the system changes from one frequency to
another.

Fig. 2.6 Slow and fast hopping (carrier alternates among frequencies f1, f2, f3 over time; slow hopping: 3 bits per hop; fast hopping: 3 hops per bit)

2.2.2.1 BER Expression for Synchronous SFH-CDMA

Consider a SFH-CDMA channel with K active users and q (frequency) slots. The hit probability is the probability that a number of interfering users are transmitting on the same frequency-hop channel as the reference user. This probability will be referred to as $P_h(K)$, where K is the total number of active users.

The probability of a hit from a given user is given by [8]

$$P = \frac{1}{q}\left(1 + \frac{1}{N_b}\right) \qquad (2.19)$$

where $N_b$ is the number of bits per hop and q stands for the number of hops. The primary interest for our analysis is the probability $P_h$ of one or more hits from the $K-1$ users, given by

$$P_h = 1 - (1-P)^{K-1} \qquad (2.20)$$

By substituting the value of P from Eq. (2.19) in Eq. (2.20), we get the probability of a hit from $K-1$ users as

$$P_h(K) = 1 - \left[1 - \frac{1}{q}\left(1 + \frac{1}{N_b}\right)\right]^{K-1} \qquad (2.21)$$

If it is assumed that all users hop their carrier frequencies synchronously, the probability of hits is given by

$$P_h = 1 - \left(1 - \frac{1}{q}\right)^{K-1} \qquad (2.22)$$

For large q,

$$P_h(K) = 1 - \left(1 - \frac{1}{q}\right)^{K-1} \approx \frac{K-1}{q} \qquad (2.23)$$

The probability of bit error for synchronous MFSK SFH-CDMA when K active users are present in the system can be found by [9]

$$P_{\mathrm{SFH}}(K) = \sum_{k=1}^{K-1} \binom{K-1}{k} P_h^k (1-P_h)^{K-1-k}\, P_{\mathrm{MFSK}}(K) \qquad (2.24)$$

where $P_{\mathrm{MFSK}}(K)$ denotes the probability of error when the reference user is hit by all other active users. Equation (2.24) is an upper bound on the bit error probability of the SFH-CDMA system. $P_{\mathrm{MFSK}}(K)$ for the AWGN and flat fading channels can be expressed as [10]

$$P_{\mathrm{MFSK}}(K) = \begin{cases} \displaystyle\sum_{i=1}^{M-1} \frac{(-1)^{i+1}}{i+1}\binom{M-1}{i}\exp\!\left(-\frac{i}{i+1}\frac{E_b}{N_0}\right) & \text{AWGN} \\[2ex] \displaystyle\sum_{i=1}^{M-1} \frac{(-1)^{i+1}}{1+i+i\,E_b/N_0}\binom{M-1}{i} & \text{Rayleigh fading} \end{cases} \qquad (2.25)$$

The following MATLAB program computes theoretic BER of SFH-CDMA in


AWGN and Rayleigh fading channels.

Program 2.5 Program to compute BER performance of SFH-CDMA in AWGN,


and Rayleigh fading channels
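A minimal sketch (printed listing not reproduced) using the upper bound of Eq. (2.24) with the synchronous hit probability of Eq. (2.22) and the M = 2 terms of Eq. (2.25):

% Sketch of Program 2.5: SFH-CDMA BER versus number of users (BFSK)
K = 2:30; q = 32; EbN0 = 10^(10/10);        % q = 32 slots, Eb/N0 = 10 dB
Ph = 1 - (1 - 1/q).^(K-1);                  % Eq. (2.22): one or more hits
pAWGN = 0.5*exp(-EbN0/2);                   % Eq. (2.25), M = 2, AWGN
pRay = 1/(2 + EbN0);                        % Eq. (2.25), M = 2, Rayleigh
berAWGN = Ph*pAWGN;                         % upper bound of Eq. (2.24)
berRay = Ph*pRay;
semilogy(K,berAWGN,K,berRay); grid on;
xlabel('Number of users'); ylabel('Bit Error Rate');
legend('AWGN','Rayleigh');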

The BER performance from the above program for SFH-CDMA in the AWGN and Rayleigh channels with q = 32 and M = 2 (BFSK) at Eb/N0 = 10 dB is depicted in Fig. 2.7.

Fig. 2.7 BER performance of SFH-CDMA in AWGN and Rayleigh fading channels with q = 32 and M = 2 (BFSK) at Eb/N0 = 10 dB (BER versus number of users)

Fig. 2.8 Schematic block diagram of OFDM transmitter (S/P → IFFT → P/S → insert cyclic prefix → DAC → up-converter)

2.2.3 OFDM

The block diagram of OFDM transmitter is shown in Fig. 2.8. In OFDM, the input
data are serial-to-parallel converted (the S/P block). Then, the inverse fast Fourier
transform (IFFT) is performed on the N parallel outputs of the S/P block to create an
OFDM symbol.
The complex numbers at the output of the IFFT block are parallel-to-serial converted (P/S). Then, the cyclic prefix is inserted in order to combat the intersymbol interference (ISI) and intercarrier interference (ICI) caused by the multipath channel. To create the cyclic prefix, the complex vector at the end of the symbol duration T is copied and appended to the front of the signal block, as shown in Fig. 2.9. The schematic block diagram of the OFDM receiver is shown in Fig. 2.10. It is the exact inverse of the transmitter shown in Fig. 2.8.

Fig. 2.9 Inserting cyclic prefix

Fig. 2.10 Schematic block diagram of OFDM receiver (down-converter → ADC → remove cyclic prefix → S/P → FFT → P/S → output data)
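A minimal MATLAB sketch of cyclic prefix insertion and removal (illustrative only; N = 64 subcarriers, a 16-sample prefix, and QPSK data are assumed):

% Sketch: OFDM symbol generation with cyclic prefix
N = 64; Ncp = 16;
bits = randi([0 1],2*N,1);
data = (1-2*bits(1:2:end) + 1j*(1-2*bits(2:2:end)))/sqrt(2);  % QPSK mapping
x = ifft(data,N);                 % IFFT creates the time-domain OFDM symbol
xcp = [x(end-Ncp+1:end); x];      % copy the last Ncp samples to the front
% Receiver (back to back): remove the cyclic prefix and take the FFT
y = xcp(Ncp+1:end);
dataHat = fft(y,N);               % recovers the transmitted symbols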

2.2.4 MC-CDMA

MC-CDMA is a combination of OFDM and CDMA having the benefits of both


OFDM and CDMA. In MC-CDMA, frequency diversity is achieved by modulating
symbols on many subcarriers instead of modulating on one carrier like in CDMA.
In MC-CDMA, the same symbol is transmitted through many subcarriers in par-
allel, whereas in OFDM, different symbols are transmitted on different subcarriers.
The block diagram of the MC-CDMA system transmitter is shown in Fig. 2.11.
The block diagram of the MC-CDMA system receiver is shown in Fig. 2.12. In the

Fig. 2.11 Block diagram of MC-CDMA transmitter (spreading followed by an OFDM modulator)

Fig. 2.12 Block diagram of MC-CDMA receiver (OFDM demodulator followed by despreading and combining)

receiver, the cyclic prefix is removed and FFT is performed to obtain the signals in the
frequency domain.

2.2.4.1 BER Expression for Synchronous MC-CDMA

Assuming a synchronous MC-CDMA system with K users, $N_c$ subcarriers, and binary phase-shift keying (BPSK) modulation, the BER for MC-CDMA in a slowly varying Rayleigh fading channel can be calculated using the residue method by [11]

$$P_{\mathrm{MC\text{-}CDMA,\,Rayleigh}}(K) = \frac{(2c)^{N_c}}{[(N_c-1)!]^2} \sum_{k=0}^{N_c-1} \binom{N_c-1}{k} (N_c-1+k)!\,(N_c-1-k)!\,(c+d)^{-(N_c-k)}(2d)^{-(N_c+k)} \qquad (2.26)$$

where K stands for the number of users, $N_c$ denotes the number of subcarriers, and the parameters c and d are defined by

Fig. 2.13 BER performance of DS-CDMA, SFH-CDMA, and MC-CDMA (64 subcarriers) in Rayleigh fading channels at Eb/N0 = 10 dB (BER versus number of users)

$$\frac{1}{2c} = \frac{N_c}{4E_b/N_0} + \frac{k+1}{4}, \qquad d = \sqrt{c^2 + 2c} \qquad (2.27)$$

A theoretical BER performance comparison of DS-CDMA, SFH-CDMA, and MC-CDMA in Rayleigh fading channels at Eb/N0 = 10 dB is shown in Fig. 2.13. From Fig. 2.13, it is observed that MC-CDMA outperforms both DS-CDMA and SFH-CDMA.

2.3 Diversity Reception

Two channels with different frequencies, polarizations, or physical locations experience fading independently of each other. By combining two or more such channels, fading can be reduced. This is called diversity.

On a fading channel, the SNR at the receiver is a random variable, and the idea is to transmit the same signal through r separate fading channels. These are chosen so as to provide the receiver with r independent (or close-to-independent) replicas of the same signal, giving rise to independent SNRs. If r is large enough, then at any time instant, there is a high probability that at least one of the signals received from the r "diversity branches" is not affected by a deep fade and hence that its SNR is above a critical threshold. By suitably combining the received signals, the fading effect will be mitigated (Fig. 2.14).

Fig. 2.14 Diversity and combining (branch signals y1, …, yr combined into a single output ŷ)

Many techniques have been advocated for generating the independent channels on which the diversity principle is based, and several methods are known for combining the signals y1, …, yr obtained at their outputs into a single channel ŷ.
Among the categorized techniques, the most important ones are as follows:
1. Space diversity
2. Polarization diversity
3. Frequency diversity
4. Time diversity
5. Cooperative diversity
Space diversity: To obtain sufficiently low correlation, the spacing between the r separate antennas should be wide with respect to the coherence distance while receiving the signal. It does not require any extra spectrum occupancy and can be easily implemented.
Polarization diversity: Over a wireless channel, multipath components polarized either horizontally or vertically have different propagation characteristics. Diversity is provided when the receiver uses two differently polarized antennas. Alternatively, two cross-polarized antennas with no spacing between them also provide diversity. Cross-polarized antennas are preferred since they are able to double the antenna numbers using half the spacing used for co-polarized antennas. Polarization diversity can achieve more gain than space diversity alone in reasonable scattering areas, and hence, it is deployed in more and more BSs.
Frequency diversity: In order to obtain frequency diversity, the same signal
over different carrier frequencies should be sent whose separation must be larger
than the coherence bandwidth of the channel.
Time diversity: This is obtained by transmitting the same signal in different time
slots separated by a longer interval than the coherence time of the channel.
Cooperative diversity: This is obtained by sharing of resources by users or
nodes in a wireless network and transmits cooperatively. The users or nodes act like
an antenna array and provide diversity. This type of diversity can be achieved by
combining the signals transmitted from the direct and relay links.

2.3.1 Receive Diversity with N Receive Antennas in AWGN

The received signal on the ith antenna can be expressed as

$$y_i = h_i x + n_i \qquad (2.28)$$

where $y_i$ is the symbol received on the ith receive antenna, $h_i$ is the channel gain on the ith receive antenna, x is the input symbol transmitted, and $n_i$ is the noise on the ith receive antenna.

The received signal can be written in matrix form as

$$y = hx + n$$

where $y = [y_1\, y_2 \ldots y_N]^T$ is the vector of received symbols from all the receive antennas, $h = [h_1\, h_2 \ldots h_N]^T$ is the channel vector on all the receive antennas, x is the transmitted symbol, and $n = [n_1\, n_2 \ldots n_N]^T$ is the AWGN on all the receive antennas.

The effective $\frac{E_b}{N_0}$ with N receive antennas is N times the $\frac{E_b}{N_0}$ for a single antenna. Thus, the effective $\frac{E_b}{N_0}$ for N antennas in AWGN can be expressed as

$$\left(\frac{E_b}{N_0}\right)_{\mathrm{eff},N} = \frac{N E_b}{N_0} \qquad (2.29)$$

So the BER for N receive antennas is given by

$$P_b = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{N E_b}{N_0}}\right) \qquad (2.30)$$

2.4 Diversity Combining Techniques

The three main combining techniques that can be used in conjunction with any of
the diversity schemes are as follows:
1. Selection combining
2. Equal gain combining (EGC)
3. Maximal ratio combining

2.4.1 Selection Diversity

In this combiner, the receiver selects the antenna with the highest received signal
power and ignores observations from the other antennas.

2.4.1.1 Expression for BER with Selection Diversity

Consider N independent Rayleigh fading channels, each channel being a diversity branch. It is assumed that each branch has the same average signal-to-noise ratio

$$\bar{\gamma} = \frac{E_b}{N_0} E[h^2] \qquad (2.31)$$

The outage probability is the probability that the bit energy-to-noise ratio falls below a threshold γ. The probability of outage on the ith receive antenna can be expressed as

$$P_{\mathrm{out},\gamma_i} = P[\gamma_i < \gamma] = \int_0^{\gamma} \frac{1}{\bar{\gamma}}\, e^{-\gamma_i/\bar{\gamma}}\, d\gamma_i = 1 - e^{-\gamma/\bar{\gamma}} \qquad (2.32)$$

The joint probability is the product of the individual probabilities if the channel on each antenna is assumed to be independent; thus, the joint probability with N receive antennas becomes

$$P_{\mathrm{out}} = P[\gamma_1 < \gamma]\, P[\gamma_2 < \gamma] \cdots P[\gamma_N < \gamma] = \left[1 - e^{-\gamma/\bar{\gamma}}\right]^N \qquad (2.33)$$

where $\gamma_1, \gamma_2, \ldots, \gamma_N$ are the instantaneous bit energy-to-noise ratios of the 1st, 2nd, and so on till the Nth receive antenna.

Equation (2.33) is in fact the cumulative distribution function (CDF) of γ. Then, the probability density function (PDF) is given by the derivative of the CDF as

$$P(\gamma) = \frac{dP_{\mathrm{out}}}{d\gamma} = \frac{N}{\bar{\gamma}}\, e^{-\gamma/\bar{\gamma}} \left[1 - e^{-\gamma/\bar{\gamma}}\right]^{N-1} \qquad (2.34)$$

Substituting Eq. (2.34) in Eq. (2.1), the BER for selection diversity can be expressed as

$$\mathrm{BER}_{\mathrm{SEL}} = \int_0^{\infty} \frac{1}{2}\,\mathrm{erfc}(\sqrt{\gamma})\, \frac{N}{\bar{\gamma}}\, e^{-\gamma/\bar{\gamma}} \left[1 - e^{-\gamma/\bar{\gamma}}\right]^{N-1} d\gamma \qquad (2.35)$$
0

Assuming $E[h^2] = 1$, the above expression can be rewritten as [12]

$$\mathrm{BER}_{\mathrm{SEL}} = \frac{1}{2}\sum_{k=0}^{N} (-1)^k \binom{N}{k} \left(1 + \frac{k}{E_b/N_0}\right)^{-1/2} \qquad (2.36)$$

2.4.2 Equal Gain Combining (EGC)

In EGC, equalization is performed on the ith receive antenna at the receiver by dividing the received symbol $y_i$ by the a priori known phase $\theta_i$ of the channel $h_i$, where $|h_i|e^{j\theta_i}$ represents the channel $h_i$ in polar form. The decoded symbol is obtained by

$$\hat{y} = \sum_i \frac{y_i}{e^{j\theta_i}} = \sum_i \frac{|h_i|e^{j\theta_i}x + n_i}{e^{j\theta_i}} = \sum_i \left(|h_i|x + \tilde{n}_i\right) \qquad (2.37)$$

where $\hat{y}$ is the sum of the phase-compensated signals from all the receive antennas and $\tilde{n}_i = n_i/e^{j\theta_i}$ is the additive noise scaled by the phase of the channel coefficient.

2.4.2.1 Expression for BER with Equal Gain Combining

The BER for EGC with two receive antennas can be expressed for BPSK and BFSK modulations as [13]

$$\mathrm{BER}_{\mathrm{EGC,\,BPSK}} = \frac{1}{2}\left[1 - \frac{\sqrt{\frac{E_b}{N_0}\left(\frac{E_b}{N_0}+2\right)}}{\frac{E_b}{N_0}+1}\right] \qquad (2.38)$$

$$\mathrm{BER}_{\mathrm{EGC,\,BFSK}} = \frac{1}{2}\left[1 - \frac{\sqrt{\frac{E_b}{N_0}\left(\frac{E_b}{N_0}+4\right)}}{\frac{E_b}{N_0}+2}\right] \qquad (2.39)$$

2.4.3 Maximum Ratio Combining (MRC)

2.4.3.1 Expression for BER with Maximal Ratio Combining (MRC)

For channel $h_i$, the instantaneous bit energy-to-noise ratio at the ith receive antenna is given by

$$\gamma_i = \frac{|h_i|^2 E_b}{N_0} \qquad (2.40)$$

If $h_i$ is a Rayleigh distributed random variable, then $h_i^2$ is a chi-squared random variable with two degrees of freedom. Hence, the pdf of $\gamma_i$ can be expressed as

$$P_{df}(\gamma_i) = \frac{1}{E_b/N_0}\, e^{-\gamma_i/(E_b/N_0)} \qquad (2.41)$$

Since the effective bit energy-to-noise ratio γ is the sum of N such random variables, the pdf of γ is a chi-square random variable with 2N degrees of freedom. Thus, the pdf of γ is given by

$$P_{df}(\gamma) = \frac{1}{(N-1)!\,(E_b/N_0)^N}\, \gamma^{N-1} e^{-\gamma/(E_b/N_0)}, \qquad \gamma \ge 0 \qquad (2.42)$$

Substituting Eq. (2.42) in Eq. (2.1), the BER for maximal ratio combining can be expressed as

$$\mathrm{BER}_{\mathrm{MRC}} = \int_0^{\infty} \frac{1}{2}\,\mathrm{erfc}(\sqrt{\gamma})\, P_{df}(\gamma)\, d\gamma = \int_0^{\infty} \frac{1}{2}\,\mathrm{erfc}(\sqrt{\gamma})\, \frac{\gamma^{N-1} e^{-\gamma/(E_b/N_0)}}{(N-1)!\,(E_b/N_0)^N}\, d\gamma \qquad (2.43)$$

The above expression can be rewritten [12] as

$$\mathrm{BER}_{\mathrm{MRC}} = P^N \sum_{k=0}^{N-1} \binom{N-1+k}{k} (1-P)^k \qquad (2.44)$$

where

$$P = \frac{1}{2} - \frac{1}{2}\left(1 + \frac{1}{E_b/N_0}\right)^{-1/2}$$

The following MATLAB program computes the theoretic BER for BPSK
modulation in Rayleigh fading channels with selective diversity, EGC, and MRC.

Program 2.6 Program for computing the theoretic BER for BPSK modulation in a
Rayleigh fading channel with selection diversity, EGC and MRC
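A minimal sketch of such a program (printed listing not reproduced), evaluating Eqs. (2.9), (2.36), (2.38), and (2.44) for N = 2 receive antennas:

% Sketch of Program 2.6: BPSK in Rayleigh fading with two-branch diversity
EbN0dB = 0:20; g = 10.^(EbN0dB/10); N = 2;
berRay = 0.5*(1 - sqrt(g./(1+g)));                 % no diversity, Eq. (2.9)
berSel = zeros(size(g));
for k = 0:N
    berSel = berSel + 0.5*(-1)^k*nchoosek(N,k)*(1 + k./g).^(-0.5);  % Eq. (2.36)
end
berEGC = 0.5*(1 - sqrt(g.*(g+2))./(g+1));          % Eq. (2.38)
P = 0.5 - 0.5*(1 + 1./g).^(-0.5);                  % Eq. (2.44)
berMRC = zeros(size(g));
for k = 0:N-1
    berMRC = berMRC + nchoosek(N-1+k,k)*P.^N.*(1-P).^k;
end
semilogy(EbN0dB,berRay,EbN0dB,berSel,EbN0dB,berEGC,EbN0dB,berMRC); grid on;
xlabel('Eb/No, dB'); ylabel('BER');
legend('Rayleigh','selection (nRx=2)','EGC (nRx=2)','MRC (nRx=2)');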

The BER performance from the above program with two receive antennas is
shown in Fig. 2.15. From Fig. 2.15, it is observed that the BER with MRC is better
than selective diversity and EGC and outperforms the single antenna case.
Example 2.1 What is the BER for $E_b/N_0 = 8$ dB at the receiver output in an AWGN channel if coherently demodulated BPSK modulation is used and if no error control coding is used?

Solution For BPSK modulation in the AWGN channel, the BER is given by

$$\mathrm{BER}_{\mathrm{BPSK,\,AWGN}} = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)$$

$$\frac{E_b}{N_0} = 10^{(8/10)} = 6.3096$$

Thus,

$$\mathrm{BER}_{\mathrm{BPSK,\,AWGN}} = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{6.3096}\right) = 0.0001909.$$

Example 2.2 Using the system in Example 2.1, compute the coding gain that will be necessary if the BER is to be improved to $10^{-6}$.

Fig. 2.15 Theoretic BER for BPSK modulation in a Rayleigh fading channel with selection diversity, EGC, and MRC (two receive antennas; BER versus Eb/N0 in dB)

Solution Here,

$$0.000001 = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)$$

$$\sqrt{\frac{E_b}{N_0}} = \mathrm{erfcinv}(0.000002) = 3.3612$$

$$\frac{E_b}{N_0} = (3.3612)^2 = 11.29; \qquad \frac{E_b}{N_0}\,(\mathrm{dB}) = 10\log_{10}(11.29) = 10.5269$$

Hence, the necessary coding gain = 10.5269 − 8.0 = 2.5269 dB.


Example 2.3 Determine the coding gain required to maintain a BER of $10^{-4}$ when the received $E_b/N_0$ is fixed and the modulation format is changed from BPSK to BFSK.

Solution For BPSK in the AWGN channel,

$$0.0001 = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)$$

$$\sqrt{\frac{E_b}{N_0}} = \mathrm{erfcinv}(0.0002) = 2.6297$$

$$\frac{E_b}{N_0} = (2.6297)^2 = 6.9155; \qquad \frac{E_b}{N_0}\,(\mathrm{dB}) = 10\log_{10}(6.9155) = 8.3982$$

For BFSK in the AWGN channel:

$$\mathrm{BER}_{\mathrm{BFSK,\,AWGN}} = 0.0001 = \frac{1}{2}\exp\!\left(-\frac{E_b}{2N_0}\right)$$

$$\frac{E_b}{N_0} = -2\ln(0.0002) = 17.0344; \qquad \frac{E_b}{N_0}\,(\mathrm{dB}) = 10\log_{10}(17.0344) = 12.3133$$

Hence, the necessary coding gain = 12.3133 − 8.3982 = 3.9151 dB.


Example 2.4 Determine the coding gain required to maintain a BER of $10^{-3}$ when the received $E_b/N_0$ remains fixed and the modulation format is changed from BPSK to 8-PSK in the AWGN channel.
Solution For BPSK in the AWGN channel,

$$0.001 = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)$$

$$\sqrt{\frac{E_b}{N_0}} = \mathrm{erfcinv}(0.002) = 2.1851$$

$$\frac{E_b}{N_0} = (2.1851)^2 = 4.7748; \qquad \frac{E_b}{N_0}\,(\mathrm{dB}) = 10\log_{10}(4.7748) = 6.7895$$

From Eq. (2.6), for 8-PSK in the AWGN channel,

$$\mathrm{BER}_{8\text{-PSK}} = \frac{2}{3}\, Q\!\left(\sin\!\left(\frac{\pi}{8}\right)\sqrt{\frac{6E_b}{N_0}}\right)$$

$$0.001 = \frac{2}{3}\, Q\!\left(\sin\!\left(\frac{\pi}{8}\right)\sqrt{\frac{6E_b}{N_0}}\right) = \frac{2}{3}\, Q\!\left(0.3827\sqrt{\frac{6E_b}{N_0}}\right)$$

Since

$$Q(x) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{2}}\right)$$

$$\frac{0.003}{2} = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{0.3827}{\sqrt{2}}\sqrt{\frac{6E_b}{N_0}}\right)$$

$$0.003 = \mathrm{erfc}\!\left(0.6629\sqrt{\frac{E_b}{N_0}}\right)$$

$$\sqrt{\frac{E_b}{N_0}} = \mathrm{erfcinv}(0.003)\,\frac{1}{0.6629} = \frac{2.0985}{0.6629} = 3.1656$$

$$\frac{E_b}{N_0} = (3.1656)^2 = 10.0210; \qquad \frac{E_b}{N_0}\,(\mathrm{dB}) = 10\log_{10}(10.0210) = 10.0091$$

Hence, the necessary coding gain = 10.0091 − 6.7895 = 3.2196 dB.
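The arithmetic in Examples 2.1-2.4 is easy to verify in MATLAB; for instance, the coding gain of Example 2.4 (an illustrative check, not one of the book's printed programs):

% Verification of Example 2.4
EbN0_bpsk = erfcinv(2*1e-3)^2;                  % BPSK: 1e-3 = 0.5*erfc(sqrt(Eb/N0))
rootEbN0 = erfcinv(3*1e-3)/(sin(pi/8)*sqrt(3)); % 8-PSK: 1e-3 = (2/3)Q(sin(pi/8)*sqrt(6Eb/N0))
EbN0_8psk = rootEbN0^2;
gain = 10*log10(EbN0_8psk) - 10*log10(EbN0_bpsk)   % about 3.22 dB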

2.5 Problems

1. An AWGN channel requires $E_b/N_0 = 9.6$ dB to achieve a BER of $10^{-5}$ using BPSK modulation. Determine the coding gain required to achieve a BER of $10^{-5}$ in a Rayleigh fading channel using BPSK.
2. Using the system in Problem 1, determine the coding gain required to maintain a BER of $10^{-5}$ in the Rayleigh fading channel when the modulation format is changed from BPSK to BFSK.
3. Determine the necessary $E_b/N_0$ for a Rayleigh fading channel with an average BER of $10^{-5}$ in order to detect (i) BPSK and (ii) BFSK.
4. Determine the necessary $E_b/N_0$ in order to detect BFSK with an average BER of $10^{-4}$ for a Rician fading channel with a Rician factor of 5 dB.
5. Determine the probability of error as a function of $E_b/N_0$ for 4-QAM. Plot $E_b/N_0$ versus probability of error and compare the results with BPSK and non-coherent BFSK on the same plot.
6. Obtain approximations to the outage capacity in a Rayleigh fading channel: (i) at low SNRs and (ii) at high SNRs.
7. Obtain an approximation to the outage probability for the parallel channel with M Rayleigh branches.
8. Assume three-branch MRC diversity in a Rayleigh fading channel. For an average SNR of 20 dB, determine the outage probability that the SNR is below 10 dB.

2.6 MATLAB Exercises

1. Write a MATLAB program to simulate the BER versus number of users performance of SFH-CDMA in AWGN and Rayleigh fading channels at different $E_b/N_0$.
2. Write a MATLAB program to simulate the performance of OFDM in AWGN and Rayleigh fading channels.
3. Write a MATLAB program to simulate the BER versus number of users performance of MC-CDMA in AWGN and Rayleigh fading channels for different numbers of subcarriers at different $E_b/N_0$.
4. Write a MATLAB program to simulate the performance of selection diversity, equal gain combining, and maximal ratio combining, and compare the performance with the theoretical results.

References

1. Lu, J., Letaief, K.B., Chuang, J.C.-I., Liou, M.L.: M-PSK and M-QAM BER computation using signal-space concepts. IEEE Trans. Commun. 47(2), 181–184 (1999)
2. Proakis, J.G.: Digital Communications, 3rd edn. McGraw-Hill, New York (1995)
3. Rappaport, T.S.: Wireless Communications: Principles and Practice. IEEE Press, Piscataway (1996)
4. Lindsey, W.C.: Error probabilities for Rician fading multichannel reception of binary and n-ary signals. IEEE Trans. Inf. Theory IT-10(4), 333–350 (1964)
5. Lu, J., Letaief, K.B., Chuang, J.C.-I., Liou, M.L.: M-PSK and M-QAM BER computation using signal-space concepts. IEEE Trans. Commun. 47(2), 181–184 (1999)
6. Simon, M.K., Alouini, M.-S.: Digital Communication over Fading Channels: A Unified Approach to Performance Analysis. Wiley, New York (2000)
7. Cheng, J., Beaulieu, N.C.: Accurate DS-CDMA bit-error probability calculation in Rayleigh fading. IEEE Trans. Wireless Commun. 1(1), 3 (2002)
8. Geraniotis, E.A., Pursley, M.B.: Error probabilities for slow-frequency-hopped spread-spectrum multiple-access communications over fading channels. IEEE Trans. Commun. COM-30(5), 996 (1982)
9. Yang, L.L., Hanzo, L.: Overlapping M-ary frequency shift keying spread-spectrum multiple-access systems using random signature sequences. IEEE Trans. Veh. Technol. 48(6), 1984 (1999)
10. Goh, J.G., Maric, S.V.: The capacities of frequency-hopped code-division multiple-access channels. IEEE Trans. Inf. Theory 44(3), 1204–1211 (1998)
11. Shi, Q., Latva-aho, M.: Exact bit error rate calculations for synchronous MC-CDMA over a Rayleigh fading channel. IEEE Commun. Lett. 6(7), 276–278 (2002)
12. Barry, J.R., Lee, E.A., Messerschmitt, D.G.: Digital Communication. Kluwer Academic Publishers, Massachusetts (2004)
13. Zhang, Q.T.: Probability of error for equal-gain combiners over Rayleigh channels: some closed-form solutions. IEEE Trans. Commun. 45(3), 270–273 (1997)
Chapter 3
Galois Field Theory

A small portion of linear algebra and combinatorics is used in the development of Hamming codes, the first-generation error control codes. The design of error control codes such as BCH codes and Reed-Solomon codes relies on the structures of Galois fields and polynomials over Galois fields. This chapter briefly presents the algebraic tools needed for an understanding of the Galois field theory used in error-correcting code design.

3.1 Set

A set is defined as an arbitrary collection of objects or elements. The presence of an element X in the set S is denoted by X ∈ S, and if X is certainly not in S, it is denoted by X ∉ S. An empty set contains zero elements. A set Y is called a subset of a set X if and only if every element of Y is in X. Y being a subset of X is often denoted by Y ⊆ X, which reads "Y is contained in X."

Consider two sets, S1 and S2. The new set S1 ∪ S2 is called the union of S1 and S2, having the elements in either S1 or S2, or both. Another set S1 ∩ S2 is called the intersection of S1 and S2, having the common elements of S1 and S2. If the intersection of two sets is empty, they are said to be disjoint.

3.2 Group

A group is a set G on which a binary multiplication operation "·" is defined such that the following requirements are satisfied:

1. For any elements a and b in G, a·b is an element in G.
2. For any elements a, b, and c in G, the following associative law holds:

a·(b·c) = (a·b)·c


3. There is an element e in G such that for every a in G

a·e = e·a = a (identity)

4. For any element a in G, there is an element a⁻¹ in G such that

a·a⁻¹ = a⁻¹·a = e (inverse)

3.3 Field

If addition and multiplication operations are defined on a set of objects F, then F is said to be a field if and only if

1. F forms a commutative group under addition.
2. The nonzero elements of F form a commutative group under multiplication.
3. Addition and multiplication are distributive:

a·(b + c) = (a·b) + (a·c)

The elements F = {1, 2, 3, …, p − 1} form a commutative group of order (p − 1) under modulo-p multiplication if and only if p is a prime integer.

All elements of a field form an additive commutative group, whereas all the nonzero elements of a field form a multiplicative commutative group. It is very useful to construct finite fields. A Galois field that is particularly interesting to coding theory is a field of finite order.

A Galois field of order q is usually denoted by GF(q). The simplest of the Galois fields is GF(2). GF(2) can be represented by the two-element set {0, 1} under standard binary addition and multiplication. The modulo-2 addition and multiplication are shown in Table 3.1.

Galois fields of size p, p a prime, can be constructed by modulo-p addition and multiplication. If these two operations are allowed to distribute, then a field is formed. The integers {0, 1, 2, …, p − 1} form the field GF(p) under modulo-p addition and multiplication. The field GF(3) has the elements {0, 1, 2}. Finite fields GF(q) do not exist for all values of q. The value of q must be equal to p^m,

Table 3.1 Addition and multiplication for GF(2)

Modulo-2 addition:      Modulo-2 multiplication:
+ | 0 1                 · | 0 1
0 | 0 1                 0 | 0 0
1 | 1 0                 1 | 0 1

Table 3.2 Addition and multiplication for GF(7)

Modulo-7 addition:
+ | 0 1 2 3 4 5 6
0 | 0 1 2 3 4 5 6
1 | 1 2 3 4 5 6 0
2 | 2 3 4 5 6 0 1
3 | 3 4 5 6 0 1 2
4 | 4 5 6 0 1 2 3
5 | 5 6 0 1 2 3 4
6 | 6 0 1 2 3 4 5

Modulo-7 multiplication:
· | 0 1 2 3 4 5 6
0 | 0 0 0 0 0 0 0
1 | 0 1 2 3 4 5 6
2 | 0 2 4 6 1 3 5
3 | 0 3 6 2 5 1 4
4 | 0 4 1 5 2 6 3
5 | 0 5 3 1 6 4 2
6 | 0 6 5 4 3 2 1

where p is a prime positive integer and m is a positive integer. The finite fields of order p^m can be constructed as vector spaces over the prime-order field GF(p).

Example 3.1 Construct the addition and multiplication tables over GF(7).

Solution Here, p equals 7; therefore, the elements of the Galois field are (0, 1, 2, 3, 4, 5, 6). The addition and multiplication over GF(7) are modulo-7, as shown in Table 3.2.

3.4 Vector Spaces

Let V be a set of vector elements on which a binary addition operation is defined. Let F be a field of scalar elements, and let a scalar multiplication operation be defined between the elements of V and the scalar elements of F. V forms a vector space over F if the following properties are satisfied:

1. V is a commutative group under the addition operation on V.
2. For any element a ∈ F and any element v ∈ V, a·v ∈ V.
3. For any elements a, b ∈ F and any element v ∈ V, the following associative law is satisfied:

(a·b)·v = a·(b·v)    (3.1)

4. For any elements a, b ∈ F and any elements u, v ∈ V, the following distributive laws are satisfied:

a·(u + v) = a·u + a·v    (3.2)

(a + b)·v = a·v + b·v    (3.3)

5. If 1 is the unit element of F, then for any element v ∈ V, 1·v = v.


In the case of vector spaces over the scalar field GF(2), V is a collection of binary n-tuples such that if v1, v2 ∈ V, then v1 + v2 ∈ V, where + stands for the component-wise exclusive-or operation. If v1 = v2, then 0 ∈ V.

Theorem 3.1 Let v1, v2, …, vk be a basis of a vector space V of dimension k over a finite field F. Then there is a unique representation of every vector v in V as

v = a1v1 + a2v2 + ⋯ + akvk    (3.4)

3.5 Elementary Properties of Galois Fields

1. Let a be an element in GF(q). The order of a is the smallest positive integer n such that a^n = 1.
2. The order q of a Galois field GF(q) must be a power of a prime.
3. Every GF(q) has at least one element α of order (q − 1), which is called a primitive element; such an element exists in GF(q) with α^(q−1) = 1.
4. All nonzero elements in GF(q) are represented by the (q − 1) consecutive powers of a primitive element α.
5. Let a be a nonzero element in a Galois field GF(q) and n be the order of a; then n divides q − 1.

3.6 Galois Field Arithmetic

Finite field arithmetic is different from standard integer arithmetic. In finite field arithmetic, all operations are performed on a limited number of elements and result in an element within the same field.

3.6.1 Addition and Subtraction of Polynomials

In standard integer arithmetic, addition and subtraction of polynomials are performed by adding or subtracting the corresponding coefficients, whereas in a finite field, addition and subtraction are accomplished using the XOR operator, and the two operations are identical.

Example 3.2 Add the polynomials (x^6 + x^4 + x + 1) and (x^7 + x^6 + x^3 + x) in GF(2).

Table 3.3 Computation of polynomials in normal algebra and Galois field

p1                  p2           p1 + p2 (normal algebra)    p1 + p2 (GF)
x^3 + x^2 + x + 1   x^3 + x^2    2x^3 + 2x^2 + x + 1         x + 1
x^4 + x^3 + x^2     x^5 + x^2    x^5 + x^4 + x^3 + 2x^2      x^5 + x^4 + x^3
x^2 + 1             x^3 + 1      x^3 + x^2 + 2               x^3 + x^2

Solution

(x^6 + x^4 + x + 1) + (x^7 + x^6 + x^3 + x) = x^7 + x^4 + x^3 + 1.

The normal algebraic sum and the modulo-2 finite field sum of a few polynomials are tabulated in Table 3.3.

3.6.2 Multiplication of Polynomials

Multiplication of polynomials in a Galois field is the same as in integer arithmetic, but the addition performed after multiplication follows the Galois field rules.

Example 3.3 Multiply the polynomials (x^6 + x^4 + x + 1) and (x^7 + x^6 + x^3 + x).

Solution

(x^6 + x^4 + x + 1)(x^7 + x^6 + x^3 + x)
= x^13 + x^12 + x^9 + x^7 + x^11 + x^10 + x^7 + x^5 + x^8 + x^7 + x^4 + x^2 + x^7 + x^6 + x^3 + x
= x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x

(the four x^7 terms cancel, since 4 mod 2 = 0).

3.6.3 Multiplication of Polynomials Using MATLAB

The following MATLAB command computes the multiplication of polynomial p1 and polynomial p2 in GF(2):

p3 = gfconv(p1, p2)

The degree of the resulting GF(2) polynomial p3 equals the sum of the degrees of p1 and p2. For example, the following commands result in the multiplication of the polynomials 1 + x + x^3 and 1 + x + x^2 + x^4:

p1 = [1 1 0 1];       % 1 + x + x^3
p2 = [1 1 1 0 1];     % 1 + x + x^2 + x^4
p3 = gfconv(p1, p2);  % (1 + x + x^3)*(1 + x + x^2 + x^4)

The output p3 for the above commands is

p3 = [1 0 0 0 0 0 0 1]   % 1 + x^7

3.6.4 Division of Polynomials

Suppose that a(x) and b(x) ≠ 0 are polynomials over GF(2). There is a unique pair of polynomials over GF(2), the quotient q(x) and the remainder r(x), such that

a(x) = q(x)b(x) + r(x)    (3.5)

Example 3.4 Divide f1(x) = 1 + x^2 + x^3 + x^5 by f2(x) = 1 + x^3 + x^4.

Solution Performing the polynomial long division, it can easily be verified that

1 + x^2 + x^3 + x^5 = (1 + x^3 + x^4)(1 + x) + (x + x^2).

If the remainder r(x) is zero, a(x) is divisible by b(x), and b(x) is a factor of a(x).

Example 3.5 Check whether f1(x) = x^2 + x + 1 is a factor of f2(x) = x^5 + x^4 + 1.

Solution Dividing f2(x) by f1(x) gives x^5 + x^4 + 1 = (x^2 + x + 1)(x^3 + x + 1). The remainder is zero; hence, f2(x) is divisible by f1(x), and f1(x) is a factor of f2(x).

3.6.5 Division of Polynomials Using MATLAB

The following MATLAB command computes the quotient q and remainder r of the division of polynomial p2 by polynomial p1 in GF(2):

[q, r] = gfdeconv(p2, p1)

For example, the following commands divide the polynomial 1 + x^7 by the polynomial 1 + x + x^3:

p1 = [1 1 0 1];               % 1 + x + x^3
p2 = [1 0 0 0 0 0 0 1];       % 1 + x^7
[q, r] = gfdeconv(p2, p1);    % (1 + x^7)/(1 + x + x^3)

The outputs q and r for the above commands are

q = [1 1 1 0 1]   % 1 + x + x^2 + x^4
r = 0

3.7 Polynomials Over Galois Fields

A polynomial over GF(q) is of the following form

a0 + a1x + a2x^2 + ⋯ + an x^n    (3.6)

of degree n (with an ≠ 0) and with coefficients {ai} in the finite field GF(q).

3.7.1 Irreducible Polynomial

A polynomial p(x) of degree m is said to be irreducible over GF(q) if p(x) has no divisor polynomials over GF(q) of degree less than m but greater than zero.

Examples

1. x^3 + x^2 + 1 is irreducible in GF(2), as it has no factors of degree less than 3.
2. x^4 + x^2 + 1 is not irreducible in GF(2), since it is divisible by the polynomial x^2 + x + 1 with coefficients in GF(2) and with degree 2, less than 4.
3. x^4 + x^3 + x^2 + x + 1 is irreducible in GF(2), as it has no factors of degree less than 4.
4. x^5 + x^4 + 1 is not irreducible in GF(2), since it is divisible by polynomials of degree less than 5.

3.7.2 Primitive Polynomials

An irreducible polynomial p(x) over GF(2) of degree m is said to be primitive if the smallest positive integer n for which p(x) divides x^n − 1 is n = 2^m − 1. The roots α^j of an mth-degree primitive polynomial p(x) over GF(2) have order 2^m − 1. All primitive polynomials are irreducible polynomials, but not all irreducible polynomials are primitive.

Examples

1. x^2 + x + 1 is primitive. The smallest polynomial of the form x^n − 1 for which it is a divisor is x^3 − 1 (3 = 2^2 − 1).
2. x^3 + x^2 + 1 is primitive. The smallest polynomial of the form x^n − 1 for which it is a divisor is x^7 − 1 (7 = 2^3 − 1).
3. x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 is not primitive since it is not irreducible. It can be factorized as the product of the polynomials x^3 + x^2 + 1 and x^3 + x + 1.

3.7.3 Checking of Polynomials for Primitiveness Using


MATLAB

The following MATLAB command can be used to check whether the degree-m GF(2) polynomial p is primitive:

ck = gfprimck(p);

The output ck is as follows:

ck = −1   p is not an irreducible polynomial
ck = 0    p is irreducible but not a primitive polynomial
ck = 1    p is a primitive polynomial

For example, the following MATLAB commands determine whether the polynomial p(x) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 is primitive or not:

p = [1 1 1 1 1 1 1];   % 1 + x + x^2 + x^3 + x^4 + x^5 + x^6
ck = gfprimck(p);

The output ck for the above commands is −1, indicating that the polynomial is not irreducible and hence not primitive.

3.7.4 Generation of Primitive Polynomials Using MATLAB

Primitive polynomials of degree m can be generated using the following MATLAB command primpoly:

p = primpoly(m, 'all')

For example, the primitive polynomials generated using the above command for m = 3, 4, 5, and 6 are tabulated in Table 3.4.

Table 3.4 Primitive polynomials for m = 3, 4, 5, and 6

m    Primitive polynomials p(x)
3    x^3 + x + 1;  x^3 + x^2 + 1
4    x^4 + x + 1;  x^4 + x^3 + 1
5    x^5 + x^2 + 1;  x^5 + x^3 + 1;  x^5 + x^3 + x^2 + x + 1;  x^5 + x^4 + x^2 + x + 1;  x^5 + x^4 + x^3 + x + 1;  x^5 + x^4 + x^3 + x^2 + 1
6    x^6 + x + 1;  x^6 + x^4 + x^3 + x + 1;  x^6 + x^5 + 1;  x^6 + x^5 + x^2 + x + 1;  x^6 + x^5 + x^3 + x^2 + 1;  x^6 + x^5 + x^4 + x + 1

3.8 Construction of Galois Field GF(2m) from GF(2)


 
The 2^m elements of GF(2^m) can be written as {0, 1, α, α^2, α^3, …, α^(2^m − 2)}. All the nonzero elements of GF(2^m) are generated by the powers of the primitive element α, which satisfies the condition α^(2^m − 1) = 1.

The polynomial representation of the elements of GF(2^m) is given by the remainder of x^n upon division by the polynomial p(x), which is primitive in GF(2):

α^n = Remainder{x^n/p(x)}    (3.7)
Example 3.6 Construct GF(8) as a vector space over GF(2).


Solution Let us consider the construction of GF(8) based on the polynomial p(x) = x^3 + x + 1, which is primitive in GF(2). Let α be a root of p(x). This implies that α^3 + α + 1 = 0, or equivalently α^3 = α + 1. The distinct powers of α must give (2^3 − 1) distinct nonzero polynomial representations of degree 2 or less with coefficients from GF(2). The set {1, α, α^2} is used as a basis for the vector space representation of GF(8).

Since every field must contain the zero and one elements, we have

0 = 0
α^0 = 1

Since the remainders of x and x^2 upon division by the primitive polynomial p(x) = x^3 + x + 1 are themselves, the other two possible assignments are as follows:

α^1 = x
α^2 = x^2

The polynomial representation of x^3 is derived by polynomial division: the remainder of x^3 divided by p(x) is (x + 1). Hence, α^3 = α + 1.



For x^4, the remainder of x^4 divided by p(x) is (x^2 + x). Hence, α^4 = α^2 + α.

For x^5, the remainder is (x^2 + x + 1). Hence, α^5 = α^2 + α + 1. Similarly, x^6 and x^7 follow the same procedure to get their polynomial representations. All the above values are tabulated in Table 3.5.
Example 3.7 Construct GF(16) as a vector space over GF(2).
Solution Let us consider the construction of GF(16) based on the polynomial p(x) = x^4 + x + 1, which is primitive in GF(2). We know that

0 = 0
α^0 = 1
Table 3.5 Representation of elements of GF(8)

Zero and powers of α    Polynomial representation    Vector space over GF(2) (1 α α^2)
0                       0                            000
α^0                     1                            100
α^1                     α                            010
α^2                     α^2                          001
α^3                     1 + α                        110
α^4                     α + α^2                      011
α^5                     1 + α + α^2                  111
α^6                     1 + α^2                      101
α^7                     1                            100

Since the remainders of x, x^2, and x^3 upon division by the primitive polynomial p(x) = x^4 + x + 1 are themselves, the other three possible assignments are as follows:

α^1 = x
α^2 = x^2
α^3 = x^3

The polynomial representation of x^4 is derived by polynomial division: the remainder of x^4 divided by p(x) is (x + 1). Hence, α^4 = α + 1.

Similarly, the values from x^5 to x^15 follow the same procedure to get their polynomial representations. These values are tabulated in Table 3.6.

Table 3.6 Representation of elements of GF(16)

Zero and powers of α    Polynomials over GF(2)    Vector space over GF(2) (1 α α^2 α^3)
0                       0                          0000
α^0                     1                          1000
α^1                     α                          0100
α^2                     α^2                        0010
α^3                     α^3                        0001
α^4                     1 + α                      1100
α^5                     α + α^2                    0110
α^6                     α^2 + α^3                  0011
α^7                     1 + α + α^3                1101
α^8                     1 + α^2                    1010
α^9                     α + α^3                    0101
α^10                    1 + α + α^2                1110
α^11                    α + α^2 + α^3              0111
α^12                    1 + α + α^2 + α^3          1111
α^13                    1 + α^2 + α^3              1011
α^14                    1 + α^3                    1001
α^15                    1                          1000

Example 3.8 Construct GF(32) as a vector space over GF(2).

Solution Let us consider the construction of GF(32) based on the polynomial p(x) = x^5 + x^2 + 1, which is primitive in GF(2). We know that

0 = 0
α^0 = 1

Since the remainders of x, x^2, x^3, and x^4 upon division by the primitive polynomial p(x) = x^5 + x^2 + 1 are themselves, the other four possible assignments are as follows:

α^1 = x
α^2 = x^2
α^3 = x^3
α^4 = x^4

The polynomial representation of x^5 is derived by polynomial division: the remainder of x^5 divided by p(x) is (x^2 + 1). Hence, α^5 = α^2 + 1.

Similarly, the values from x^6 to x^31 follow the same procedure to get their polynomial representations. All the above values are tabulated in Table 3.7.
Example 3.9 Construct GF(64) as a vector space over GF(2).
Solution Let us consider the construction of GF(64) based on the polynomial p(x) = x^6 + x + 1, which is primitive in GF(2). We know that

0 = 0
α^0 = 1

Table 3.7 Representation of elements of GF(32)

Zero and powers of α    Polynomials over GF(2)        Vector space over GF(2) (1 α α^2 α^3 α^4)
0                       0                              00000
α^0                     1                              10000
α^1                     α                              01000
α^2                     α^2                            00100
α^3                     α^3                            00010
α^4                     α^4                            00001
α^5                     1 + α^2                        10100
α^6                     α + α^3                        01010
α^7                     α^2 + α^4                      00101
α^8                     1 + α^2 + α^3                  10110
α^9                     α + α^3 + α^4                  01011
α^10                    1 + α^4                        10001
α^11                    1 + α + α^2                    11100
α^12                    α + α^2 + α^3                  01110
α^13                    α^2 + α^3 + α^4                00111
α^14                    1 + α^2 + α^3 + α^4            10111
α^15                    1 + α + α^2 + α^3 + α^4        11111
α^16                    1 + α + α^3 + α^4              11011
α^17                    1 + α + α^4                    11001
α^18                    1 + α                          11000
α^19                    α + α^2                        01100
α^20                    α^2 + α^3                      00110
α^21                    α^3 + α^4                      00011
α^22                    1 + α^2 + α^4                  10101
α^23                    1 + α + α^2 + α^3              11110
α^24                    α + α^2 + α^3 + α^4            01111
α^25                    1 + α^3 + α^4                  10011
α^26                    1 + α + α^2 + α^4              11101
α^27                    1 + α + α^3                    11010
α^28                    α + α^2 + α^4                  01101
α^29                    1 + α^3                        10010
α^30                    α + α^4                        01001
α^31                    1                              10000

Since the remainders of x, x^2, x^3, x^4, and x^5 upon division by the primitive polynomial p(x) = x^6 + x + 1 are themselves, the other five possible assignments are as follows:

α^1 = x
α^2 = x^2
α^3 = x^3
α^4 = x^4
α^5 = x^5

The polynomial representation of x^6 is derived by polynomial division: the remainder of x^6 divided by p(x) is (x + 1). Hence, α^6 = α + 1.

Similarly, the values from x^7 to x^63 follow the same procedure to get their polynomial representations. All these values are tabulated in Table 3.8.

3.8.1 Construction of GF(2m) Using MATLAB

To construct GF(2^m), the following MATLAB function can be used:

field = gftuple([-1:2^m-2]', m, 2);

For example, the GF(8) field generated using the above command for m = 3 is as follows:

field =
     0     0     0
     1     0     0
     0     1     0
     0     0     1
     1     1     0
     0     1     1
     1     1     1
     1     0     1

Table 3.8 Representation of elements of GF(64)


Zero and powers of α    Polynomials over GF(2)    Vector space over GF(2) (1 α α^2 α^3 α^4 α^5)
0 0 0 0 0 0 0 0
α0 1 1 0 0 0 0 0
α1 α 0 1 0 0 0 0
α2 α2 0 0 1 0 0 0
α3 α3 0 0 0 1 0 0
α4 α4 0 0 0 0 1 0
α5 α5 0 0 0 0 0 1
α6 1+α 1 1 0 0 0 0
α7 α + α2 0 1 1 0 0 0
α8 α2 + α3 0 0 1 1 0 0
α9 α3 + α4 0 0 0 1 1 0
α10 α4 + α5 0 0 0 0 1 1
α11 1 + α + α5 1 1 0 0 0 1
α12 1 + α2 1 0 1 0 0 0
α13 α + α3 0 1 0 1 0 0
α14 α2 + α4 0 0 1 0 1 0
α15 α3 + α5 0 0 0 1 0 1
α16 1 + α + α4 1 1 0 0 1 0
α17 α + α2 + α5 0 1 1 0 0 1
α18 1 + α + α2 + α3 1 1 1 1 0 0
α19 α + α2 + α3 + α4 0 1 1 1 1 0
α20 α2 + α3 + α4 + α5 0 0 1 1 1 1
α21 1 + α + α3 + α4 + α5 1 1 0 1 1 1
α22 1 + α2 + α4 + α5 1 0 1 0 1 1
α23 1 + α3 + α5 1 0 0 1 0 1
α24 1 + α4 1 0 0 0 1 0
α25 α + α5 0 1 0 0 0 1
α26 1 + α + α2 1 1 1 0 0 0
α27 α + α2 + α3 0 1 1 1 0 0
α28 α2 + α3 + α4 0 0 1 1 1 0
α29 α3 + α4 + α5 0 0 0 1 1 1
α30 1 + α + α4 + α5 1 1 0 0 1 1
α31 1 + α2 + α5 1 0 1 0 0 1
α32 1 + α3 1 0 0 1 0 0
α33 α + α4 0 1 0 0 1 0
α34 α2 + α5 0 0 1 0 0 1
α35 1 + α + α3 1 1 0 1 0 0
α36 α + α2 + α4 0 1 1 0 1 0
α37 α2 + α3 + α5 0 0 1 1 0 1
α38 1 + α + α3 + α4 1 1 0 1 1 0
α39 α + α2 + α4 + α5 0 1 1 0 1 1
α40
1+α+α +α +α
2 3 5
1 1 1 1 0 1
α41 1 + α2 + α3 + α4 1 0 1 1 1 0
α42 α + α3 + α4 + α5 0 1 0 1 1 1
α43 1 + α + α2 + α4 + α5 1 1 1 0 1 1
α44 1 + α2 + α3 + α5 1 0 1 1 0 1
α45 1 + α3 + α4 1 0 0 1 1 0
α46 α + α4 + α5 0 1 0 0 1 1
α47 1 + α + α2 + α5 1 1 1 0 0 1
α48 1 + α2 + α3 1 0 1 1 0 0
α49 α + α3 + α4 0 1 0 1 1 0
α50 α2 + α4 + α5 0 0 1 0 1 1
α51 1 + α + α3 + α5 1 1 0 1 0 1
α52 1 + α2 + α4 1 0 1 0 1 0
α53 α + α3 + α5 0 1 0 1 0 1
α54 1 + α + α2 + α4 1 1 1 0 1 0
α55 α + α2 + α3 + α5 0 1 1 1 0 1
α56 1 + α + α2 + α3 + α4 1 1 1 1 1 0
α57 α + α2 + α3 + α4 + α5 0 1 1 1 1 1
α58 1 + α + α2 + α3 + α4 + α5 1 1 1 1 1 1
α59 1 + α2 + α3 + α4 + α5 1 0 1 1 1 1
α60 1 + α3 + α4 + α5 1 0 0 1 1 1
α61 1 + α4 + α5 1 0 0 0 1 1
α62 1 + α5 1 0 0 0 0 1
α63 1 1 0 0 0 0 0

3.9 Minimal Polynomials and Conjugacy Classes of GF(2m)

3.9.1 Minimal Polynomials

Definition 3.1 Suppose that α is an element in GF(2^m). The unique minimal polynomial of α with respect to GF(2) is the polynomial φ(x) of minimum degree over GF(2) such that φ(α) = 0.

3.9.2 Conjugates of GF Elements

Let α be an element in the Galois field GF(2^m). The conjugates of α with respect to the subfield GF(q) are the elements α, α^2, α^(2^2), α^(2^3), …

The conjugates of a with respect to GF(2) form a set called the conjugacy class of a with respect to GF(2).
Theorem 3.2 (Conjugacy Class) The conjugacy class of a ∈ GF(2^m) with respect to GF(2) contains all the elements of the form a^(2^i) for 0 ≤ i ≤ l − 1, where l is the smallest positive integer such that a^(2^l) = a.

3.9.3 Properties of Minimal Polynomial

Theorem 3.3 The minimal polynomial ϕ(x) of an element a of GF(2^m) is an irreducible polynomial.
Proof Suppose the minimal polynomial ϕ(x) is not irreducible. Then, ϕ(x) can be expressed as a product of two other polynomials

ϕ(x) = ϕ1(x)ϕ2(x)

As ϕ(a) = ϕ1(a)ϕ2(a) = 0, either ϕ1(a) = 0 or ϕ2(a) = 0. Either case contradicts the minimality of the degree of ϕ(x). ∎
Theorem 3.4 Let f(x) be a polynomial over GF(2) and ϕ(x) be the minimal polynomial of an element a in GF(2^m). If a is a root of f(x), then f(x) is divisible by ϕ(x).
Proof The division of f(x) by ϕ(x) gives

f(x) = ϕ(x)q(x) + r(x)

Since a is a root of f(x), f(a) = 0 and ϕ(a) = 0, and it follows that r(a) = 0. As the degree of r(x) is less than that of ϕ(x), r(a) = 0 only when r(x) = 0. Hence, f(x) = ϕ(x)q(x); therefore, f(x) is divisible by ϕ(x). ∎
Theorem 3.5 The nonzero elements of GF(2^m) form all the roots of x^(2^m − 1) + 1.
Proof Let a be a nonzero element in the field GF(2^m). Then, a^(2^m − 1) = 1, or a^(2^m − 1) + 1 = 0. This implies that a is a root of the polynomial x^(2^m − 1) + 1. Hence, every nonzero element of GF(2^m) is a root of x^(2^m − 1) + 1. Since the degree of x^(2^m − 1) + 1 is 2^m − 1, the 2^m − 1 nonzero elements of GF(2^m) form all the roots of x^(2^m − 1) + 1. ∎
Theorem 3.6 Let a be an element in the Galois field GF(2^m). Then, all its conjugates a, a^2, ..., a^(2^(l−1)) have the same minimal polynomial.
A direct consequence of Theorem 3.5 is that x^(2^m − 1) + 1 is equal to the product of the distinct minimal polynomials of the nonzero elements of GF(2^m).

Theorem 3.7 Suppose that ϕ(x) is the minimal polynomial of an element a of GF(2^m), and let l be the smallest positive integer such that a^(2^l) = a. Then, ϕ(x), of degree m or less, is given by

ϕ(x) = ∏_{i=0}^{l−1} (x − a^(2^i))     (3.8)

3.9.4 Construction of Minimal Polynomials

The stepwise procedure for the construction of the minimal polynomials is as follows:
Step 1: Generate the Galois field GF(2^m) based on the primitive polynomial corresponding to m.
Step 2: Find the groups of conjugate roots, i.e., the conjugacy classes.
Step 3: Construct the minimal polynomial of each conjugacy class by using Eq. (3.8).
Using the above procedure, the following examples illustrate the construction of the minimal polynomials for GF(8), GF(16), and GF(32) with respect to GF(2).
Example 3.10 Determine the minimal polynomials of the elements of GF(8) with
respect to GF(2).
Solution The eight elements in GF(8) are arranged in conjugacy classes and their
minimal polynomials computed as follows

Conjugacy class        Associated minimal polynomial
{0}                    ϕ(x) = x
{1}                    ϕ0(x) = x + 1
{α, α^2, α^4}          ϕ1(x) = (x − α)(x − α^2)(x − α^4) = x^3 + x + 1
{α^3, α^6, α^5}        ϕ3(x) = (x − α^3)(x − α^6)(x − α^5) = x^3 + x^2 + 1

From Theorem 3.5, it is known that the minimal polynomials of the nonzero elements in the field GF(8) provide the complete factorization of x^7 − 1. Hence, x^7 − 1 = (x + 1)(x^3 + x + 1)(x^3 + x^2 + 1).
Example 3.11 Determine the minimal polynomials of the elements of GF(16) with
respect to GF(2).

Solution The 16 elements in GF(2^4) are arranged in conjugacy classes, and their associated minimal polynomials are computed as follows:

Conjugacy class             Associated minimal polynomial
{0}                         ϕ(x) = x
{1}                         ϕ0(x) = x + 1
{α, α^2, α^4, α^8}          ϕ1(x) = (x − α)(x − α^2)(x − α^4)(x − α^8) = x^4 + x + 1
{α^3, α^6, α^12, α^9}       ϕ3(x) = (x − α^3)(x − α^6)(x − α^12)(x − α^9) = x^4 + x^3 + x^2 + x + 1
{α^5, α^10}                 ϕ5(x) = (x − α^5)(x − α^10) = x^2 + x + 1
{α^7, α^14, α^13, α^11}     ϕ7(x) = (x − α^7)(x − α^14)(x − α^13)(x − α^11) = x^4 + x^3 + 1

As a consequence of Theorem 3.5, the following factorization holds for GF(16):

x^15 − 1 = (x + 1)(x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)(x^2 + x + 1)(x^4 + x^3 + 1)

Example 3.12 Determine the minimal polynomials of the elements of GF(32) with
respect to GF(2).
Solution The 32 elements in GF(32) are arranged in conjugacy classes and their
minimal polynomials computed as follows:

Conjugacy class                   Associated minimal polynomial
{0}                               ϕ(x) = x
{1}                               ϕ0(x) = x + 1
{α, α^2, α^4, α^8, α^16}          ϕ1(x) = (x − α)(x − α^2)(x − α^4)(x − α^8)(x − α^16) = x^5 + x^2 + 1
{α^3, α^6, α^12, α^24, α^17}      ϕ3(x) = (x − α^3)(x − α^6)(x − α^12)(x − α^24)(x − α^17) = x^5 + x^4 + x^3 + x^2 + 1
{α^5, α^10, α^20, α^9, α^18}      ϕ5(x) = (x − α^5)(x − α^10)(x − α^20)(x − α^9)(x − α^18) = x^5 + x^4 + x^2 + x + 1
{α^7, α^14, α^28, α^25, α^19}     ϕ7(x) = (x − α^7)(x − α^14)(x − α^28)(x − α^25)(x − α^19) = x^5 + x^3 + x^2 + x + 1
{α^11, α^22, α^13, α^26, α^21}    ϕ11(x) = (x − α^11)(x − α^22)(x − α^13)(x − α^26)(x − α^21) = x^5 + x^4 + x^3 + x + 1
{α^15, α^30, α^29, α^27, α^23}    ϕ15(x) = (x − α^15)(x − α^30)(x − α^29)(x − α^27)(x − α^23) = x^5 + x^3 + 1

According to Theorem 3.5, the following factorization is valid for GF(32):

x^31 − 1 = (x + 1)(x^5 + x^2 + 1)(x^5 + x^4 + x^3 + x^2 + 1)(x^5 + x^4 + x^2 + x + 1)(x^5 + x^3 + x^2 + x + 1)(x^5 + x^4 + x^3 + x + 1)(x^5 + x^3 + 1)

3.9.5 Construction of Conjugacy Classes Using MATLAB

The MATLAB command

cst = cosets(m)

can be used to find the conjugacy classes of the nonzero elements of GF(2^m). For example, for m = 3, the conjugacy classes are generated using the above MATLAB command as follows:

c = cosets(3);
c{1}′
c{2}′
c{3}′

c{1}′ displays the conjugacy class {α^0}, which consists of the nonzero element 1 that represents α^0.
c{2}′ displays the conjugacy class {α, α^2, α^4}, which consists of the nonzero elements 2, 4, and 6 that represent α, α^2, and α^2 + α, respectively.
c{3}′ displays the conjugacy class {α^3, α^5, α^6}, which consists of the nonzero elements 3, 5, and 7 that represent α + 1, α^2 + 1, and α^2 + α + 1, respectively.

3.9.6 Construction of Minimal Polynomials Using MATLAB

The conjugacy classes of the elements of GF(2^m) and the associated minimal polynomials can be constructed using the MATLAB commands cosets and minpol. For example, for GF(2^4), the following MATLAB program constructs the minimal polynomial of the conjugacy class in which α^7 is an element.
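The program listing itself did not survive reproduction here; the following is a minimal sketch that produces output of the form shown below, assuming the MATLAB Communications Toolbox:

m = 4;                                      % work in GF(2^4)
alpha = gf(2, m);                           % primitive element, default p(x) = x^4 + x + 1
cl = [alpha^7; alpha^14; alpha^13; alpha^11];  % conjugacy class of alpha^7
pol = minpol(cl)                            % one row of GF(2) coefficients per element,
                                            % highest power first

Running the sketch displays the primitive polynomial in use and the minimal polynomial coefficients: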

Primitive polynomial(s) =
x^4 + x^1 + 1
pol = GF(2) array.
Array elements =
1 1 0 0 1
1 1 0 0 1
1 1 0 0 1
1 1 0 0 1

In the output, the array elements give the coefficients of the minimal polynomial in descending order for each of the four elements in the conjugacy class. Hence, the minimal polynomial of the conjugacy class in which α^7 is an element is given by
ϕ(x) = x^4 + x^3 + 1.

3.10 Problems

1. Construct modulo-5 addition and multiplication tables for GF(5).
2. Divide the polynomial f(x) = 1 + x + x^4 + x^5 + x^6 by the polynomial g(x) = 1 + x + x^3 in GF(2).
3. Find whether each of the following polynomials is irreducible in GF(2).
   (a) p(x) = x^2 + x + 1
   (b) p(x) = x^11 + x^2 + 1
   (c) p(x) = x^21 + x^2 + 1
4. Find whether each of the following polynomials is primitive in GF(2).
   (a) p(x) = x^4 + x^3 + x^2 + x + 1
   (b) p(x) = x^8 + x^4 + x^3 + x^2 + 1
   (c) p(x) = x^12 + x^6 + x^4 + x + 1
5. Construct GF(128) as a vector space over GF(2).
6. Arrange the 64 elements of GF(2^6) in conjugacy classes with their associated minimal polynomials, and find the minimal polynomial of the conjugacy class in which α^7 is an element.
Chapter 4
Linear Block Codes

This chapter deals with linear block codes covering their fundamental concepts,
generator and parity check matrices, error-correcting capabilities, encoding and
decoding, and performance analysis. The linear block codes discussed in this
chapter are Hamming codes, cyclic codes, binary BCH codes, and Reed–Solomon
(RS) codes.

4.1 Block Codes

The data stream is broken into blocks of k bits, and each k-bit block is encoded into a block of n bits with n > k, as illustrated in Fig. 4.1. The n-bit block output by the channel block encoder is called the code word. The code word is formed by adding (n − k) parity check bits derived from the k message bits.
Some important properties of block codes are defined below.
Block Code Rate
The block code rate (R) is defined as the ratio of the number of message bits k to the length of the code word n:

R = k/n     (4.1)

Code Word Weight
The weight of a code word or error pattern is the number of nonzero bits in the code word or error pattern. For example, the weight of the code word c = (1, 0, 0, 1, 1, 0, 1, 0) is 4.
Hamming Distance
The Hamming distance between two blocks v and w is the number of coordinates in which the two blocks differ:


Fig. 4.1 Coded data stream: k-bit data blocks enter the channel block encoder, which appends (n − k) parity check bits to form n-bit coded data blocks

d_Hamming(v, w) = d(v, w) = |{i | v_i ≠ w_i, i = 0, 1, ..., n − 1}|     (4.2)

Example 4.1 Consider the code words v = (00100) and w = (10010); then, the Hamming distance d_Hamming(v, w) = 3. The Hamming distance allows for a useful characterization of the error detection and error-correction capabilities of a block code as a function of the code's minimum distance.
The Minimum Distance of a Block Code
The minimum distance of a block code C is the minimum Hamming distance between all distinct pairs of code words in C.
A code with minimum distance dmin can thus detect all error patterns of weight less than or equal to (dmin − 1).
A code with minimum distance dmin can correct all error patterns of weight less than or equal to ⌊(dmin − 1)/2⌋.
Example 4.2 Consider the binary code C composed of the following four code words:

C = {(00100), (10010), (01001), (11111)}

Hamming distance of (00100) and (10010) = 3
Hamming distance of (10010) and (01001) = 4
Hamming distance of (00100) and (01001) = 3
Hamming distance of (10010) and (11111) = 3
Hamming distance of (00100) and (11111) = 4
Hamming distance of (01001) and (11111) = 3

Therefore, the minimum distance dmin = 3.
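A minimal MATLAB sketch (base MATLAB only) that computes all pairwise Hamming distances of Example 4.2 and the resulting minimum distance:

C = [0 0 1 0 0;
     1 0 0 1 0;
     0 1 0 0 1;
     1 1 1 1 1];
dmin = inf;
for i = 1:size(C,1)-1
    for j = i+1:size(C,1)
        d = sum(mod(C(i,:) + C(j,:), 2));   % Hamming distance of rows i and j
        dmin = min(dmin, d);
    end
end
dmin                                        % displays 3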

4.2 Linear Block Codes

A block code C consisting of n-tuples {(c0, c1, ..., c_{n−1})} of symbols from GF(2) is said to be a binary linear block code if and only if C forms a vector subspace over GF(2). The code is said to be a systematic linear code if each of the 2^k code words is represented as a linear combination of k linearly independent code words.
4.2.1 Linear Block Code Properties

The two important properties of linear block codes are

Property 1: The linear combination of any set of code words is a code word.
Property 2: The minimum distance of a linear block code is equal to the minimum weight of any nonzero code word.

The two well-known bounds on the minimum distance are
1. Singleton Bound
   The minimum distance of an (n, k) linear block code is bounded by

   dmin ≤ n − k + 1     (4.3a)

2. Hamming Bound
   An (n, k) block code can correct up to tec errors per code word, provided that n and k satisfy the Hamming bound

   2^(n−k) ≥ Σ_{i=0}^{tec} (n choose i)     (4.3b)

   This relation is an upper bound on dmin and is known as the Hamming bound, where

   (n choose i) = n!/((n − i)! i!),   tec = ⌊(dmin − 1)/2⌋
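As a quick numeric illustration of the Hamming bound (4.3b), the following base-MATLAB sketch checks it for a (7,4) single-error-correcting code; the bound is met with equality, which marks the Hamming code as a perfect code:

n = 7; k = 4; tec = 1;
lhs = sum(arrayfun(@(i) nchoosek(n, i), 0:tec));  % sum of C(n,i), i = 0..tec
rhs = 2^(n - k);
bound_holds = (lhs <= rhs)                        % 8 <= 8, bound met with equality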

4.2.2 Generator and Parity Check Matrices

Let {g0, g1, ..., g_{k−1}} be a basis of code words for the (n, k) linear block code C and m = (m0, m1, ..., m_{k−1}) the message to be encoded. It follows from Theorem 3.1 that the code word c = (c0, c1, ..., c_{n−1}) for the message is uniquely represented by the following linear combination of g0, g1, ..., g_{k−1}:

c = m0 g0 + ··· + m_{k−1} g_{k−1}     (4.4)

for every code word c ∈ C. Since every linear combination of the basis elements must also be a code word, there is a one-to-one mapping between the set of k-bit blocks (m0, m1, ..., m_{k−1}) over GF(2) and the code words in C. A matrix G is constructed by taking the vectors in the basis as its rows:

G = [ g0    ]   [ g0,0       g0,1       ...  g0,n−1      ]
    [ g1    ] = [ g1,0       g1,1       ...  g1,n−1      ]     (4.5)
    [ ...   ]   [ ...        ...             ...         ]
    [ gk−1  ]   [ gk−1,0     gk−1,1     ...  gk−1,n−1    ]

This matrix is a generator matrix for the code C. It can be used to directly encode k-bit blocks in the following manner:

mG = (m0, m1, ..., m_{k−1}) [g0; g1; ...; g_{k−1}] = m0 g0 + m1 g1 + ··· + m_{k−1} g_{k−1} = c

The dual space of a linear block code C is the dual code of C. A basis {h0, h1, ..., h_{n−k−1}} can be found for the dual code of C, and the following parity check matrix can be constructed:

H = [ h0      ]   [ h0,0        h0,1        ...  h0,n−1      ]
    [ h1      ] = [ h1,0        h1,1        ...  h1,n−1      ]     (4.6)
    [ ...     ]   [ ...         ...              ...         ]
    [ hn−k−1  ]   [ hn−k−1,0    hn−k−1,1    ...  hn−k−1,n−1  ]

In a systematic linear block code, the last k bits of the code word are the message bits, that is,

c_i = m_{i−(n−k)},   i = n − k, ..., n − 1     (4.7)

The first n − k bits in the code word are check bits generated from the k message bits according to

c0 = p0,0 m0 + p1,0 m1 + ··· + pk−1,0 mk−1
c1 = p0,1 m0 + p1,1 m1 + ··· + pk−1,1 mk−1
...
cn−k−1 = p0,n−k−1 m0 + p1,n−k−1 m1 + ··· + pk−1,n−k−1 mk−1

The above equations can be written in matrix form as

[c0, c1, ..., cn−1] = [m0, m1, ..., mk−1] [ p0,0    p0,1    ...  p0,n−k−1    1 0 0 0 ... 0 ]
                                          [ p1,0    p1,1    ...  p1,n−k−1    0 1 0 0 ... 0 ]
                                          [ ...                              ...           ]
                                          [ pk−1,0  pk−1,1  ...  pk−1,n−k−1  0 0 0 0 ... 1 ]_{k×n}     (4.8)

or

c = mG     (4.9)

where G is the matrix on the right-hand side of Eq. (4.8). The k × n matrix G is called the generator matrix of the code, and it has the form

G = [P | I_k]_{k×n}     (4.10)

The matrix I_k is the identity matrix of order k, and P is an arbitrary k × (n − k) matrix. When P is specified, it defines the (n, k) block code completely. The parity check matrix H corresponding to the above generator matrix G can be obtained as

H = [ 1 0 0 0 ... 0   p0,0       p1,0       ...  pk−1,0      ]
    [ 0 1 0 0 ... 0   p0,1       p1,1       ...  pk−1,1      ]
    [       ...       ...                        ...         ]     (4.11)
    [ 0 0 0 0 ... 1   p0,n−k−1   p1,n−k−1   ...  pk−1,n−k−1  ]

H = [I_{n−k} | P^T]     (4.12)

The Parity Check Theorem

The parity check theorem states that "for an (n, k) linear block code C with (n − k) × n parity check matrix H, a code word c ∈ C is a valid code word if and only if cH^T = 0."

Example 4.3 Consider the following generator matrix of a (7,4) block code. Find the code vector for the message vector m = (1110), and check the validity of the code vector generated.

G = [1 1 0 | 1 0 0 0]
    [0 1 1 | 0 1 0 0]
    [1 1 1 | 0 0 1 0]
    [1 0 1 | 0 0 0 1]

Solution The code vector for the message block m = (1110) is given by

c = mG = (1 1 1 0) G = (0 1 0 1 1 1 0)

The parity check matrix is

H = [1 0 0 | 1 0 1 1]
    [0 1 0 | 1 1 1 0]
    [0 0 1 | 0 1 1 1]

and

cH^T = [0 1 0 1 1 1 0] H^T = [0 0 0]

Hence, the generated code vector is valid.
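The same check can be run numerically; a base-MATLAB sketch of Example 4.3 using mod-2 arithmetic:

G = [1 1 0 1 0 0 0;
     0 1 1 0 1 0 0;
     1 1 1 0 0 1 0;
     1 0 1 0 0 0 1];
H = [1 0 0 1 0 1 1;
     0 1 0 1 1 1 0;
     0 0 1 0 1 1 1];
m = [1 1 1 0];
c = mod(m * G, 2)          % code vector (0 1 0 1 1 1 0)
s = mod(c * H', 2)         % syndrome (0 0 0): c is a valid code word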

4.2.3 Weight Distribution of Linear Block Codes

An (n, k) code contains 2^k code words with Hamming weights between 0 and n. For 0 ≤ j ≤ n, let W_j be the number of code words in C with Hamming weight j. The numbers W0, W1, ..., W_n form the weight distribution of C, so that W0 + W1 + W2 + ··· + W_n = 2^k. The weight distribution can be written as the polynomial W(x) = W0 + W1 x + W2 x^2 + ··· + W_n x^n, which is called the weight enumerator. The weight distribution of a linear block code is related to the parity check matrix H by the following theorem:
"The minimum weight (or minimum distance) of an (n, k) linear block code with a parity check matrix H is equal to the minimum number of nonzero columns in H whose vector sum is a zero vector."

4.2.4 Hamming Codes

A Hamming code is a linear block code capable of correcting single errors, having a minimum distance dmin = 3. It is very easy to construct Hamming codes. The parity check matrix H must be chosen so that no row in H^T is zero, the first (n − k) rows of H^T form an identity matrix, and all the rows are distinct.
We can select up to 2^(n−k) − 1 distinct nonzero rows for H^T. Since the matrix H^T has n rows, for all of them to be distinct, the following inequality should be satisfied:

2^(n−k) − 1 ≥ n     (4.13)

implying that

(n − k) ≥ log2(n + 1)
n ≥ k + log2(n + 1)     (4.14)

Hence, the minimum size n for the code words can be determined from Eq. (4.14).
Example 4.4 Design a Hamming code with a message block size of eleven bits.
Solution It follows from Eq. (4.14) that

n ≥ 11 + log2(n + 1)

The smallest n that satisfies the above inequality is 15; hence, we need a (15,11) block code. Thus, the transpose of the parity check matrix, H^T, will be a 15 × 4 matrix. The first four rows of H^T form the 4 × 4 identity matrix. The last eleven rows are arbitrarily chosen, with the restrictions that no row is zero and all the rows are distinct.

H = [1 0 0 0 0 0 0 0 1 1 1 1 1 1 1]
    [0 1 0 0 1 1 1 0 0 0 0 1 1 1 1]
    [0 0 1 0 0 1 1 1 0 1 1 0 0 1 1]
    [0 0 0 1 1 0 1 1 1 0 1 0 1 0 1]

H^T = [1 0 0 0]
      [0 1 0 0]
      [0 0 1 0]
      [0 0 0 1]
      [0 1 0 1]   = [ I_{n−k} ]
      [0 1 1 0]     [   P^T   ]
      [0 1 1 1]
      [0 0 1 1]
      [1 0 0 1]
      [1 0 1 0]
      [1 0 1 1]
      [1 1 0 0]
      [1 1 0 1]
      [1 1 1 0]
      [1 1 1 1]

Then, the generator matrix G = [P | I_11] is

G = [0 1 0 1 1 0 0 0 0 0 0 0 0 0 0]
    [0 1 1 0 0 1 0 0 0 0 0 0 0 0 0]
    [0 1 1 1 0 0 1 0 0 0 0 0 0 0 0]
    [0 0 1 1 0 0 0 1 0 0 0 0 0 0 0]
    [1 0 0 1 0 0 0 0 1 0 0 0 0 0 0]
    [1 0 1 0 0 0 0 0 0 1 0 0 0 0 0]
    [1 0 1 1 0 0 0 0 0 0 1 0 0 0 0]
    [1 1 0 0 0 0 0 0 0 0 0 1 0 0 0]
    [1 1 0 1 0 0 0 0 0 0 0 0 1 0 0]
    [1 1 1 0 0 0 0 0 0 0 0 0 0 1 0]
    [1 1 1 1 0 0 0 0 0 0 0 0 0 0 1]

Example 4.5 Construct parity check and generator matrices for a (7,4) Hamming code.
Solution The parity check matrix (H) and generator matrix (G) for a (7,4) Hamming code are

H = [1 0 0 1 0 1 1]
    [0 1 0 1 1 1 0]
    [0 0 1 0 1 1 1]

G = [1 1 0 1 0 0 0]
    [0 1 1 0 1 0 0]
    [1 1 1 0 0 1 0]
    [1 0 1 0 0 0 1]

4.2.5 Syndrome Table Decoding

Consider a valid code word c transmitted over the channel, and let e be an error pattern introduced by the channel during transmission. Then, the received vector r can be written as

r = c + e     (4.15a)

Multiplying r by the transpose of the parity check matrix gives the syndrome S, which can be expressed as

S = rH^T = (c + e)H^T = cH^T + eH^T = 0 + eH^T = eH^T     (4.15b)

Thus, the syndrome vector is independent of the transmitted code word c and is only a function of the error pattern e. Decoding is performed by computing the syndrome of a received vector, looking up the corresponding error pattern, and subtracting the error pattern from the received word.
Example 4.6 Construct a syndrome decoding table for a (7,4) Hamming code.
Solution For a (7,4) Hamming code, there are 2^(7−4) correctable error patterns e, as follows:

0 0 0 0 0 0 0
1 0 0 0 0 0 0
0 1 0 0 0 0 0
0 0 1 0 0 0 0
0 0 0 1 0 0 0
0 0 0 0 1 0 0
0 0 0 0 0 1 0
0 0 0 0 0 0 1

The syndrome for the (7,4) Hamming code is computed using the parity check matrix H given in the solution of Example 4.5 as

s = e · H^T

Thus, the syndrome decoding table for a (7,4) Hamming code is as follows (Table 4.1).

Table 4.1 Syndrome decoding table for a (7,4) Hamming code

Error pattern      Syndrome
0 0 0 0 0 0 0      0 0 0
1 0 0 0 0 0 0      1 0 0
0 1 0 0 0 0 0      0 1 0
0 0 1 0 0 0 0      0 0 1
0 0 0 1 0 0 0      1 1 0
0 0 0 0 1 0 0      0 1 1
0 0 0 0 0 1 0      1 1 1
0 0 0 0 0 0 1      1 0 1
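Such a table can also be generated programmatically; a sketch assuming the MATLAB Communications Toolbox function syndtable, which builds one correctable error pattern per syndrome value:

H = [1 0 0 1 0 1 1;
     0 1 0 1 1 1 0;
     0 0 1 0 1 1 1];
t = syndtable(H);   % row i+1 holds the error pattern whose syndrome
                    % has decimal value i
disp(t)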

4.2.5.1 Hamming Codes Decoding

The syndrome table is used to decode Hamming codes; it gives the syndrome value based on its simple relationship with the parity check matrix. The single-error-correcting Hamming codes are decoded by using the syndrome value as follows. Consider a code word c corrupted by an error pattern e with a single one in the jth coordinate position, resulting in a received vector r. Let {h0, h1, ..., h_{n−1}} be the set of columns of the parity check matrix H. When the syndrome is computed, we obtain the transpose of the jth column of H:

s = rH^T = eH^T = (0, ..., 0, 1, 0, ..., 0) [h0^T; h1^T; ...; h_{n−1}^T] = h_j^T     (4.16)

The above process can be implemented using the following algorithm:
1. Compute the syndrome s for the received word. If s = 0, the received code word is correct.
2. Find the position j of the column of H that is the transpose of the syndrome.
3. Complement the jth bit in the received code word to obtain the corrected code word.

Example 4.7 Decode the received vector r = (010000000000000) using the (15,11) parity check matrix.
Solution

H = [1 0 0 0 0 0 0 0 1 1 1 1 1 1 1]
    [0 1 0 0 1 1 1 0 0 0 0 1 1 1 1]
    [0 0 1 0 0 1 1 1 0 1 1 0 0 1 1]
    [0 0 0 1 1 0 1 1 1 0 1 0 1 0 1]

The received vector is r = (010000000000000), and the corresponding syndrome s = r · H^T is

s = (0100)

The syndrome is the transpose of the column of H in position j = 1 (with the columns h0, h1, ..., h14 indexed from 0). Inverting the coordinate of r in position 1, the following code word is obtained:

c = (000000000000000)

Example 4.8 Decode the received vector r = (001100011100000) using the (15,11) parity check matrix.
Solution The received vector is r = (001100011100000). The corresponding syndrome s = r · H^T is s = (0011). The syndrome is the transpose of the column of H in position j = 7 (indexed from 0). Inverting the coordinate of r in position 7, the following code word is obtained:

c = (001100001100000)
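The decoding steps of Examples 4.7 and 4.8 can be sketched in base MATLAB as follows (MATLAB indexes from 1, so column j+1 corresponds to position j above):

H = [1 0 0 0 0 0 0 0 1 1 1 1 1 1 1;
     0 1 0 0 1 1 1 0 0 0 0 1 1 1 1;
     0 0 1 0 0 1 1 1 0 1 1 0 0 1 1;
     0 0 0 1 1 0 1 1 1 0 1 0 1 0 1];
r = [0 0 1 1 0 0 0 1 1 1 0 0 0 0 0];   % received word of Example 4.8
s = mod(r * H', 2);                     % syndrome (0 0 1 1)
[~, j] = ismember(s, H', 'rows');       % matching column of H (j = 8 here)
if any(s), r(j) = 1 - r(j); end         % invert the offending coordinate
c = r                                   % corrected code word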

4.3 Cyclic Codes

An (n, k) linear block code C is said to be a cyclic code if for every code word c = (c0, c1, ..., c_{n−2}, c_{n−1}) ∈ C, the word c' = (c_{n−1}, c0, c1, ..., c_{n−2}), obtained by shifting c cyclically one place to the right, is also a code word in C.

4.3.1 The Basic Properties of Cyclic Codes

Property 1: In an (n, k) cyclic code, there exists a unique polynomial, called the generator polynomial g(x), of minimal degree (n − k), of the following form:

g(x) = 1 + g1 x + g2 x^2 + ··· + g_{n−k−1} x^{n−k−1} + x^{n−k}     (4.17)

Property 2: Every code polynomial in an (n, k) cyclic code is a multiple of g(x). Thus, it can be expressed as c(x) = m(x)g(x), where m(x) is a polynomial over GF(2) of degree k − 1 or less.
Property 3: The generator polynomial g(x) of an (n, k) cyclic code over GF(2) divides x^n + 1.
Property 4: The generator polynomial g(x) and the parity check polynomial h(x) are factors of the polynomial 1 + x^n.

In modulo-2 arithmetic, 1 + x^n has the same value as 1 − x^n.


Example 4.9 Let C1 be the binary cyclic code of length 15 generated by g(x) = x^5 + x^4 + x^2 + 1. Compute the code polynomial in C1 and the associated code word for the message polynomial m(x) = x^9 + x^4 + x^2 + 1 using the polynomial multiplication encoding technique.
Solution Here

m(x) = x^9 + x^4 + x^2 + 1,   g(x) = x^5 + x^4 + x^2 + 1

The code polynomial is

c(x) = m(x)g(x) = x^14 + x^13 + x^11 + x^8 + x^7 + x^5 + x^4 + 1

Code word = (100011011001011).
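A base-MATLAB sketch of this multiplication encoding, with coefficient vectors in ascending order of powers:

g = [1 0 1 0 1 1];                 % g(x) = 1 + x^2 + x^4 + x^5
m = [1 0 1 0 1 0 0 0 0 1];         % m(x) = 1 + x^2 + x^4 + x^9
c = mod(conv(m, g), 2)             % code word (1 0 0 0 1 1 0 1 1 0 0 1 0 1 1)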


Example 4.10 Let C1 be the binary cyclic code of length 15 generated by g(x) = x^5 + x^4 + x^2 + 1. Determine the dimensions of C1, and compute the number of code words in C1.
Solution Since the degree of the generator polynomial is 5, C1 is a (15,10) code with k = 15 − 5 = 10 and contains 2^10 = 1024 code words.
Example 4.11 Let C1 be the binary cyclic code of length 15 generated by g(x) = x^5 + x^4 + x^2 + 1. Compute the parity check polynomial for C1, and show that g(x) is a valid generator polynomial.
Solution g(x) = x^5 + x^4 + x^2 + 1. The parity check polynomial for C1 is

h(x) = (x^15 + 1)/g(x) = x^10 + x^9 + x^8 + x^6 + x^5 + x^2 + 1

g(x) is a valid generator polynomial since it divides x^15 + 1, having the minimal polynomials x^4 + x + 1 and x + 1 as factors, i.e., g(x) = (x^4 + x + 1)(x + 1).

4.3.2 Encoding Algorithm for an (n, k) Cyclic Code

In an (n, k) cyclic code C with generator polynomial g(x), let m = (m0, m1, ..., m_{k−1}) be the message block. By multiplying the message polynomial m(x) by x^{n−k}, we obtain a polynomial x^{n−k} m(x) = m0 x^{n−k} + m1 x^{n−k+1} + ··· + m_{k−1} x^{n−1} of degree n − 1 or less. Now, dividing x^{n−k} m(x) by g(x) yields

x^{n−k} m(x) = q(x)g(x) + p(x)     (4.18)

where q(x) and p(x) are the quotient and remainder, respectively. Equation (4.18) can be rearranged as

p(x) + x^{n−k} m(x) = q(x)g(x)     (4.19)

Equation (4.19) shows that p(x) + x^{n−k} m(x) is divisible by g(x). Hence, it must be a valid code polynomial c(x) = p(x) + x^{n−k} m(x) of the (n, k) cyclic code C with generator polynomial g(x). The n-tuple representation of the code polynomial c(x) is

c = (p0, p1, ..., p_{n−k−1}, m0, m1, ..., m_{k−1})     (4.20)

The systematic encoding algorithm is summarized as
Step 1: Multiply the message polynomial m(x) by x^{n−k}.
Step 2: Divide the result of Step 1 by the generator polynomial g(x). Let d(x) be the remainder.
Step 3: Set c(x) = x^{n−k} m(x) + d(x) (in modulo-2 arithmetic, addition and subtraction coincide).

Example 4.12 Let C1 be the binary cyclic code of length 15 generated by g(x) = x^5 + x^4 + x^2 + 1. Compute the code polynomial in C1 and the associated code word for the message polynomial m(x) = x^8 + x^7 + x^6 + x^5 + x^4 using the systematic encoding technique. Verify that the message has been systematically encoded.
Solution

g(x) = x^5 + x^4 + x^2 + 1,   m(x) = x^8 + x^7 + x^6 + x^5 + x^4

Step 1: x^5 m(x) = x^5(x^8 + x^7 + x^6 + x^5 + x^4) = x^13 + x^12 + x^11 + x^10 + x^9

Step 2: Dividing x^13 + x^12 + x^11 + x^10 + x^9 by g(x) = x^5 + x^4 + x^2 + 1 gives the quotient x^8 + x^6 + x^5 + x^2 + 1:

x^13 + x^12 + x^11 + x^10 + x^9
x^13 + x^12 + x^10 + x^8              (x^8 · g(x))
-----------------------------------
x^11 + x^9 + x^8
x^11 + x^10 + x^8 + x^6               (x^6 · g(x))
-----------------------------------
x^10 + x^9 + x^6
x^10 + x^9 + x^7 + x^5                (x^5 · g(x))
-----------------------------------
x^7 + x^6 + x^5
x^7 + x^6 + x^4 + x^2                 (x^2 · g(x))
-----------------------------------
x^5 + x^4 + x^2
x^5 + x^4 + x^2 + 1                   (1 · g(x))
-----------------------------------
1 = d(x)

Step 3: c_m(x) = x^13 + x^12 + x^11 + x^10 + x^9 + 1 ↔ c_m = (100000000111110). The message bits (0000111110) appear unchanged in the last ten positions, confirming that the encoding is systematic.
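A sketch of the same systematic encoding using the Communications Toolbox polynomial division gfdeconv (ascending coefficient order; as a 15-tuple, a trailing zero is appended for the x^14 coefficient):

g = [1 0 1 0 1 1];                       % g(x) = 1 + x^2 + x^4 + x^5
m = [0 0 0 0 1 1 1 1 1];                 % m(x) = x^4 + x^5 + x^6 + x^7 + x^8
xm = [zeros(1,5) m];                     % x^5 m(x)
[~, d] = gfdeconv(xm, g);                % remainder d(x) = 1
c = xm; c(1:numel(d)) = mod(c(1:numel(d)) + d, 2)   % c(x) = x^5 m(x) + d(x)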

Example 4.13 Construct parity check and generator matrices for the binary cyclic code of length 15 generated by g(x) = x^5 + x^4 + x^2 + 1.
Solution The systematic generator matrix is obtained by selecting as rows the code words associated with the message blocks (1000000000), (0100000000), (0010000000), (0001000000), (0000100000), (0000010000), (0000001000), (0000000100), (0000000010), and (0000000001).

m(x)    Code polynomial c(x)                Code word
1       1 + x^2 + x^4 + x^5                 (101011000000000)
x       1 + x + x^2 + x^3 + x^4 + x^6       (111110100000000)
x^2     1 + x + x^3 + x^7                   (110100010000000)
x^3     x + x^2 + x^4 + x^8                 (011010001000000)
x^4     1 + x^3 + x^4 + x^9                 (100110000100000)
x^5     1 + x + x^2 + x^10                  (111000000010000)
x^6     x + x^2 + x^3 + x^11                (011100000001000)
x^7     x^2 + x^3 + x^4 + x^12              (001110000000100)
x^8     1 + x^2 + x^3 + x^13                (101100000000010)
x^9     x + x^3 + x^4 + x^14                (010110000000001)

The generator matrix (G) and parity check matrix (H) for the cyclic code are

G = [1 0 1 0 1 1 0 0 0 0 0 0 0 0 0]
    [1 1 1 1 1 0 1 0 0 0 0 0 0 0 0]
    [1 1 0 1 0 0 0 1 0 0 0 0 0 0 0]
    [0 1 1 0 1 0 0 0 1 0 0 0 0 0 0]
    [1 0 0 1 1 0 0 0 0 1 0 0 0 0 0]
    [1 1 1 0 0 0 0 0 0 0 1 0 0 0 0]
    [0 1 1 1 0 0 0 0 0 0 0 1 0 0 0]
    [0 0 1 1 1 0 0 0 0 0 0 0 1 0 0]
    [1 0 1 1 0 0 0 0 0 0 0 0 0 1 0]
    [0 1 0 1 1 0 0 0 0 0 0 0 0 0 1]

The corresponding parity check matrix is

H = [1 0 0 0 0 1 1 1 0 1 1 0 0 1 0]
    [0 1 0 0 0 0 1 1 1 0 1 1 0 0 1]
    [0 0 1 0 0 1 1 0 1 0 1 1 1 1 0]
    [0 0 0 1 0 0 1 1 0 1 0 1 1 1 1]
    [0 0 0 0 1 1 1 0 1 1 0 0 1 0 1]

4.3.3 Encoder for Cyclic Codes Using Shift Registers

The systematic encoder for cyclic codes is shown in Fig. 4.2. The rectangular boxes represent flip-flops, each of which resides in either the 0 or the 1 state. The encoder operates as follows:
1. The switches are placed in position 1. The k message bits are sent to the modulator and placed at the end of the systematic code word. As soon as the kth message bit is fed into the shift register, the flip-flops of the shift register contain the (n − k) parity bits.
2. The switches are moved to position 2 to break the feedback connection.
3. The parity bits in the shift register are shifted out into the transmitter to form the parity bits of the systematic code word.

Example 4.14 Construct the shift register encoder for the cyclic code of length 7 generated by g(x) = x^4 + x^3 + x^2 + 1, and obtain the code word for the message m = (010).
Solution The shift register for encoding the (7,3) cyclic code with generator polynomial g(x) = x^4 + x^3 + x^2 + 1 is shown in Fig. 4.3. The given message bits are 010. The contents of the shift register are shown in Table 4.2. The four parity check bits are 1110. Therefore, the code word output is 1110010.

Fig. 4.2 Encoding circuit for an (n, k) cyclic code (an (n − k)-stage feedback shift register with taps g1, ..., g_{n−k−1}; switch position 1 feeds the message block in, and position 2 shifts the parity bits out)

Fig. 4.3 Encoder for the (7,3) cyclic code generated by g(x) = x^4 + x^3 + x^2 + 1

Table 4.2 Contents of the shift register in the encoder of Fig. 4.3 for the message sequence (010)

Shift    Input    Register contents
0        –        0 0 0 0
1        0        0 0 0 0
2        1        1 0 1 1
3        0        1 1 1 0

4.3.4 Syndrome Computation for Cyclic Codes Using Shift Registers

Suppose the code word (c0, c1, ..., c_{n−1}) is transmitted over a noisy channel, resulting in the received word (r0, r1, ..., r_{n−1}). Let the received word be represented by a polynomial of degree n − 1 or less as

r(x) = r0 + r1 x + ··· + r_{n−1} x^{n−1}     (4.21)

Dividing r(x) by g(x) results in

r(x) = q(x)g(x) + s(x)     (4.22)

where q(x) is the quotient and s(x) is the remainder, known as the syndrome. The syndrome s(x) is a polynomial of degree n − k − 1 or less, and its coefficients make up the (n − k)-by-1 syndrome s. An error in the received word is detected only when the syndrome polynomial s(x) is nonzero.
Syndrome Calculator
The syndrome calculator shown in Fig. 4.4 is similar to the encoder shown in Fig. 4.2. The only difference is that the received bits are fed from the left into the (n − k) stages of the feedback shift register. After the last received bit has been shifted in, the contents of the shift register form the desired syndrome s. If the syndrome is zero, there are no detectable transmission errors in the received word; otherwise, the received word contains transmission errors. By knowing the value of the syndrome, we can determine the corresponding error pattern and make the appropriate correction.
Example 4.15 Consider the (7,4) Hamming code with generator polynomial g(x) = x^3 + x + 1 and the transmitted code word 1100101. Show that the third bit of the received word 1110101 is in error (Table 4.3).

Fig. 4.4 Syndrome calculator

Fig. 4.5 Syndrome calculator of Example 4.15 (a three-stage feedback shift register with modulo-2 adders implementing division by g(x) = x^3 + x + 1)

Table 4.3 Contents of the shift register in the syndrome calculator of Fig. 4.5

Shift    Input bit    Contents of shift register
0        –            0 0 0
1        1            1 0 0
2        0            0 1 0
3        1            1 0 1
4        0            1 0 0
5        1            1 1 0
6        1            1 1 1
7        1            0 0 1

Solution Given g(x) = x^3 + x + 1 and the transmitted code word 1100101, flipping the third bit gives the received word 1110101.
At the end of the seventh shift, the contents of the shift register give the syndrome 001. The nonzero value of the syndrome indicates an error, and the error pattern for the syndrome 001 is 0010000 from Table 4.1. This shows that the third bit of the received word is in error.
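The syndrome can be cross-checked by a direct polynomial division; a sketch using the Communications Toolbox gfdeconv with coefficient vectors in ascending order:

g = [1 1 0 1];                     % g(x) = 1 + x + x^3
r = [1 1 1 0 1 0 1];               % received word, r(x) = 1 + x + x^2 + x^4 + x^6
[~, s] = gfdeconv(r, g)            % syndrome s(x) = x^2, i.e. s = (0 0 1)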

4.3.5 Cyclic Redundancy Check Codes

Cyclic redundancy check (CRC) codes are cyclic codes used for error detection. They are implemented from cyclic codes, hence the name, even though the shortened codes used in practice are generally not cyclic. The three CRC codes given in Table 4.4 have become international standards.

Table 4.4 International standard CRC codes

CRC-12
g(x) = x^12 + x^11 + x^3 + x^2 + x + 1 = (x^11 + x^2 + 1)(x + 1)
Code length: 2047; number of parity bits: 12; dmin = 4
Error detection capability: all one-bit errors, all two- and three-bit errors of length up to 2047, and all error patterns with an odd number of errors, since the generator polynomial g(x) has an even number of nonzero coefficients
Burst error detection capability: all burst errors up to length 12

CRC-16
g(x) = x^16 + x^15 + x^2 + 1 = (x^15 + x + 1)(x + 1)
Code length: 32767; number of parity bits: 16; dmin = 4
Error detection capability: all one-bit errors, all two- and three-bit errors of length up to 32767, and all error patterns with an odd number of errors, since g(x) has an even number of nonzero coefficients
Burst error detection capability: all burst errors up to length 16

CRC-CCITT
g(x) = x^16 + x^12 + x^5 + 1 = (x^15 + x^14 + x^13 + x^12 + x^4 + x^3 + x^2 + x + 1)(x + 1)
Code length: 32767; number of parity bits: 16; dmin = 4
Error detection capability: all one-bit errors, all two- and three-bit errors of length up to 32767, and all error patterns with an odd number of errors, since g(x) has an even number of nonzero coefficients
Burst error detection capability: all burst errors up to length 16

4.4 BCH Codes

BCH codes are a subclass of cyclic codes. They were introduced independently by Bose, Ray-Chaudhuri, and Hocquenghem. For any m ≥ 3 and tec < 2^(m−1), there exists a binary BCH code of length n = 2^m − 1 with (n − k) ≤ m·tec parity check bits and dmin ≥ 2tec + 1.

4.4.1 BCH Code Design

If a primitive element α of GF(2^m) is chosen, then the generator polynomial g(x) of the tec-error-correcting binary BCH code of length 2^m − 1 is the minimum-degree polynomial over GF(2) having α, α^2, α^3, ..., α^{2tec} as roots. Let ϕ_i(x) be the minimal polynomial of α^i, 1 ≤ i ≤ 2tec, for the tec-error-correcting binary BCH code.

Then, the generator polynomial g(x) is the least common multiple (LCM) of ϕ1(x), ϕ2(x), ..., ϕ_{2tec}(x), i.e.,

g(x) = LCM{ϕ1(x), ϕ2(x), ..., ϕ_{2tec}(x)}     (4.23)

Let c(x) = c0 + c1 x + c2 x^2 + ··· + c_{2^m−2} x^{2^m−2} be a code polynomial. Since c(x) is divisible by g(x), a root of g(x) is also a root of c(x). Hence, for 1 ≤ i ≤ 2tec,

c(α^i) = c0 + c1 α^i + c2 α^{2i} + ··· + c_{2^m−2} α^{(2^m−2)i} = 0     (4.24)
Equation (4.24) can be rewritten in matrix form as


0 1
1
B ai C
B C
ðc0 ; c1 ; . . .; c2m 2 Þ  B
B
a2i C¼0
C ð4:25Þ
@ .. A
.
að 2 2Þi
m

It follows that c  H T ¼ 0 for every code word c ¼ ðc0 ; c1 ; . . .; c2m 2 Þ in the tec
error-correcting BCH code of length 2m  1 generated by gðxÞ: Hence, for gðxÞ, the
corresponding 2t  ð2m  1Þ matrix over GF(2m) can be formed as
2 3
að2 2 Þ
m
1 a a2 ...
2 2 2
m
61 a2 ð a2 Þ
2
... ða Þ 7
6 7
6 a3 ð a3 Þ
2
... ð a3 Þ
2m 2 7
H ¼ 61 7 ð4:26Þ
6. .. .. .. .. 7
4 .. . . . . 5
2 2m 2
1 a2i ða2t Þ . . . ða2t Þ

If ϕ(x) is the minimal polynomial of an element β of GF(2^m) and l (l ≤ m) is the smallest positive integer such that β^{2^l} = β, then from Theorem 3.7, ϕ(x), of degree m or less, is given by

ϕ(x) = ∏_{i=0}^{l−1} (x − β^{2^i})     (4.27)

The conjugates of β are the roots of ϕ_β(x) of the form β^{2^i}, 1 ≤ i ≤ l − 1. From Theorem 3.6, roots belonging to the same conjugacy class have the same minimal polynomial.
The stepwise procedure to find the minimal polynomial ϕ_β(x) is as follows:
Step 1: Determine the conjugacy class of β.
Step 2: Obtain ϕ_β(x) using Eq. (4.27).

The design procedure for a tec-error-correcting binary BCH code of length n is as follows:
1. Choose a primitive element α in the field GF(2^m).
2. Select the 2tec consecutive powers α, α^2, ..., α^{2tec} of α.
3. Obtain the minimal polynomials for all the 2tec consecutive powers of α, noting that roots in the same conjugacy class share the same minimal polynomial.
4. Obtain the generator polynomial g(x) by taking the LCM of the minimal polynomials of the 2tec consecutive powers of α.
The construction of BCH codes is illustrated through the following examples.
Example 4.16 Compute a generator polynomial for a binary BCH code of length 15 and minimum distance 3, and find the code rate.
Solution Since 15 is of the form 2^m − 1, the BCH code is primitive. Let α be a primitive element in the field GF(16) generated by the primitive polynomial 1 + x + x^4. The elements of the field GF(16) are given in Table 3.3. Since the code is to be single error correcting (minimum distance 3), the generator polynomial must have α and α^2 as roots. Since α and α^2 are conjugate elements, they have the same minimal polynomial, which is 1 + x + x^4. The generator polynomial is thus

g(x) = 1 + x + x^4

Since the degree of g(x) is 4, the BCH code generated by g(x) is a (15,11) code. The rate of the code is

R = k/n = 11/15
Example 4.17 Design a double-error-correcting binary BCH code of length 15.
Solution Since 15 is of the form 2^m − 1, the BCH code is primitive. Let α be a primitive element in the field GF(16) generated by the primitive polynomial 1 + x + x^4. The elements of the field GF(16) are given in Table 3.3.
Since the code is to be double error correcting, the generator polynomial must have α, α^2, α^3, α^4 as roots.
The elements α, α^2, and α^4 are conjugates and have the same minimal polynomial, which is 1 + x + x^4. Thus,

ϕ_α(x) = ϕ_{α^2}(x) = ϕ_{α^4}(x) = 1 + x + x^4

By letting β = α^3,

β^{2^4} = (α^3)^{16} = α^48 = α^45 α^3 = 1 · α^3 = α^3

Therefore, l = 4, and from Eq. (4.27), the minimal polynomial ϕ_{α^3}(x) is given by

ϕ_{α^3}(x) = ∏_{i=0}^{3} (x − β^{2^i})
           = (x − α^3)(x − α^6)(x − α^12)(x − α^24)
           = (x − α^3)(x − α^6)(x − α^12)(x − α^9)
           = 1 + x + x^2 + x^3 + x^4

Hence,

g(x) = (1 + x + x^4)(1 + x + x^2 + x^3 + x^4) = 1 + x^4 + x^6 + x^7 + x^8

Since the degree of g(x) is 8, the BCH code generated by g(x) is a (15,7) code with minimum distance 5.
Example 4.18 Design a triple-error-correcting binary BCH code of length 63.
Solution Let α be a primitive element in the field GF(64) generated by the primitive polynomial 1 + x + x^6. The elements of the field GF(64) are given in Table 3.8.
Since the code is to be triple error correcting, the generator polynomial must have α, α^2, α^3, α^4, α^5, α^6 as roots. The elements α, α^2, and α^4 are conjugates and have the same minimal polynomial, which is 1 + x + x^6. Thus,

ϕ_α(x) = ϕ_{α^2}(x) = ϕ_{α^4}(x) = 1 + x + x^6

The elements α^3 and α^6 are conjugates and have the same minimal polynomial. By letting β = α^3,

β^{2^6} = (α^3)^{64} = α^192 = α^63 α^63 α^63 α^3 = 1 · α^3 = α^3

Therefore, l = 6, and from Eq. (4.27), the minimal polynomials ϕ_{α^3}(x) and ϕ_{α^6}(x) are the same and are given by

ϕ_{α^3}(x) = ϕ_{α^6}(x) = ∏_{i=0}^{5} (x − β^{2^i})
           = (x − α^3)(x − α^6)(x − α^12)(x − α^24)(x − α^48)(x − α^96)
           = (x − α^3)(x − α^6)(x − α^12)(x − α^24)(x − α^48)(x − α^33)
           = 1 + x + x^2 + x^4 + x^6

By letting β = α^5,

β^{2^6} = (α^5)^{64} = α^320 = α^63 α^63 α^63 α^63 α^63 α^5 = 1 · α^5 = α^5

Therefore, l = 6, and from Eq. (4.27), the minimal polynomial ϕ_{α^5}(x) is given by

ϕ_{α^5}(x) = ∏_{i=0}^{5} (x − β^{2^i})
           = (x − α^5)(x − α^10)(x − α^20)(x − α^40)(x − α^80)(x − α^160)
           = (x − α^5)(x − α^10)(x − α^20)(x − α^40)(x − α^17)(x − α^34)
           = 1 + x + x^2 + x^5 + x^6

It follows from Eq. (4.23) that the generator polynomial of the triple-error-correcting BCH code of length 63 is given by

g(x) = (1 + x + x^6)(1 + x + x^2 + x^4 + x^6)(1 + x + x^2 + x^5 + x^6)
     = 1 + x + x^2 + x^3 + x^6 + x^7 + x^9 + x^15 + x^16 + x^17 + x^18

Since the degree of g(x) is 18, the BCH code generated by g(x) is a (63,45) code with minimum distance 7.
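The generator polynomials of Examples 4.16–4.18 can be cross-checked with the Communications Toolbox function bchgenpoly (a sketch; the coefficients are returned as a GF(2) row in descending order, assuming the toolbox's default primitive polynomials):

bchgenpoly(15, 11)   % x^4 + x + 1
bchgenpoly(15, 7)    % x^8 + x^7 + x^6 + x^4 + 1
bchgenpoly(63, 45)   % the degree-18 generator of the triple-error-correcting code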
Example 4.19 Construct generator and parity check matrices for a single-error-correcting BCH code of length 15.
Solution A parity check matrix for this code is obtained by using Eq. (4.26) as

H = [1  α    α^2  ...  α^13  α^14]
    [1  α^2  α^4  ...  α^11  α^13]

This parity check matrix has redundancy because α and α^2 are conjugates. Hence, the parity check matrix without redundancy is

H = [1  α  α^2  ...  α^13  α^14]

Note that the entries of H are elements in GF(2^4). Each element in GF(2^4) can be represented by a 4-tuple over GF(2). If each entry of H is replaced by its corresponding 4-tuple over GF(2) arranged in column form, we obtain a binary parity check matrix for the code as follows:

H = [1 0 0 0 1 0 0 1 1 0 1 0 1 1 1]
    [0 1 0 0 1 1 0 1 0 1 1 1 1 0 0]
    [0 0 1 0 0 1 1 0 1 0 1 1 1 1 0]
    [0 0 0 1 0 0 1 1 0 1 0 1 1 1 1]

The corresponding generator matrix is

G = [1 1 0 0 1 0 0 0 0 0 0 0 0 0 0]
    [0 1 1 0 0 1 0 0 0 0 0 0 0 0 0]
    [0 0 1 1 0 0 1 0 0 0 0 0 0 0 0]
    [1 1 0 1 0 0 0 1 0 0 0 0 0 0 0]
    [1 0 1 0 0 0 0 0 1 0 0 0 0 0 0]
    [0 1 0 1 0 0 0 0 0 1 0 0 0 0 0]
    [1 1 1 0 0 0 0 0 0 0 1 0 0 0 0]
    [0 1 1 1 0 0 0 0 0 0 0 1 0 0 0]
    [1 1 1 1 0 0 0 0 0 0 0 0 1 0 0]
    [1 0 1 1 0 0 0 0 0 0 0 0 0 1 0]
    [1 0 0 1 0 0 0 0 0 0 0 0 0 0 1]

4.4.2 Berlekamp's Algorithm for Binary BCH Codes Decoding

Let the code polynomial be c(x) = c_{n−1}x^{n−1} + c_{n−2}x^{n−2} + ··· + c1 x + c0, the error polynomial e(x) = e_{n−1}x^{n−1} + e_{n−2}x^{n−2} + ··· + e1 x + e0, and the received polynomial r(x) = r_{n−1}x^{n−1} + r_{n−2}x^{n−2} + ··· + r1 x + r0. Then, r(x) can be written as

r(x) = c(x) + e(x)     (4.28)

Let S = [S1 S2 ... S_{2tec}] be the syndrome sequence with 2tec known syndrome components, where tec stands for the error-correcting capability. Then, the syndrome polynomial can be written as

S(x) = S_{2tec} x^{2tec} + S_{2tec−1} x^{2tec−1} + ··· + S1 x     (4.29)

By evaluating the received polynomial at the 2tec zeros of g(x), the syndromes S1, S2, ..., S_{2tec} can be obtained. Thus,

S_i = r(α^i) = r_{n−1}(α^i)^{n−1} + r_{n−2}(α^i)^{n−2} + ··· + r1 α^i + r0   for 1 ≤ i ≤ 2tec     (4.30)

Since c(α^i) = 0 for 1 ≤ i ≤ 2tec, the syndrome sequence S1, S2, ..., S_{2tec} can be rewritten as

S_i = e(α^i) = e_{n−1}(α^i)^{n−1} + e_{n−2}(α^i)^{n−2} + ··· + e1 α^i + e0   for 1 ≤ i ≤ 2tec     (4.31)

Assuming that the received word r has v errors in positions j1, j2, ..., jv, the error locator polynomial can be expressed as

Λ(x) = Λ_v x^v + Λ_{v−1} x^{v−1} + ··· + Λ1 x + 1 = (1 − α^{j1} x)(1 − α^{j2} x) ··· (1 − α^{jv} x)     (4.32)

The error magnitude polynomial is defined as

Ω(x) = Λ(x)(1 + S(x)) mod x^{2tec+1}     (4.33)

This is useful in non-binary decoding.
Berlekamp's algorithm proceeds for binary decoding of BCH codes iteratively by breaking the problem down into a series of smaller problems of the form

[1 + S(x)]Λ^(2n)(x) = 1 + Ω2 x^2 + Ω4 x^4 + ··· + Ω_{2n} x^{2n} mod x^{2n+1}     (4.34)

where n runs from 1 to tec. The flowchart of Berlekamp's iterative algorithm is shown in Fig. 4.6.

4.4.3 Chien Search Algorithm

A Chien search circuit is shown in Fig. 4.7. The Chien search is a systematic means of evaluating the error locator polynomial at all elements in the field GF(2^m). Each coefficient of the error locator polynomial is repeatedly multiplied by the corresponding power α^i, where α is primitive in GF(2^m), and each set of products is summed to obtain A_i = Λ(α^i) − 1. If α^i is a root of Λ(x), then A_i = 1, and an error is indicated at the coordinate associated with the root inverse α^{−i} = α^{n−i}.
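A minimal MATLAB sketch of a Chien search over GF(2^4), assuming the Communications Toolbox and applied to the error locator polynomial Λ(x) = 1 + x + α^5 x^3 found in Example 4.20 below:

m = 4; n = 2^m - 1;
a = gf(2, m);                            % primitive element of GF(16)
Lam = [a^5 gf(0,m) gf(1,m) gf(1,m)];     % a^5 x^3 + x + 1, descending order
for i = 0:n-1
    if polyval(Lam, a^i) == 0            % a^i is a root of Lambda(x)
        fprintf('error at position %d\n', mod(n - i, n));  % root inverse
    end
end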
Example 4.20 Let the transmission code be the triple-error-correcting binary BCH code of length 15. The generator polynomial is g(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^10. Use Berlekamp's algorithm to decode the received vector r = (000101000000100).

Fig. 4.6 Berlekamp iterative algorithm for decoding binary BCH codes

Solution For triple error correction, the generator polynomial g(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^10 has roots which include six consecutive powers of α: α, α^2, α^3, α^4, α^5, α^6, where α is primitive in GF(16).
The received vector is r = (000101000000100) ↔ r(x) = x^3 + x^5 + x^12.
The syndrome polynomial is written as

Fig. 4.7 Chien search circuit

S(x) = S1 x + S2 x^2 + S3 x^3 + S4 x^4 + S5 x^5 + S6 x^6
S1 = r(α) = α^3 + α^5 + α^12 = 1
S2 = r(α^2) = α^6 + α^10 + α^24 = 1
S3 = r(α^3) = α^9 + α^15 + α^36 = α^10
S4 = r(α^4) = α^12 + α^20 + α^48 = 1
S5 = r(α^5) = α^15 + α^25 + α^60 = α^10
S6 = r(α^6) = α^18 + α^30 + α^72 = α^5

Since α^15 = 1,

α^24 = α^9,  α^36 = α^6,  α^20 = α^5,  α^48 = α^3,  α^25 = α^10,
α^60 = 1,  α^18 = α^3,  α^30 = 1,  α^72 = α^12

S(x) = x + x^2 + α^10 x^3 + x^4 + α^10 x^5 + α^5 x^6

Applying Berlekamp's algorithm, we obtain the following:

n    Λ^(2n)(x)              B(x)^(2n)           Δ^(2n)
0    1                      1                   1
1    1 + x                  x                   α^5
2    1 + x + α^5 x^2        x(1 + x)/α^5        α^10
3    1 + x + α^5 x^3        ...                 ...

The error locator polynomial is then

Λ(x) = 1 + x + α^5 x^3 = (1 + α^3 x)(1 + α^5 x)(1 + α^12 x)

indicating errors at the positions corresponding to α^3, α^5, and α^12. The corrected received word is then

c = (000000000000000) ↔ c(x) = 0

Example 4.21 Let the transmission code be the double-error-correcting, narrow-sense, binary BCH code of length 15. The generator polynomial is g(x) = 1 + x^4 + x^6 + x^7 + x^8. Use Berlekamp's algorithm to decode the received vector r = (000110001100000).
Solution For double error correction, the generator polynomial g(x) = 1 + x^4 + x^6 + x^7 + x^8 has roots which include four consecutive powers of α: α, α^2, α^3, α^4, where α is primitive in GF(16). The received vector is

r = (000110001100000) ↔ r(x) = x^3 + x^4 + x^8 + x^9

The syndrome polynomial is written as

S(x) = S1 x + S2 x^2 + S3 x^3 + S4 x^4
S1 = r(α) = α^3 + α^4 + α^8 + α^9 = α^2
S2 = r(α^2) = α^6 + α^8 + α^16 + α^18 = α^4
S3 = r(α^3) = α^9 + α^12 + α^24 + α^27 = 0
S4 = r(α^4) = α^12 + α^16 + α^32 + α^36 = α^8

Since α^15 = 1,

α^16 = α,  α^18 = α^3,  α^24 = α^9,  α^27 = α^12,  α^32 = α^2,  α^36 = α^6

S(x) = α^2 x + α^4 x^2 + α^8 x^4

Applying Berlekamp's algorithm, we obtain the following:

n    Λ^(2n)(x)                 B(x)^(2n)    Δ^(2n)
0    1                         1            α^2
1    1 + α^2 x                 α^13 x       α^6
2    1 + α^2 x + α^19 x^2      ...          ...

The error locator polynomial is then

Λ(x) = 1 + α^2 x + α^19 x^2 = (1 + α^7 x)(1 + α^12 x)

indicating errors at the positions corresponding to α^7 and α^12. The corrected received word is then

c = (000110011100100) ↔ c(x) = x^3 + x^4 + x^7 + x^8 + x^9 + x^12

4.5 Reed–Solomon Codes

The RS codes are the most powerful non-binary block codes, and they have seen widespread application. These codes work with symbols that consist of several bits; a common symbol size for non-binary codes is 8 bits, or a byte. The RS codes are good at correcting burst errors because the correction is performed at the symbol level.
A given Reed–Solomon code is indicated by referring to it as an (n, k) code. The parameter n indicates the code word length in terms of the number of symbols in the code word, and the parameter k indicates the number of message symbols in the code word. The number of parity symbols added is thus (n − k). The error-correcting capability of the code is tec = (n − k)/2, and the minimum distance of a Reed–Solomon code is (n − k + 1).

4.5.1 Reed–Solomon Encoder

Generator Polynomial
A general form of the polynomial g(x) used in RS code generation is

g(x) = (x − α^i)(x − α^{i+1}) ··· (x − α^{i+2tec−1})     (4.35)

where α is a primitive element of the Galois field. The code word c(x) is constructed using

c(x) = g(x) · i(x)     (4.36)

where i(x) is the information polynomial. The code word c(x) is exactly divisible by the generator polynomial g(x). The remainder obtained by dividing i(x) · x^{n−k} by g(x) gives the parity polynomial p(x):

p(x) = i(x) · x^{n−k} mod g(x)     (4.37)

The parity symbols are computed by performing a polynomial division using GF algebra. The steps involved in this computation are as follows:
Step 1: Multiply the message symbols by x^{n−k} (this shifts the message symbols to the left to make room for the (n − k) parity symbols).
Step 2: Divide the message polynomial by the code generator polynomial using GF algebra.
Step 3: The parity symbols are the remainder of this division.
These steps are accomplished in hardware using a shift register with feedback; the architecture of the encoder is shown in Fig. 4.8. g(x) is the generator polynomial used to generate the parity symbols p(x). The number of registers used is equal to n − k, and the parity symbols are generated by serial entry of the information symbols of i(x).
The resultant code word is given by

c(x) = i(x) · x^{n−k} + p(x)     (4.38)
Fig. 4.8 Reed–Solomon encoder
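A sketch of RS encoding with the Communications Toolbox function rsenc, which uses a narrow-sense generator and appends the parity symbols to the message (the message values below are arbitrary placeholders):

n = 15; k = 11;
msg = gf([1 2 3 4 5 6 7 8 9 10 11], 4);   % eleven GF(16) message symbols
code = rsenc(msg, n, k)                   % one (15,11) RS code word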

Example 4.22 Construct a generator polynomial for a (15,11) Reed–Solomon code with elements in GF(2^4).
Solution A (15,11) Reed–Solomon code has minimum distance 5; thus, it is double error correcting and must have 4 consecutive powers of α as zeros. The generator polynomial is constructed as follows, using the representation for GF(16) over GF(2):

g(x) = (x − α)(x − α^2)(x − α^3)(x − α^4)
     = (x^2 + (α^2 + α)x + α^3)(x^2 + (α^3 + α^4)x + α^7)
     = (x^2 + α^5 x + α^3)(x^2 + α^7 x + α^7)
     = x^4 + (α^5 + α^7)x^3 + (α^3 + α^12 + α^7)x^2 + (α^10 + α^12)x + α^10
     = x^4 + α^13 x^3 + α^6 x^2 + α^3 x + α^10

Example 4.23 Compute a generator polynomial for a double-error-correcting Reed–Solomon code of length 31.
Solution Let α be a root of the primitive binary polynomial x^5 + x^2 + 1 and thus a primitive 31st root of unity. The resulting code is to be a double-error-correcting code; it must have 4 consecutive powers of α as zeros. A narrow-sense generator is constructed as follows, using the representation for GF(32):

g(x) = (x − α)(x − α^2)(x − α^3)(x − α^4)
     = (x^2 + (α^2 + α)x + α^3)(x − α^3)(x − α^4)
     = (x^2 + α^19 x + α^3)(x − α^3)(x − α^4)
     = (x^3 + (α^19 + α^3)x^2 + (α^3 + α^22)x + α^6)(x − α^4)
     = (x^3 + α^12 x^2 + α^14 x + α^6)(x − α^4)
     = x^4 + (α^12 + α^4)x^3 + (α^14 + α^16)x^2 + (α^6 + α^18)x + α^10
     = x^4 + α^24 x^3 + α^19 x^2 + α^29 x + α^10

Example 4.24 Compute a generator polynomial for a triple-error-correcting Reed–Solomon code of length 15.
Solution

g(x) = (x + α)(x + α^2)(x + α^3)(x + α^4)(x + α^5)(x + α^6)
     = (x^2 + (α^2 + α)x + α^3)(x^2 + (α^3 + α^4)x + α^7)(x^2 + (α^6 + α^5)x + α^11)
     = (x^2 + α^5 x + α^3)(x^2 + α^7 x + α^7)(x^2 + α^9 x + α^11)
     = (x^4 + (α^5 + α^7)x^3 + (α^3 + α^12 + α^7)x^2 + (α^10 + α^12)x + α^10)(x^2 + α^9 x + α^11)
     = (x^4 + α^13 x^3 + α^6 x^2 + α^3 x + α^10)(x^2 + α^9 x + α^11)
     = x^6 + (α^9 + α^13)x^5 + (α^11 + α^22 + α^6)x^4 + (α^24 + α^15 + α^3)x^3 + (α^10 + α^17 + α^12)x^2 + (α^14 + α^19)x + α^21
     = x^6 + α^10 x^5 + α^14 x^4 + α^4 x^3 + α^6 x^2 + α^9 x + α^6
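The generators of Examples 4.22–4.24 can be cross-checked with the Communications Toolbox function rsgenpoly (a sketch; the output coefficients are Galois-field values in descending order, assuming the toolbox's default primitive polynomials):

rsgenpoly(15, 11)   % double-error-correcting (15,11) generator, Example 4.22
rsgenpoly(31, 27)   % double-error-correcting (31,27) generator, Example 4.23
rsgenpoly(15, 9)    % triple-error-correcting (15,9) generator, Example 4.24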

Basic Properties of Reed–Solomon Codes

1. Non-binary BCH codes are referred to as Reed–Solomon codes.
2. The minimum distance of a Reed–Solomon code is (n − k + 1).
3. RS codes are maximum distance separable (MDS). The singleton bound implies that dmin ≤ (n − k + 1); an (n, k) RS code is called MDS because the singleton bound is satisfied with equality.
4. The weight distribution polynomial of an RS code is known. The weight distribution of an RS code with symbols from GF(q), block length n = q − 1, and minimum distance dmin is given by

W_i = (n choose i) n Σ_{j=0}^{i−dmin} (−1)^j ((i−1) choose j) (n + 1)^{i−j−dmin},   dmin ≤ i ≤ n     (4.39)

4.5.2 Decoding of Reed–Solomon Codes

The locations of the errors can be found from the error locator polynomial Λ(x). Once the locations of the errors are known, the magnitudes of the errors are found by Forney's algorithm [1]:

e_k = x_k Ω(x_k^{−1}) / Λ'(x_k^{−1})     (4.40)

where e_k represents the error magnitude at the kth error location x_k, and Λ'(x) stands for the formal derivative of the error locator polynomial Λ(x). If the locator polynomial Λ(x) = Λ_v x^v + Λ_{v−1} x^{v−1} + ··· + Λ1 x + 1 is a polynomial with coefficients in GF(q), the formal derivative Λ'(x) is defined as

Λ'(x) = vΛ_v x^{v−1} + (v − 1)Λ_{v−1} x^{v−2} + ··· + Λ1     (4.41)

The decoding of an RS code goes through the following six steps:
Step 1: Compute the syndromes from the received polynomial r(x).
Step 2: Apply the Berlekamp–Massey algorithm to compute the error locator polynomial Λ(x).
Step 3: Compute the error magnitude polynomial

Ω(x) = Λ(x)(1 + S(x)) mod x^{2tec+1}

Step 4: Find the roots of Λ(x); the inverses of the roots indicate the locations of the errors.
Step 5: Compute the error magnitudes and determine the error polynomial e(x).
Step 6: Subtract e(x) from the received polynomial to correct the errors.
Syndrome generation is similar to parity calculation. A Reed–Solomon code word has 2tec syndromes that depend only on the errors (not on the transmitted code word). The syndrome sequence can be computed for the received word polynomial r(x) by substituting the 2tec roots of the generator polynomial g(x) into r(x). The Berlekamp–Massey algorithm or Euclid's algorithm can be used to find the error locator polynomial. Euclid's algorithm is widely used in practice as it is easy to implement. However, hardware and software implementations of the

Berlekamp–Massey algorithm are more efficient [2, 3]. Once the error locator
polynomial is known, the error locations can be found by using the Chien search
algorithm [4].
The Berlekamp–Massey Decoding Algorithm
The problem of decoding RS codes can be viewed as finding a linear feedback shift register (LFSR) of minimal length such that the first 2tec elements in the LFSR output sequence are the syndromes S1, S2, ..., S_{2tec}. The error locator polynomial Λ(x) is given by the taps of the LFSR.
The flowchart of the Berlekamp–Massey iterative algorithm is shown in Fig. 4.9. Here, Λ^(n)(x) is the error locator polynomial at the nth iteration step, B(x) stands for the connection polynomial, L_n represents the length of the LFSR at index n, and d_n is the discrepancy. The coefficients of Λ^(n)(x) specify the taps of the LFSR of length L_n. The Berlekamp–Massey algorithm initially (i.e., at n = 0) sets Λ^(0)(x) = 1 and L_0 = 0, and also sets B(x) = x. At every iteration, a new syndrome component is read, and the discrepancy d_n is computed by subtracting the nth output of the LFSR defined by Λ^(n−1)(x) from the nth syndrome. If the discrepancy is not equal to zero, a modified error locator polynomial is constructed using the discrepancy and the connection polynomial B(x). The length of the LFSR is then tested: if 2L_{n−1} is less than n, the length of the LFSR is updated and the connection polynomial B(x) is replaced by the scaled previous error locator polynomial; otherwise, B(x) is simply reset to xB(x).
If the discrepancy is equal to zero, then the connection polynomial B(x) is reset to xB(x), and the previous error locator polynomial is carried over to the next iteration. The process continues, and the algorithm stops at the end of iteration n = 2tec, with Λ^(2tec)(x) taken as the error locator polynomial Λ(x).
Example 4.25 Let the transmission code be the double-error-correcting RS code of length 7. Use the Berlekamp–Massey algorithm to decode the received vector r = (0 0 α^5 1 α^2 0 α^2).
Solution
Step 1: The received polynomial is

r(x) = α^5 x^2 + x^3 + α^2 x^4 + α^2 x^6,   i.e., r = (0 0 α^5 1 α^2 0 α^2)

For a double-error-correcting code, the syndrome polynomial is

S(x) = S1 x + S2 x^2 + S3 x^3 + S4 x^4

Fig. 4.9 Berlekamp–Massey iterative algorithm


The syndromes S1, S2, S3, and S4 for the above received polynomial are computed using the representation for GF(8) as

S1 = r(α) = α^6
S2 = r(α^2) = α^3
S3 = r(α^3) = α^4
S4 = r(α^4) = α^3

Thus,

S(x) = α^6 x + α^3 x^2 + α^4 x^3 + α^3 x^4
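These syndromes can be verified numerically; a sketch over GF(8), assuming the Communications Toolbox:

m = 3; a = gf(2, m);                    % primitive element of GF(8)
r = [0 0 a^5 1 a^2 0 a^2];              % r0 ... r6
S = gf(zeros(1, 4), m);
for i = 1:4
    S(i) = polyval(fliplr(r), a^i);     % polyval expects descending order
end
S                                       % equals (a^6, a^3, a^4, a^3)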

Step 2: The Berlekamp–Massey algorithm proceeds as follows:

n    S_n    Λ^(n)(x)                d_n                                         L_n    B(x)
0    –      1                       –                                           0      x
1    α^6    1 + α^6 x               S1 − 0 = α^6                                1      αx
2    α^3    1 + α^4 x               S2 − α^6·α^6 = S2 − α^5 = α^2               1      αx^2
3    α^4    1 + α^4 x + α^6 x^2     S3 − α^4·α^3 = S3 − 1 = α^5                 2      α^2 x + α^6 x^2
4    α^3    1 + α^2 x + αx^2        S4 − (α^4·α^4 + α^6·α^3) = S4 − α^4 = α^6   –      –

The error locator polynomial is then

Λ(x) = 1 + α^2 x + αx^2

Step 3: The error magnitude polynomial is

Ω(x) = Λ(x)(1 + S(x)) mod x^{2tec+1}
     = (1 + α^2 x + αx^2)(1 + α^6 x + α^3 x^2 + α^4 x^3 + α^3 x^4) mod x^5
     = 1 + x + α^3 x^2

Step 4:

Λ(x) = 1 + α^2 x + αx^2 = (1 + α^3 x)(1 + α^5 x) = 0

The factorization of the error locator polynomial indicates that there are errors in the third and fifth positions of the received vector.

Hence, the error polynomial e(x) is

e(x) = e3 x^3 + e5 x^5

Step 5: From the error locator polynomial, it is known that the error positions are at locations 3 and 5. Now, the error magnitudes can be computed by using the error evaluator polynomial Ω(x) and the derivative of the error locator polynomial Λ(x). The error magnitudes are given by

e_k = x_k Ω(x_k^{−1}) / Λ'(x_k^{−1})

The magnitudes of the errors are found to be

e3 = x3 Ω(x3^{−1}) / Λ'(x3^{−1})

Since Λ'(x) = α^2,

e3 = x3 (1 + x3^{−1} + α^3 x3^{−2}) / α^2

where x3 = α^3. Thus,

e3 = (α^3 + 1 + 1)/α^2 = α^3/α^2 = α

Similarly,

e5 = x5 (1 + x5^{−1} + α^3 x5^{−2}) / α^2

where x5 = α^5. Hence,

e5 = (α^5 + 1 + α^5)/α^2 = 1/α^2 = α^5

Thus, the error pattern is

e(x) = αx^3 + α^5 x^5

Step 6:

c(x) = r(x) − e(x) = (α^5 x^2 + x^3 + α^2 x^4 + α^2 x^6) + (αx^3 + α^5 x^5)
     = α^5 x^2 + α^3 x^3 + α^2 x^4 + α^5 x^5 + α^2 x^6

Example 4.26 Consider a triple-error-correcting RS code of length 15. Decode the received vector r = (000α^7 00α^3 00000α^4 00) using the Berlekamp–Massey algorithm.
Solution
Step 1: The received polynomial is

r(x) = α^7 x^3 + α^3 x^6 + α^4 x^12,   i.e., r = (000α^7 00α^3 00000α^4 00)

The following syndromes are computed using the representation of GF(16) over GF(2). For triple error correction, the roots of the generator polynomial include α, α^2, α^3, α^4, α^5, α^6. Thus,

S1 = r(α) = α^10 + α^9 + α = α^12
S2 = r(α^2) = α^13 + 1 + α^13 = 1
S3 = r(α^3) = α + α^6 + α^10 = α^14
S4 = r(α^4) = α^4 + α^12 + α^7 = α^10
S5 = r(α^5) = α^7 + α^3 + α^4 = 0
S6 = r(α^6) = α^10 + α^9 + α = α^12

S(x) = α^12 x + x^2 + α^14 x^3 + α^10 x^4 + α^12 x^6

Step 2: The Berlekamp–Massey algorithm proceeds as follows:

n | Sn | Λ^(n)(x) | dn | Ln | B(x)
0 | ... | 1 | ... | 0 | x
1 | α^12 | 1 + α^12x | S1 − 0 = α^12 | 1 | α^3x
2 | 1 | 1 + α^3x | S2 − α^9 = α^7 | 1 | α^3x^2
3 | α^14 | 1 + α^3x + α^3x^2 | S3 − α^3 = 1 | 2 | x + α^3x^2
4 | α^10 | 1 + α^4x + α^12x^2 | S4 − α^6 = α^7 | 2 | x^2 + α^3x^3
5 | 0 | 1 + α^4x + α^3x^2 + α^13x^3 | S5 − α^10 = α^10 | 3 | α^5x + α^9x^2 + α^2x^3
6 | α^12 | 1 + α^7x + α^4x^2 + α^6x^3 | S6 − α = α^13 | ... | ...

The error locator polynomial is then

Λ(x) = 1 + α^7x + α^4x^2 + α^6x^3 = (1 + α^3x)(1 + α^6x)(1 + α^12x)

Step 3: The error magnitude polynomial is

Ω(x) = Λ(x)[1 + S(x)] mod x^7
     = (1 + α^7x + α^4x^2 + α^6x^3)(1 + α^12x + x^2 + α^14x^3 + α^10x^4 + α^12x^6) mod x^7
     = (1 + α^2x + x^2 + α^6x^3 + x^7 + αx^8 + α^3x^9) mod x^7
     = 1 + α^2x + x^2 + α^6x^3

Step 4:

Λ(x) = (1 + α^3x)(1 + α^6x)(1 + α^12x) = 0

The factorization of the error locator polynomial indicates that there are errors in positions 3, 6, and 12 of the received vector. Hence, the error polynomial e(x) is

e(x) = e12x^12 + e6x^6 + e3x^3

Step 5: From the error locator polynomial, it is known that the error positions are at locations 3, 6, and 12. Now, the error magnitudes can be computed by using the error evaluator polynomial Ω(x) and the derivative of the error locator polynomial Λ(x). The error magnitudes are given by

ek = −xkΩ(xk^−1)/Λ'(xk^−1)

The magnitudes of the errors are found to be

e3 = −x3Ω(x3^−1)/Λ'(x3^−1)

Since Λ'(x3^−1) = α^7 + α^6x3^−2,

e3 = x3(1 + α^2x3^−1 + x3^−2 + α^6x3^−3)/(α^7 + α^6x3^−2)

where x3 = α^3. Thus,

e3 = α^3(1 + α^14 + α^9 + α^12)/(1 + α^7) = α^3·α^13/α^9 = α^7

Similarly,

e6 = α^3, e12 = α^4.

Thus, the error pattern is

e(x) = α^7x^3 + α^3x^6 + α^4x^12

Step 6: The corrected received word is c(x) = r(x) − e(x) = 0, i.e.,

c = (000000000000000)

Example 4.27 Let the transmission code be the triple-error-correcting RS code of length 31. Decode the received vector r = (0 0 α^8 0 0 α^2 0 0 0 0 α 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0) using the Berlekamp–Massey algorithm.

Solution
Step 1: The received polynomial is

r(x) = α^8x^2 + α^2x^5 + αx^10

The following syndromes are computed using the representation of GF(32) over GF(2). For triple error correction, the roots of the generator polynomial include α, α^2, α^3, α^4, α^5, α^6. Thus,

S1 = r(α) = α^10 + α^7 + α^11 = α
S2 = r(α^2) = α^12 + α^12 + α^21 = α^21
S3 = r(α^3) = α^14 + α^17 + α^31 = α^23
S4 = r(α^4) = α^16 + α^22 + α^10 = α^15
S5 = r(α^5) = α^18 + α^27 + α^20 = α^2
S6 = r(α^6) = α^20 + α + α^30 = α^13

S(x) = αx + α^21x^2 + α^23x^3 + α^15x^4 + α^2x^5 + α^13x^6.

Step 2: The Berlekamp–Massey algorithm proceeds as follows:

n | Sn | Λ^(n)(x) | dn | Ln | B(x)
0 | ... | 1 | ... | 0 | x
1 | α | 1 + αx | S1 − 0 = α | 1 | α^30x
2 | α^21 | 1 + α^20x | S2 − α^2 = α^13 | 1 | α^30x^2
3 | α^23 | 1 + α^20x + α^23x^2 | S3 − α^10 = α^24 | 2 | α^7x + α^27x^2
4 | α^15 | 1 + α^17x + α^15x^2 | S4 − α^12 − α^13 = α^8 | 2 | α^7x^2 + α^27x^3
5 | α^2 | 1 + α^17x + α^22x^2 + α^26x^3 | S5 − α − α^7 = α^30 | 3 | αx + α^18x^2 + α^16x^3
6 | α^13 | 1 + α^4x + α^5x^2 + α^17x^3 | S6 − α^19 − α^6 − α^18 = α^17 | ... | ...

The error locator polynomial is then

Λ(x) = 1 + α^4x + α^5x^2 + α^17x^3 = (1 + α^2x)(1 + α^5x)(1 + α^10x)

Step 3: The error magnitude polynomial is

Ω(x) = Λ(x)[1 + S(x)] mod x^7
     = (1 + α^4x + α^5x^2 + α^17x^3)(1 + αx + α^21x^2 + α^23x^3 + α^15x^4 + α^2x^5 + α^13x^6) mod x^7
     = 1 + α^30x + α^21x^2 + α^23x^3

Step 4:

Λ(x) = (1 + α^2x)(1 + α^5x)(1 + α^10x) = 0

The factorization of the error locator polynomial indicates that there are errors in the second, fifth, and tenth positions of the received vector. Hence, the error polynomial e(x) is

e(x) = e10x^10 + e5x^5 + e2x^2

Step 5: From the error locator polynomial, it is known that the error positions are at locations 2, 5, and 10. Now, the error magnitudes can be computed by using the error evaluator polynomial Ω(x) and the derivative of the error locator polynomial Λ(x). The error magnitudes are given by

ek = −xkΩ(xk^−1)/Λ'(xk^−1)

The magnitudes of the errors are found to be

e2 = −x2Ω(x2^−1)/Λ'(x2^−1)

Since Λ'(x2^−1) = α^4 + α^17x2^−2,

e2 = x2(1 + α^30x2^−1 + α^21x2^−2 + α^23x2^−3)/(α^4 + α^17x2^−2)

where x2 = α^2. Thus,

e2 = α^2(1 + α^30·α^−2 + α^21·α^−4 + α^23·α^−6)/(α^4 + α^13) = (α^2 + α^30)/α^20 = α^28/α^20 = α^8

Similarly,

e5 = α^2, e10 = α.

Thus, the error pattern is

e(x) = α^8x^2 + α^2x^5 + αx^10

Step 6: The corrected received word is c(x) = r(x) − e(x) = 0, i.e., c is the all-zero code word of length 31.

4.5.3 Binary Erasure Decoding

For binary linear codes, erasure decoding is done by the following three steps:
Step 1: Replace all erasures with zeros in a received word, and decode it to a code
word c0 .
Step 2: Replace all erasures with ones in a received word, and decode it to a code
word c1 .
Step 3: Choose the final code word either c0 or c1 that is closest to the received
word in the Hamming distance.
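A compact MATLAB sketch of this trial-decoding procedure is shown below; decode_fn stands for any available hard-decision decoder for the code and is an assumption for illustration:

```matlab
% Binary erasure decoding by two trial decodings (sketch).
% r: received word with erasures marked as NaN;
% decode_fn: handle to a hard-decision decoder (assumed available).
function c = erasure_decode(r, decode_fn)
r0 = r; r0(isnan(r0)) = 0;       % Step 1: erasures replaced with zeros
r1 = r; r1(isnan(r1)) = 1;       % Step 2: erasures replaced with ones
c0 = decode_fn(r0);
c1 = decode_fn(r1);
known = ~isnan(r);               % compare only in the unerased positions
if sum(c0(known) ~= r(known)) <= sum(c1(known) ~= r(known))
    c = c0;                      % Step 3: closest in Hamming distance
else
    c = c1;
end
end
```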

4.5.4 Non-binary Erasure Decoding

Suppose that a received word has v errors and f erasures. An erasure locator polynomial can be written as

Γ(x) = ∏_{l=1}^{f} (1 − Yl x)    (4.42)

where Yl stands for the erasure locators. Now, the decoding has to find the error locations and compute the error magnitudes of the error locators and the erasure magnitudes of the erasure locators. To find the error locator polynomial, a modified syndrome polynomial is formulated and the Berlekamp–Massey algorithm is applied on the modified syndrome coefficients.
The modified syndrome polynomial is given by

SM(x) ≡ (Γ(x)[1 + S(x)] − 1) mod x^(2t+1)    (4.43)

where the coefficients of the syndrome polynomial S(x) are computed using

Sl = r(α^l)    (4.44)

replacing all the erasures with zeros in the received polynomial r(x).
After finding the error locator polynomial Λ(x), obtain the error magnitude polynomial and the error/erasure locator polynomial as

Ω(x) ≡ Λ(x)[1 + SM(x)] mod x^(2t+1)    (4.45)

Ψ(x) = Λ(x)Γ(x)    (4.46)

Then, using the modified Forney's algorithm, compute the error and erasure magnitudes as given by

ek = −XkΩ(Xk^−1)/Ψ'(Xk^−1)    (4.47a)

fk = −YkΩ(Yk^−1)/Ψ'(Yk^−1)    (4.47b)

Knowing the magnitudes of the error locators and erasure locators, an error/erasure polynomial can be constructed and subtracted from the received polynomial to arrive at the desired code polynomial.

The stepwise procedure using the Berlekamp–Massey algorithm for error/erasure decoding is as follows (a MATLAB sketch of the first three steps is given after the list):
Step 1: Formulate the erasure polynomial Γ(x) using the erasures in the received vector.
Step 2: Obtain the syndrome polynomial S(x), replacing the erasures with zeros.
Step 3: Compute the modified syndrome polynomial using Eq. (4.43).
Step 4: Apply the Berlekamp–Massey algorithm on the modified syndrome coefficients to find the error locator polynomial Λ(x).
Step 5: Find the roots of Λ(x) to determine the error locations.
Step 6: Compute the error magnitudes using Eq. (4.47a), and determine the error polynomial e(x).
Step 7: Compute the erasure magnitudes using Eq. (4.47b), and determine the erasure polynomial f(x).
Step 8: Subtract e(x) and f(x) from the received polynomial to correct the errors.
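Steps 1 to 3 can be sketched in MATLAB with gf polynomial arithmetic as follows; the data are those of Example 4.28 below, and coefficient vectors are in ascending powers of x (variable names are illustrative):

```matlab
% Sketch of Steps 1-3 of error/erasure decoding (Example 4.28 data).
m = 3; t = 2; alpha = gf(2, m);       % GF(8) with primitive element alpha
Gamma = [gf(1, m) alpha^5];           % erasure polynomial 1 + a^5 x
rpoly = gf([0 0 0 0 1 0 1], m);       % r(x) with the erasure set to zero
rpoly(3) = alpha^3;                   % coefficient of x^2 is a^3
S = gf(zeros(1, 2*t), m);             % syndromes S_l = r(alpha^l)
for l = 1:2*t
    S(l) = polyval(rpoly(end:-1:1), alpha^l);  % polyval wants descending powers
end
oneS = [gf(1, m) S];                  % 1 + S(x)
SM   = conv(Gamma, oneS);             % Gamma(x)[1 + S(x)]
SM   = SM(2:2*t+1)                    % modified syndromes: drop the constant 1
                                      % and truncate mod x^(2t+1)
```

Run on these inputs, the sketch returns the modified syndromes α^3, α^6, α^2, α^5 (printed in MATLAB's polynomial-basis integer format), matching Step 3 of Example 4.28.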

Example 4.28 Let the transmission code be the double-error-correcting RS code of length 7. Use the Berlekamp–Massey algorithm to decode the received vector r = (0 0 α^3 0 1 f 1).
Solution
Step 1: The received polynomial is r(x) = α^3x^2 + x^4 + fx^5 + x^6, where f indicates an erasure. This erasure gives the erasure polynomial

Γ(x) = 1 + α^5x

Step 2: Place a zero in the erasure location, and compute the syndromes. For a double-error-correcting code, the syndrome polynomial is

S(x) = S1x + S2x^2 + S3x^3 + S4x^4, with Sl = α^3(α^l)^2 + (α^l)^4 + (α^l)^6

The syndromes S1, S2, S3 and S4 for the above received polynomial are computed using the representation for GF(8) as

S1 = r(α) = α^5 + α^4 + α^6 = α^2
S2 = r(α^2) = α^7 + α^8 + α^12 = α^2
S3 = r(α^3) = α^9 + α^12 + α^18 = α^6
S4 = r(α^4) = α^11 + α^16 + α^24 = 1

Thus, the syndrome polynomial is

S(x) = α^2x + α^2x^2 + α^6x^3 + x^4.

Step 3: Compute the modified syndrome polynomial:

1 + SM(x) ≡ Γ(x)[1 + S(x)] mod x^5
          ≡ (1 + α^5x)(1 + α^2x + α^2x^2 + α^6x^3 + x^4) mod x^5
          ≡ 1 + α^3x + α^6x^2 + α^2x^3 + α^5x^4 mod x^5

SM(x) is thus α^3x + α^6x^2 + α^2x^3 + α^5x^4.
Step 4: The Berlekamp–Massey algorithm proceeds as follows:

n | SnM | Λ^(n)(x) | dn | Ln | B(x)
0 | ... | 1 | ... | 0 | x
1 | α^3 | 1 + α^3x | α^3 | 1 | α^4x
2 | α^6 | 1 + α^3x | 0 | 1 | α^4x^2
3 | α^2 | 1 + α^3x | 0 | 1 | α^4x^3
4 | α^5 | 1 + α^3x | 0 | ... | ...

Step 5: Λ(x) = 1 + α^3x, indicating a single error at X1 = α^3.


Step 6: The error magnitude polynomial, from Eq. (4.45), is

Ω(x) ≡ Λ(x)[1 + SM(x)] mod x^5
     ≡ (1 + α^3x)(1 + α^3x + α^6x^2 + α^2x^3 + α^5x^4) mod x^5
     ≡ 1 + (α^3 + α^3)x + (α^6 + α^6)x^2 + (α^2 + α^2)x^3 + (α^5 + α^5)x^4 mod x^5
     = 1

The error/erasure locator polynomial is

Ψ(x) = Λ(x)Γ(x) = (1 + α^3x)(1 + α^5x) = 1 + α^2x + αx^2

so that Ψ'(x) = α^2. The error magnitude is

ek = −XkΩ(Xk^−1)/Ψ'(Xk^−1) = α^3·1/α^2 = α, i.e., e3 = α

and the erasure magnitude is

fk = −YkΩ(Yk^−1)/Ψ'(Yk^−1) = α^5·1/α^2 = α^3, i.e., f5 = α^3

The corrected code word is

c(x) = r(x) + e(x) + f(x) = (α^3x^2 + x^4 + x^6) + αx^3 + α^3x^5
     = α^3x^2 + αx^3 + x^4 + α^3x^5 + x^6

4.6 Performance Analysis of RS Codes

A RS (n, k) code with minimum distance dmin = n − k + 1 is able to correct t = ⌊(n − k)/2⌋ symbol errors. The bit error probability for RS codes using hard-decision decoding is often approximated by [5]

Pb ≈ (1/n) Σ_{i=t+1}^{n} i·C(n, i)·P^i(1 − P)^(n−i)    (4.48)
4.6.1 BER Performance of RS Codes for BPSK Modulation in AWGN and Rayleigh Fading Channels

The redundancy introduced by an RS code increases the channel symbol transmission rate, reducing the received Eb/N0. For a code with rate R, for BPSK in AWGN and Rayleigh fading channels, Eqs. (2.3) and (2.6) become

P = Q(√(2R·Eb/N0))   (AWGN)
P = (1/2)[1 − √(Rγb/(1 + Rγb))]   (Rayleigh fading)    (4.49)

where R = k/n and γb is the average Eb/N0.

The following MATLAB program is used to compare the theoretical decoding error probability of different RS codes with BPSK modulation in an AWGN channel.

Program 4.1 Program to compare the decoding error probability of different RS codes
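A minimal sketch of such a program, built directly on Eqs. (4.48) and (4.49) (the code parameters follow the curves of Fig. 4.10; nchoosek loses precision for large n, which is adequate for a plot):

```matlab
% Sketch: theoretical decoding error probability of RS codes, BPSK over AWGN.
EbN0dB = 0:0.5:15;  EbN0 = 10.^(EbN0dB/10);
codes  = [127 106; 31 15; 31 21; 31 27];        % (n, k) pairs of Fig. 4.10
Q      = @(x) 0.5*erfc(x/sqrt(2));              % Gaussian Q-function
semilogy(EbN0dB, Q(sqrt(2*EbN0)), 'k'); hold on % uncoded BPSK
for c = 1:size(codes, 1)
    n = codes(c, 1); k = codes(c, 2); t = floor((n-k)/2);
    P  = Q(sqrt(2*(k/n)*EbN0));                 % Eq. (4.49), AWGN
    Pb = zeros(size(P));
    for i = t+1:n                               % Eq. (4.48)
        Pb = Pb + (i/n)*nchoosek(n, i)*P.^i.*(1-P).^(n-i);
    end
    semilogy(EbN0dB, Pb);
end
grid on; xlabel('E_b/N_o (dB)'); ylabel('BER');
legend('Uncoded','RS (127,106)','RS (31,15)','RS (31,21)','RS (31,27)');
```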

[Figure: BER versus Eb/N0 (dB); curves for Uncoded, RS (127,106), RS (31,15), RS (31,21), and RS (31,27)]
Fig. 4.10 Decoding error probability for RS codes using coherent BPSK over an AWGN channel
4.6 Performance Analysis of RS Codes 121

The decoding error probability obtained from the above program for RS (127,106) and for RS codes of length 31 with different dimensions k is shown in Fig. 4.10. From Fig. 4.10, it can be observed that the decoding error probability becomes increasingly lower as Eb/N0 increases and as the code dimension k decreases for a fixed length. This can be attributed to the highly imperfect nature of RS codes.
The following MATLAB program compares the theoretical BER performance of the (127,63) RS code with BPSK modulation in AWGN and Rayleigh fading channels.

Program 4.2 Program to compare the decoding error probability of an RS code in AWGN and Rayleigh fading channels using coherent BPSK modulation
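A minimal sketch of such a program, using Eqs. (4.48) and (4.49), is:

```matlab
% Sketch: (127,63) RS code with coherent BPSK in AWGN and Rayleigh fading.
n = 127; k = 63; t = floor((n-k)/2); R = k/n;
EbN0dB = 0:0.5:15; g = 10.^(EbN0dB/10);
Q = @(x) 0.5*erfc(x/sqrt(2));
Pawgn = Q(sqrt(2*R*g));                        % Eq. (4.49), AWGN
Pray  = 0.5*(1 - sqrt(R*g./(1 + R*g)));        % Eq. (4.49), Rayleigh fading
Pb = @(P) arrayfun(@(p) sum(arrayfun(@(i) ...
        (i/n)*nchoosek(n,i)*p^i*(1-p)^(n-i), t+1:n)), P);   % Eq. (4.48)
semilogy(EbN0dB, Q(sqrt(2*g)), EbN0dB, 0.5*(1-sqrt(g./(1+g))), ...
         EbN0dB, Pb(Pawgn), EbN0dB, Pb(Pray));
grid on; xlabel('E_b/N_o (dB)'); ylabel('BER');
legend('Uncoded AWGN','Uncoded Rayleigh','RS (127,63) AWGN','RS (127,63) Rayleigh');
```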

The decoding error probability obtained from the above program for RS
(127,63) is shown in Fig. 4.11.

[Figure: BER versus Eb/N0 (dB); curves for Uncoded Rayleigh, Uncoded AWGN, RS (127,63) AWGN, and RS (127,63) Rayleigh]
Fig. 4.11 Decoding error probability for (127,63) RS codes using coherent BPSK over an AWGN
channel and Rayleigh fading channel

From Fig. 4.11, it is seen that the coded AWGN and Rayleigh fading channels
exhibit much better BER performance than the uncoded AWGN and Rayleigh
fading channels.

4.6.2 BER Performance of RS Codes for Non-coherent BFSK Modulation in AWGN and Rayleigh Fading Channels

From Eq. (2.25), for BFSK (M = 2), the probability of bit error P for AWGN and Rayleigh fading channels can be expressed as

P = (1/2) exp(−R·Eb/(2N0))   (AWGN)
P = 1/(2 + Rγb)   (Rayleigh fading)    (4.50)

Program 4.3 Program to compare the decoding error probability of an RS code in AWGN and Rayleigh fading channels using non-coherent BFSK modulation
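A minimal sketch of such a program, using Eqs. (4.48) and (4.50), is:

```matlab
% Sketch: (127,63) RS code with non-coherent BFSK, AWGN and Rayleigh fading.
n = 127; k = 63; t = floor((n-k)/2); R = k/n;
EbN0dB = 0:0.5:15; g = 10.^(EbN0dB/10);
Pawgn = 0.5*exp(-R*g/2);                       % Eq. (4.50), AWGN
Pray  = 1./(2 + R*g);                          % Eq. (4.50), Rayleigh fading
Pb = @(P) arrayfun(@(p) sum(arrayfun(@(i) ...
        (i/n)*nchoosek(n,i)*p^i*(1-p)^(n-i), t+1:n)), P);   % Eq. (4.48)
semilogy(EbN0dB, 0.5*exp(-g/2), EbN0dB, 1./(2+g), ...
         EbN0dB, Pb(Pawgn), EbN0dB, Pb(Pray));
grid on; xlabel('E_b/N_o (dB)'); ylabel('BER');
legend('Uncoded AWGN','Uncoded Rayleigh','RS (127,63) AWGN','RS (127,63) Rayleigh');
```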

The decoding error probability obtained from the above program for RS (127,63) is shown in Fig. 4.12.
From Fig. 4.12, it is seen that the coded AWGN and Rayleigh fading channels exhibit much better BER performance than the uncoded AWGN and Rayleigh fading channels. However, the performance is not as good as that obtained with BPSK modulation.

[Figure: BER versus Eb/N0 (dB); curves for Uncoded Rayleigh, Uncoded AWGN, RS (127,63) AWGN, and RS (127,63) Rayleigh]

Fig. 4.12 Decoding error probability for (127,63) RS codes using non-coherent BFSK

4.7 Problems

1. Construct an encoder circuit using a shift register for the (15,7) cyclic code generated by g(x) = 1 + x^4 + x^6 + x^7 + x^8, and find the code word corresponding to the information sequence (1001011).
2. Construct a shift register decoder for the (15,11) cyclic Hamming code generated by g(x) = 1 + x + x^4, and decode the received word r = (111100000100100).
3. Design a four-error-correcting binary BCH code of length 15.
4. Let the transmission code be the triple-error-correcting binary BCH code of length 31. The generator polynomial is g(x) = 1 + x + x^2 + x^3 + x^5 + x^7 + x^8 + x^9 + x^10 + x^11 + x^15. Use Berlekamp's algorithm to decode the received vector r = (0100000000001000000000000100000).
5. Let the transmission code be the double-error-correcting binary BCH code of length 15. The generator polynomial is g(x) = 1 + x^4 + x^6 + x^7 + x^8. Use Berlekamp's algorithm to decode the received vector r = (00f000000000000), where f indicates an erasure.

6. Construct a generator polynomial for a double-error-correcting Reed–Solomon code of length 7, and determine the number of code words it has.
7. Determine the weight distribution of the RS code of Problem 6.
8. Compute a generator polynomial for a triple-error-correcting Reed–Solomon code of length 31.
9. Construct a generator polynomial for a (63,57) RS code, and determine the number of code words it has.
10. Let the transmission code be the double-error-correcting RS code of length 7. Use the Berlekamp–Massey algorithm to decode the received vector r = (00010α0).
11. Let the transmission code be the double-error-correcting RS code of length 7. Use the Berlekamp–Massey algorithm to decode the received vector r = (1010000).
12. Let the transmission code be the double-error-correcting RS code of length 7. Use the Berlekamp–Massey algorithm to decode the received vector r = (α^3 α 1 α α^2 0 0).
13. Let the transmission code be the triple-error-correcting RS code of length 15. Decode the received vector r = (000α^7 000000α^11 0000) using the Berlekamp–Massey algorithm.
14. Let the transmission code be the double-error-correcting RS code of length 15. Use the Berlekamp–Massey algorithm to decode the received vector r = (100100000000000).
15. Let the transmission code be the triple-error-correcting RS code of length 31. Decode the received vector r = (α^2 00000000000α^21 0000000α^7 000000000) using the Berlekamp–Massey algorithm.
16. Let the transmission code be the double-error-correcting RS code of length 7. Use the Berlekamp–Massey algorithm to decode the received vector r = (00α^3 f101).

4.8 MATLAB Exercises

1. Write a MATLAB program to simulate the performance of BPSK modulation in AWGN and Rayleigh fading channels and compare with the theoretical results shown in Chap. 2.
2. Write a MATLAB program to simulate the performance of RS-coded SFH-CDMA using BFSK modulation and compare with the uncoded theoretical results shown in Chap. 2.
3. Write a MATLAB program to simulate the BER performance of an RS code in AWGN and Rayleigh fading channels using BPSK modulation and compare with the theoretical results shown in Fig. 4.11.
4. Write a MATLAB program to simulate the BER performance of an RS code in AWGN and Rayleigh fading channels using BFSK modulation and compare with the theoretical results shown in Fig. 4.12.
5. Write a MATLAB program to simulate the BER performance of an RS code in AWGN and Rayleigh fading channels using MFSK modulation for M > 2.

References

1. Forney, G.D.: On decoding BCH codes. IEEE Trans. Inf. Theory IT-11, 549–557 (1965)
2. Massey, J.L.: Shift register synthesis and BCH decoding. IEEE Trans. Inf. Theory IT-15(1),
122–127 (1969)
3. Berlekamp, E.R.: Algebraic Coding Theory, rev edn. Aegean Park Press, Laguna Hills (1984)
4. Chien, R.T.: Cyclic decoding procedure for the Bose-Chaudhuri-Hocquenghem codes. IEEE
Trans. Inf. Theory IT-10(1), 357–363 (1964)
5. Du, K.L., Swamy, M.N.S.: Wireless Communications: Communication Systems from RF
Subsystems to 4G Enabling Technologies. Cambridge University Press, Cambridge (2010)
Chapter 5
Convolutional Codes

In convolutional coding, the message bits come in serially instead of in large blocks. The name convolutional code is due to the fact that the redundant bits are generated by the use of modulo-2 convolutions in a convolutional encoder. The convolutional encoder can be considered as a finite-state machine consisting of an M-stage shift register, modulo-2 adders, and multiplexers. The rate of a convolutional encoder with k inputs and n outputs is k/n. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L). The quantity L is called the constraint length of the code and represents the maximum number of bits in a single output stream that can be affected by any input bit.

5.1 Structure of Non-systematic Convolutional Encoder

Consider the rate 1/3 convolutional encoder shown in Fig. 5.1. The binary data stream x(n) = (x(0), x(1), x(2), ...) is fed into a shift register containing a series of memory elements. The contents of the memory elements are tapped and added according to modulo-2 addition to create the coded output data streams

y1(n) = (y1(0), y1(1), y1(2), ...),
y2(n) = (y2(0), y2(1), y2(2), ...) and
y3(n) = (y3(0), y3(1), y3(2), ...).

Then, these output coded data streams are multiplexed to create a single coded output data stream

Y = (y1(0), y2(0), y3(0), y1(1), y2(1), y3(1), y1(2), y2(2), y3(2), ...)


Fig. 5.1 A rate 1/3 linear convolutional encoder

The output streams y1(n), y2(n) and y3(n) can be represented as follows:

y1(n) = x(n) + x(n−1)
y2(n) = x(n) + x(n−2)
y3(n) = x(n) + x(n−1) + x(n−2)

Example 5.1 Prove that the encoder shown in Fig. 5.1 is a linear convolutional encoder.
Proof Let the input x1(n) = (11101). Then, the corresponding coded output sequences are

y1(n) = (1001110)
y2(n) = (1101001)
y3(n) = (1010011)

The convolutional code word corresponding to x1(n) = (11101) is then

Y1 = (111, 010, 001, 110, 100, 101, 011)

Let the input x2(n) = (10010). The corresponding coded output sequences are

y1(n) = (1101100)
y2(n) = (1011010)
y3(n) = (1111110)

The convolutional code word corresponding to x2(n) = (10010) is

Y2 = (111, 101, 011, 111, 101, 011, 000)

Let the input x(n) = x1(n) + x2(n) = (01111). The corresponding coded output sequences are

y1(n) = (0100010)
y2(n) = (0100010)
y3(n) = (0101101)

The convolutional code word corresponding to x(n) = (01111) is

Y = (000, 111, 010, 001, 001, 110, 011)

Y1 + Y2 = (111, 010, 001, 110, 100, 101, 011) + (111, 101, 011, 111, 101, 011, 000)
        = (000, 111, 010, 001, 001, 110, 011)
        = Y

"A convolutional encoder is linear if, when Y1 and Y2 are the code words corresponding to inputs x1(n) and x2(n), respectively, then (Y1 + Y2) is the code word corresponding to the input x1(n) + x2(n)." Hence, the convolutional encoder in the problem is proved to be linear.

5.1.1 Impulse Response of Convolutional Codes

The impulse response stream gi(n) for the input x(n) = (1000...) for the encoder shown in Fig. 5.1 can be represented as follows.
The impulse response g1(n) can be represented by

g1(n) = x(n) + x(n−1)

The impulse response g2(n) can be represented by

g2(n) = x(n) + x(n−2)

The impulse response g3(n) can be represented by

g3(n) = x(n) + x(n−1) + x(n−2)

Thus, the impulse responses for the encoder are

g1(n) = (110)
g2(n) = (101)
g3(n) = (111)

Since there are two memory elements in the shift register of the encoder, each bit in the input data stream can affect at most 3 bits; hence, the length of the above impulse response sequences is 3.
Since the convolutional encoder can be described by the discrete convolution operation, if the information sequence x(n) is input to the encoder, the three outputs are given by

y1(n) = x(n) * g1(n)
y2(n) = x(n) * g2(n)
y3(n) = x(n) * g3(n)

where * represents the convolution operation. In the D-transform domain, the three outputs can be represented as

Y1(D) = X(D)G1(D)
Y2(D) = X(D)G2(D)
Y3(D) = X(D)G3(D)

D denotes the unit delay introduced by the memory element in the shift register. The use of the D transform is most common in the coding literature. The delay operator D is equivalent to the indeterminate z^−1 of the z-transform. The D transforms of the impulse responses of the above encoder are

G1(D) = 1 + D
G2(D) = 1 + D^2
G3(D) = 1 + D + D^2

Hence, the encoder shown in Fig. 5.1 can be described by a generator matrix

G(D) = [G1(D) G2(D) G3(D)]

The transform of the encoder output can be expressed as

Y(D) = X(D)G(D)

where

Y(D) = [Y1(D) Y2(D) Y3(D)].

G(D) is called the transfer function matrix of the encoder shown in Fig. 5.1.
Example 5.2 Determine the output code word of the encoder shown in Fig. 5.1 using the transfer function matrix if the input sequence is X = (11101).
Solution The D transform of the input sequence x is given by

X(D) = 1 + D + D^2 + D^4

The D transform of the encoder output follows as

Y(D) = [1 + D + D^2 + D^4][1 + D   1 + D^2   1 + D + D^2]
     = [1 + D^3 + D^4 + D^5   1 + D + D^3 + D^6   1 + D^2 + D^5 + D^6]

Inverting the D transform, we get

y1(n) = (1001110)
y2(n) = (1101001)
y3(n) = (1010011)

Then, the output code word y is

y = (111, 010, 001, 110, 100, 101, 011)

5.1.2 Constraint Length

The constraint length L of a convolutional code is the maximum number of memory elements in its longest input shift register plus one.

5.1.3 Convolutional Encoding Using MATLAB

The following MATLAB program illustrates the computation of the output code word of the encoder shown in Fig. 5.1 for the input sequence x2 = (10010).

Fig. 5.2 A rate 1/3 systematic convolutional encoder

Program 5.1 MATLAB program to determine the output codeword of the encoder shown in Fig. 5.1
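A minimal sketch of such a program, using modulo-2 convolution with the impulse responses g1 = (110), g2 = (101), g3 = (111) of Fig. 5.1, is:

```matlab
% Sketch: output code word of the rate-1/3 encoder of Fig. 5.1.
x  = [1 0 0 1 0];                              % input sequence
g1 = [1 1 0]; g2 = [1 0 1]; g3 = [1 1 1];      % impulse responses
y1 = mod(conv(x, g1), 2);                      % modulo-2 convolutions
y2 = mod(conv(x, g2), 2);
y3 = mod(conv(x, g3), 2);
Y  = reshape([y1; y2; y3], 1, [])              % multiplex the three streams
```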

The above program outputs the following codeword

Y ¼ ½1 1 1 1 0 1 0 1 1 1 1 1 1 0 1 0 1 1 0 0 0

5.2 Structure of Systematic Convolutional Encoder

A convolutional code in which the input data appear as a part of the code sequence
is said to be systematic. A rate 1/3 systematic convolutional encoder is shown in
Fig. 5.2.

5.3 The Structural Properties of Convolutional Codes

5.3.1 State Diagram

The contents of the memory elements of a convolutional encoder provide a mapping between the input bits and the output bits. An encoder with j memory elements can assume any one of 2^j possible states, and the encoder can only move between these states. Each branch in the state diagram has a label of the form X/YYY..., where X is the input bit that causes the state transition and YYY... denotes the corresponding output bits. The encoder shown in Fig. 5.1 consists of two memory elements, and hence the two binary elements can assume any one of the four states designated by S0 = 00, S1 = 10, S2 = 01, S3 = 11. For the encoder shown in Fig. 5.1, the state diagram is shown in Fig. 5.3.

Fig. 5.3 State diagram of non-systematic convolutional encoder shown in Fig. 5.1

5.3.2 Catastrophic Convolutional Codes

A convolutional code is said to be catastrophic if its encoder generates an all-zero output sequence for a nonzero input sequence. A catastrophic code can cause an unlimited number of data errors for a small number of errors in the received code word. The following theorem [1] can be used to verify whether a convolutional code is catastrophic.
Theorem 5.1 A rate 1/n convolutional code with transfer function matrix G(D), whose generator sequences have the transforms {G0(D), G1(D), ..., Gn−1(D)}, is not catastrophic if and only if

GCD(G0(D), G1(D), ..., Gn−1(D)) = D^l

for some non-negative integer l.


Example 5.3 Determine whether the encoder shown in Fig. 5.4 generates a catastrophic convolutional code or not.
Solution From the encoder diagram shown in Fig. 5.4, the impulse responses are g1 = (1110) and g2 = (1001).

Fig. 5.4 A rate 1/2 convolutional encoder

The transform of the generator sequence g1 = (1110) is G1(D) = 1 + D + D^2.
The transform of the generator sequence g2 = (1001) is G2(D) = 1 + D^3.

GCD[G1(D), G2(D)] = GCD(1 + D + D^2, 1 + D^3) = 1 + D + D^2 ≠ D^l

for any integer l; thus, the code is catastrophic. (GCD stands for greatest common divisor.)

5.3.3 Transfer Function of a Convolutional Encoder

The signal flow graph for a convolutional encoder can be obtained by splitting the state S0 into a source node and a sink node and by modifying the labels of the branches. For a given branch, we use the label Y^i X^j, where j is the weight of the input vector X and i is the weight of the output vector Y (the number of nonzero coordinates).
Example 5.4 Determine the transfer function of the systematic convolutional encoder shown in Fig. 5.2.
Solution The state diagram of the systematic convolutional encoder is shown in Fig. 5.5. The signal flow graph of this state diagram is shown in Fig. 5.6. In this signal flow graph, the self-loop at node S0 is eliminated, as it contributes nothing to the distance properties of a code relative to the all-zero code sequence. Now, by using signal flow graph reduction techniques and Mason's formula, the transfer function can be obtained.

Fig. 5.5 State diagram of systematic convolutional encoder shown in Fig. 5.2
Fig. 5.6 Signal flow graph of the state diagram of Fig. 5.5

By using reduction techniques, the above signal flow graph can be simplified as

follows. The parallel branches with gains Y^2 and Y^2X/(1 − Y^2X) can be combined into a single branch with gain

Y^2 + Y^2X/(1 − Y^2X) = (Y^2X + Y^2 − Y^4X)/(1 − Y^2X)

Further, the loop can be replaced by a branch with gain

[(Y^2X + Y^2 − Y^4X)/(1 − Y^2X)] / [1 − Y^2X(Y^2X + Y^2 − Y^4X)/(1 − Y^2X)]
= (Y^2X + Y^2 − Y^4X)/(1 − Y^2X − Y^4X^2 − Y^4X + Y^6X^2)

Thus, the transfer function is given by

T(Y, X) = Y^3X · (Y^2X + Y^2 − Y^4X)/(1 − Y^2X − Y^4X^2 − Y^4X + Y^6X^2) · Y
        = (Y^6X^2 + Y^6X − Y^8X^2)/(1 − Y^2X − Y^4X^2 − Y^4X + Y^6X^2)

Example 5.5 Consider the non-systematic convolutional encoder shown in Fig. 5.7 and determine its transfer function.

Fig. 5.7 Encoder for a rate 1/3 convolutional code

Solution The state diagram of the non-systematic convolutional encoder is shown in Fig. 5.8. The signal flow graph of this state diagram is shown in Fig. 5.9.

Fig. 5.8 State diagram of non-systematic convolutional encoder shown in Fig. 5.7
Fig. 5.9 Signal flow graph of the state diagram shown in Fig. 5.8

By using reduction techniques, the above signal flow graph can be simplified as follows. The parallel branches with gains Y^2 and Y^2X/(1 − Y^2X) can be combined into a single branch with gain

Y^2 + Y^2X/(1 − Y^2X) = (Y^2 − Y^4X + Y^2X)/(1 − Y^2X)

Further, the loop can be replaced by a branch with gain

[(Y^2 − Y^4X + Y^2X)/(1 − Y^2X)] / [1 − X(Y^2 − Y^4X + Y^2X)/(1 − Y^2X)]
= (Y^2 − Y^4X + Y^2X)/(1 − 2Y^2X + Y^4X^2 − Y^2X^2)

Thus, the transfer function is given by

T(Y, X) = Y^3X · (Y^2 − Y^4X + Y^2X)/(1 − 2Y^2X + Y^4X^2 − Y^2X^2) · Y^3
        = (Y^8X − Y^10X^2 + Y^8X^2)/(1 − 2Y^2X + Y^4X^2 − Y^2X^2)

5.3.4 Distance Properties of Convolutional Codes

An upper bound on the minimum free distance of a rate 1/n convolutional code is given by [2]

df ≤ max_{l≥1} ⌊ (2^(l−1)/(2^l − 1))·(L + l − 1)·n ⌋    (5.1)

where ⌊x⌋ denotes the largest integer contained in x.
The transfer function also yields the distance properties of the code. The minimum distance of the code is called the minimum free distance, denoted by df; it is the lowest power in the transfer function.
In Example 5.4, the lowest power in the transfer function is 6, so df for the systematic convolutional encoder considered in that example is 6, whereas in Example 5.5, the lowest power in the transfer function is 8. Hence, the minimum free distance for the non-systematic encoder considered in that example is 8.
From these two examples, it is observed that the minimum free distance of a non-recursive systematic convolutional code is less than that of a non-recursive non-systematic convolutional code of the same rate and constraint length. The bounds on the minimum free distance for various codes are developed in [3, 4]. The bounds on the free distance for various systematic and non-systematic codes of the same rate and constraint length are tabulated in Table 5.1.

5.3.5 Trellis Diagram

The state diagram does not contain the time information required in decoding. Hence, the trellis diagram is developed to overcome this disadvantage. The trellis diagram is an expansion of the state diagram obtained by adding a time axis for time information.

Table 5.1 The bounds on the free distance for various systematic and non-systematic codes of the same rate and constraint length

Rate | Constraint length | Systematic codes maximum free distance | Non-systematic codes maximum free distance
1/3 | 2 | 5 | 5
1/3 | 3 | 6 | 8
1/3 | 4 | 8 | 10
1/3 | 5 | 9 | 12
1/2 | 2 | 3 | 3
1/2 | 3 | 4 | 5
1/2 | 4 | 4 | 6
1/2 | 5 | 5 | 7

In the trellis diagram, the nodes are arranged vertically, representing the states of the encoder, with each node corresponding to a state of the encoder after a transition from the previous node for an input bit; the horizontal axis represents time; and the labels on the branches represent the encoder output bits for a state transition together with the input bit causing the transition.
For an (n, k) convolutional code with memory order m, there are 2^m nodes at each time increment t, with 2^k branches leaving each node; for t > m, there are also 2^k branches entering each node.
For an encoder with a single input sequence of B bits, the trellis diagram must have B + m stages, with the first and last stages starting and stopping, respectively, in state S0. Thus, there are 2^B distinct paths through the trellis, each corresponding to a code word of length n(B + m).
Example 5.6 The impulse responses of a convolutional encoder are given by g1 = [1 0 1], g2 = [1 1 1].
1. Draw the encoder.
2. Draw the state diagram.
3. Draw the trellis diagram for the first three stages.

Solution
1. From the impulse responses g1 = [1 0 1] and g2 = [1 1 1], the output stream y1(n) can be represented as

y1(n) = x(n) + x(n−2)

The output stream y2(n) can be represented as

y2(n) = x(n) + x(n−1) + x(n−2)

Hence, the corresponding encoder is as follows:



2. This rate 1/2 encoder has two memory cells, so the associated state diagram has four states, as shown below.

3. The trellis diagram is an extension of the state diagram that explicitly shows the passage of time. The first three stages of the trellis diagram corresponding to the encoder are as follows:

Example 5.7 The impulse responses of a convolutional encoder are given by g1 = [1 1 1], g2 = [1 1 1], g3 = [1 1 0].
1. Draw the encoder.
2. Draw the state diagram.
3. Draw the trellis diagram for the length-3 input sequence.

Solution
1. From the impulse responses g1 = [1 1 1], g2 = [1 1 1], g3 = [1 1 0], the output stream y1(n) can be represented as

y1(n) = x(n) + x(n−1) + x(n−2)

The output stream y2(n) can be represented as

y2(n) = x(n) + x(n−1) + x(n−2)

The output stream y3(n) can be represented as

y3(n) = x(n) + x(n−1)

Hence, the corresponding encoder is as follows:

2. This rate 1/3 encoder has two memory cells, so the associated state diagram has four states, as shown below.

3. The trellis diagram is an extension of the state diagram that explicitly shows the passage of time. The first five stages of the trellis diagram corresponding to the encoder are as follows:

[Trellis diagram: states S0 to S3 versus time t = 0, ..., 5, with branch output labels such as 000, 111, 110, 001]

5.4 Punctured Convolutional Codes

The computational complexity is an issue for implementation of the Viterbi decoder for high-rate convolutional codes. This issue can be avoided by using punctured convolutional codes. The puncturing process deletes periodically selected coded bits from one or more of the output streams of a convolutional encoder. For a given fixed low-rate convolutional encoder structure, high-rate codes can be achieved by puncturing the output of the low-rate convolutional encoder. The puncturing pattern is specified by a puncturing matrix P of the form

P = [P11 P12 ... P1p
     P21 P22 ... P2p
     ...
     Pn1 Pn2 ... Pnp]    (5.2)

The puncturing matrix has n rows, one for each output stream in an encoder with n output bits. The number of columns in the puncturing matrix is the number of bits over which the puncturing pattern repeats. The encoder transmits the bit corresponding to Pij = 1 and deletes the bit corresponding to Pij = 0. The search for optimum punctured codes has been carried out in [5–7].
Example 5.8 Construct a rate 2/3 code by puncturing the output of the rate 1/2 non-systematic convolutional encoder of Example 5.6.
Solution To generate a rate 2/3 code from the rate 1/2 convolutional code with constraint length 3, the puncturing matrix is given by

P = [1 0
     1 1]

The zero entry in the second column of the first row indicates that every second bit in the output y1(n) is punctured. The generation of the rate 2/3 code from a rate 1/2 convolutional code is shown in Fig. 5.10. The punctured encoder generates 6 code bits for every 4 message bits, and thus the punctured code rate is 2/3. A MATLAB sketch of the puncturing operation follows the figure.

Fig. 5.10 Generation of rate 2/3 code from a rate ½ convolutional code
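A small MATLAB sketch of this puncturing operation is given below; the example streams y1 and y2 are illustrative placeholders, not data from the text:

```matlab
% Sketch: rate 2/3 code by puncturing a rate 1/2 mother code (Example 5.8).
P  = [1 0; 1 1];                    % puncturing matrix: rows = output streams
y1 = [1 1 0 1];  y2 = [1 0 1 1];    % example coded streams (illustrative)
Y  = reshape([y1; y2], 1, []);      % multiplexed mother-code output
% repeat the columns of P across the stream; keep only bits with Pij = 1
keep   = logical(reshape(repmat(P, 1, length(y1)/size(P,2)), 1, []));
Ypunct = Y(keep)                    % transmitted bits after puncturing
```

The logical mask repeats the columns of P across the multiplexed stream, so exactly the bits with Pij = 1 are transmitted: 6 bits survive out of every 8, giving the rate 2/3.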

5.5 The Viterbi Decoding Algorithm

The Viterbi algorithm is a maximum likelihood decoding algorithm for convolutional codes that finds the path with the largest metric through the trellis by iteratively comparing the metrics of all branch paths entering each state against the corresponding received vector r. Let Sj,t be the node corresponding to the state Sj at time t, with an assigned value Mj,t(r|y). Let m be the memory order. The distance between the received pair of bits and the branch output bits is defined as the branch metric, and the sum of the metrics of all the branches in a path is defined as the path metric.
The partial path metric for a path is obtained by summing the branch metrics for the first few branches that the path traverses. For example, consider the trellis diagram of Example 5.7; the beginning of the trellis is as follows:

Each node in the trellis is assigned a number. This number is the partial path metric of the path that starts at state S0 at time t = 0 and terminates at that node.
Let Mj,t(r|y) be the partial path metric entering the node corresponding to the state j at time t. For example, in the accompanying drawing, the label Y corresponds to the two-branch path that terminates at state S1 at time t = 2, given that the output bits corresponding to this path consist of three zeros followed by three ones, and the received sequence r has received bits of the form rk(t) indicating the kth bit in the sequence at time t.

M0,1(r|y) = M(r1(1)|0) + M(r2(1)|0) + M(r3(1)|0)
M1,1(r|y) = M(r1(1)|1) + M(r2(1)|1) + M(r3(1)|1)
M0,2(r|y) = M0,1(r|y) + M(r1(2)|0) + M(r2(2)|0) + M(r3(2)|0)
M1,2(r|y) = M0,1(r|y) + M(r1(2)|1) + M(r2(2)|1) + M(r3(2)|1)

The flowchart for the iterative Viterbi decoding algorithm is shown in Fig. 5.11.

Fig. 5.11 Viterbi algorithm
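As a concrete companion to the flowchart, the following MATLAB sketch implements the hard-decision version of this procedure for the rate-1/3 encoder of Fig. 5.1, using the matching-bit BSC metric of Sect. 5.5.1; the function name and interface are illustrative assumptions:

```matlab
% Sketch of the Viterbi algorithm of Fig. 5.11 (hard decision, BSC metric)
% for the rate-1/3 encoder of Fig. 5.1; r is a row of received bits.
function xhat = viterbi_hard(r)
g = [1 1 0; 1 0 1; 1 1 1];                 % impulse responses g1, g2, g3
nsym = length(r)/3; nstates = 4;           % states 00,10,01,11 -> 1..4
M  = -inf(nstates, nsym+1); M(1,1) = 0;    % partial path metrics
pre = zeros(nstates, nsym+1); inp = zeros(nstates, nsym+1);
for t = 1:nsym
    rt = r(3*t-2:3*t);
    for s = 0:nstates-1                    % state bits: [x(n-1) x(n-2)]
        if M(s+1,t) == -inf, continue; end
        mem = [bitget(s,1) bitget(s,2)];   % shift register contents
        for x = 0:1
            y  = mod(g*[x mem]', 2)';      % branch output bits
            ns = bitor(x, bitshift(mem(1),1));  % next state
            bm = sum(y == rt);             % matching-bit metric (maximize)
            if M(s+1,t) + bm > M(ns+1,t+1)
                M(ns+1,t+1)  = M(s+1,t) + bm;
                pre(ns+1,t+1) = s+1; inp(ns+1,t+1) = x;
            end
        end
    end
end
[~, s] = max(M(:,end)); xhat = zeros(1, nsym);  % trace back the ML path
for t = nsym+1:-1:2
    xhat(t-1) = inp(s,t); s = pre(s,t);
end
end
```

For the received sequence of Example 5.9 below, viterbi_hard([1 1 0, 1 1 0, 1 1 0, 1 1 1, 0 1 0]) returns the input (1 1 0 0 0), whose code word is the maximum likelihood code word found there.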



5.5.1 Hard-decision Decoding

In hard-decision decoding, we examine each received signal and make a "hard" decision as to whether the transmitted signal is a zero or a one. These decisions form the input to the Viterbi decoder. From the decoder's perspective, and considering the channel to be memoryless, the compilation of the likelihood functions in a table is the primary step in defining the bit metrics for the channel. These conditional probabilities are first converted into log-likelihood functions and then into bit metrics.
For the BSC case shown in Fig. 1.3 of Chap. 1, the path metric is simply the Hamming distance between the code word y and the received word r. The bit metric for the BSC case is then:

M(r|y) | r = 0 | r = 1
y = 0 | 1 | 0
y = 1 | 0 | 1

5.5.2 Soft-decision Decoding

In soft-decision decoding, "side information" generated by the receiver bit-decision circuitry is utilized by the receiver. Instead of assigning a zero or a one to each received noisy binary signal as in hard-decision decoding, four regions, namely "strong-one," "weak-one," "strong-zero," and "weak-zero," are established for soft-decision decoding. Intermediate values are given to signals for which the decision is less clear. An increase in coding gain of 2–3 dB over the hard-decision Viterbi decoder is provided by soft-decision decoding for an additive white Gaussian noise channel.
Figure 5.12 shows a discrete symmetric channel where the underlined zero and one indicate the reception of a clear, strong signal, while the non-underlined pair denotes the reception of a weaker signal; the receiver assigns one of the four values to each received signal.
A hard limiter makes the bit decisions in a hard-decision receiver, whereas a multiple-bit analog-to-digital converter (ADC) is used in soft-decision receivers for this purpose. The channel model shown in Fig. 5.12 uses a 2-bit ADC in the decision circuitry. Soft-decision decoding is almost identical to hard-decision decoding but uses the increased number (and resolution) of the bit metrics.
Consider the following values for the conditional probabilities:

p(r|y) | r = 0 (strong) | r = 0 (weak) | r = 1 (weak) | r = 1 (strong)
y = 0 | 0.50 | 0.25 | 0.15 | 0.05
y = 1 | 0.05 | 0.15 | 0.25 | 0.50

Fig. 5.12 A discrete symmetric channel model
[Diagram: transmitted symbols 0 and 1 map to the four received symbols (strong 0, weak 0, weak 1, strong 1) with transition probabilities P(r|y)]
These provide the following log-likelihood functions:

log2 p(r|y) | r = 0 (strong) | r = 0 (weak) | r = 1 (weak) | r = 1 (strong)
y = 0 | −1 | −2 | −2.74 | −4.32
y = 1 | −4.32 | −2.74 | −2 | −1

Using the expression below, rounded to integers, we obtain a set of bit metrics that can be easily implemented in digital hardware:

M(r|y) = 1.5[log2 p(r|y) − log2(0.05)]

M(r|y) | r = 0 (strong) | r = 0 (weak) | r = 1 (weak) | r = 1 (strong)
y = 0 | 5 | 4 | 2 | 0
y = 1 | 0 | 2 | 4 | 5

Example 5.9 Consider the encoder shown in Fig. 5.1.
1. Construct the trellis diagram for the length-3 input sequence.
2. If a code word from the encoder is transmitted over a BSC and the received sequence is r = (110, 110, 110, 111, 010), find the maximum likelihood code word using the Viterbi hard-decision decoding algorithm.

Solution
1. From the state diagram shown in Fig. 5.3 for the encoder of Fig. 5.1, the following trellis diagram is constructed.

2. For the BSC, the bit metrics chosen for hard decision are as follows:

M(r|y) | r = 0 | r = 1
y = 0 | 1 | 0
y = 1 | 0 | 1

Using the above bit metrics and following the Viterbi decoding algorithm procedure shown in Fig. 5.11, the results of the decoding operation using hard-decision decoding are shown in the following figure.

In the above figure, the maximum likelihood code word is the word corresponding to the ML path denoted by the thick line in the trellis diagram. Thus, the maximum likelihood code word is

Y = (111, 010, 110, 011, 000)

Example 5.10 Consider the convolutional encoder of Example 5.6. When the convolutional code is transmitted over a symmetric memoryless channel with the bit metrics shown in the table below, find the transmitted code word for the received word (10, 01, 10, 11, 00, 10, 11, 00) using soft-decision decoding.

M(r|y) | r = 0 (strong) | r = 0 (weak) | r = 1 (weak) | r = 1 (strong)
y = 0 | 0 | 1 | 3 | 6
y = 1 | 6 | 3 | 1 | 0

Solution Using the above bit metrics and following the Viterbi decoding algorithm procedure shown in Fig. 5.11, the results of the decoding operation using soft-decision decoding are shown in the figure below.

In the figure, the maximum likelihood code word is the word corresponding to the ML path denoted by the thick line in the trellis diagram. Thus, the maximum likelihood code word is Y = (00, 11, 10, 01, 01, 10, 11, 00).

5.6 Performance Analysis of Convolutional Codes

5.6.1 Binary Symmetric Channel

The lower bound on the bit-error rate for convolutional codes on the binary symmetric channel with a crossover probability P is given by [8]

Pb = (1/k) Σ_{j=(df+1)/2}^{df} C(df, j) P^j (1 − P)^(df−j),   df odd

Pb = (1/k) [ (1/2) C(df, df/2) P^(df/2)(1 − P)^(df/2) + Σ_{j=df/2+1}^{df} C(df, j) P^j (1 − P)^(df−j) ],   df even    (5.3)

whereas the upper bound on the bit-error rate is given by [5]

Pb < (1/k) ∂T(Y, X)/∂X |_(Y = 2√(P(1−P)), X = 1)    (5.4)

Example 5.11 Consider the convolutional encoder from Example 5.6. Compute the upper bound and lower bound on the BER for a binary symmetric channel with crossover probability P = 0.01.
Solution The signal flow graph of the encoder considered in Example 5.6 can be represented and, by using reduction techniques, simplified. The resulting transfer function is given by

T(Y, X) = Y^5X/(1 − 2YX)

∂T(Y, X)/∂X |_(X=1) = Y^5/(1 − 2Y)^2

Upper bound on the bit-error probability:

Pb < (1/k) ∂T(Y, X)/∂X |_(Y = 2√(P(1−P)), X = 1) = (1/k)·Y^5/(1 − 2Y)^2 |_(Y = 2√(P(1−P)))

Since k = 1 for this example and Y = 2√(0.01 × 0.99) = 0.198997,

Pb < Y^5/(1 − 2Y)^2 |_(Y = 0.198997) = 8.61 × 10^−4

Lower bound on the bit-error probability: with df = 5,

Pb = Σ_{j=3}^{5} C(5, j) P^j(1 − P)^(5−j) = 10P^3(1 − P)^2 + 5P^4(1 − P) + P^5
   = 9.8501 × 10^−6
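These two numbers can be checked with a few lines of MATLAB:

```matlab
% Numerical check of the bounds of Example 5.11 (P = 0.01, df = 5, k = 1)
P  = 0.01;  Y = 2*sqrt(P*(1-P));                            % Y = 0.198997
ub = Y^5/(1 - 2*Y)^2                                        % ~8.61e-4
lb = sum(arrayfun(@(j) nchoosek(5,j)*P^j*(1-P)^(5-j), 3:5)) % ~9.85e-6
```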

5.6.2 AWGN Channel

The upper bound on the bit-error rate at the output of the decoder in an AWGN channel with BPSK for unquantized soft-decision decoding is given by

Pb ≤ (1/k) e^(df·Rc·Eb/N0) Q(√(2df·Rc·Eb/N0)) ∂T(Y, X)/∂X |_(Y = e^(−Rc·Eb/N0), X = 1)    (5.5)

Since the received signal is converted to a sequence of zeros and ones before it is sent to the decoder, for hard-decision decoding an AWGN channel with BPSK modulation can be seen as a BSC with crossover probability P given by

P = Q(√(2Rc·Eb/N0))    (5.6)

Substitution of the above P in Eq. (5.4) yields the upper bound for hard-decision decoding in an AWGN channel with BPSK modulation.
The coding gain of a convolutional code over an uncoded BPSK or QPSK system is upper bounded by [7]

Coding gain ≤ 10 log10(R·df/2) dB   for hard-decision    (5.7a)
Coding gain ≤ 10 log10(R·df) dB   for soft-decision    (5.7b)

Hence, soft-decision decoding provides a 3 dB increase in coding gain over hard-decision decoding.
The BER performance of soft-decision and hard-decision decoding is compared through the following example.
Example 5.12 Consider the encoder used in Example 5.10, and compare the BER performance of soft-decision and hard-decision decoding in an AWGN channel with BPSK modulation.
Solution The following MATLAB program is written and used for comparison of the BER performance for different Eb/N0 values using soft-decision and hard-decision decoding. The comparison of the BER performances with the encoder used in Example 5.10 for hard-decision and soft-decision decoding over an AWGN channel is shown in Fig. 5.13.
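A simplified sketch along these lines, using the transfer function derivative from Example 5.11, is given below; the soft-decision curve uses the basic union bound with T evaluated at Y = e^(−Rc·Eb/N0) rather than the tightened form of Eq. (5.5):

```matlab
% Sketch of the Example 5.12 comparison: BER upper bounds for the
% rate-1/2, df = 5 encoder of Example 5.6 with BPSK over AWGN.
EbN0dB = 4:0.5:12; g = 10.^(EbN0dB/10); Rc = 1/2; k = 1;
Q  = @(x) 0.5*erfc(x/sqrt(2));
dT = @(Y) Y.^5./(1 - 2*Y).^2;          % dT(Y,X)/dX at X = 1 (Example 5.11)
Pb_soft = (1/k)*dT(exp(-Rc*g));        % soft-decision union bound
P       = Q(sqrt(2*Rc*g));             % Eq. (5.6): equivalent BSC crossover
Pb_hard = (1/k)*dT(2*sqrt(P.*(1-P)));  % Eq. (5.4)
semilogy(EbN0dB, Q(sqrt(2*g)), EbN0dB, Pb_soft, EbN0dB, Pb_hard);
grid on; xlabel('E_b/N_o, dB'); ylabel('Bit Error Rate');
legend('Uncoded','Soft decision Upper bound','Hard decision Upper bound');
```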
[Figure: Bit Error Rate versus Eb/N0 (dB); curves for Uncoded, Soft decision Upper bound, and Hard decision Upper bound]

Fig. 5.13 BER performance comparisons of hard-decision and soft-decision decoding over an
AWGN channel

From Fig. 5.13, it is observed that soft-decision decoding offers a 3 dB increase in coding gain over hard-decision decoding, which satisfies Eqs. (5.7a) and (5.7b).

5.6.3 Rayleigh Fading Channel

The union upper bound on the bit-error probability, which gives a better BER estimate for convolutional codes, is given by [9]

Pb < Σ_{d=df}^{∞} cd·Pd    (5.8)

where cd is the information error weight for error events of distance d, and df is the free distance of the code. Pd is the pairwise error probability. For an AWGN channel, Pd is given by [9]

Pd = Q(√(2dR·Eb/N0))    (5.9)

where R is the code rate, Eb is the received energy per information bit, and N0 is the double-sided power spectral density of the noise.
The pairwise error probability in a Rayleigh fading channel is given by [9]

Pd = (Pe)^d Σ_{k=0}^{d−1} C(d − 1 + k, k)(1 − Pe)^k    (5.10)

where Pe = (1/2)[1 − √(γbR/(1 + γbR))] and γb is the average Eb/N0.
A comparison of the upper bound on the BER in the AWGN and flat Rayleigh fading channels for ODS convolutional codes [9] with R = 1/4 and constraint length of seven is shown in Fig. 5.14.

[Figure: Bit Error Rate versus Eb/N0 (dB); curves for uncoded, AWGN, and Rayleigh]

Fig. 5.14 A comparison of the upper bound on the BER in the AWGN and flat Rayleigh fading channels for ODS convolutional codes with R = 1/4 and constraint length of 7

5.7 Problems

1. Consider the encoder shown in Fig. 5.15 and determine the output code word using the D transform for the input sequence x(n) = (1001).
2. Consider the encoder shown in Fig. 5.15 and
   i. Draw the state diagram for the encoder.
   ii. Draw the trellis diagram for the encoder.
   iii. Find the transfer function and the free distance of the encoder.

Fig. 5.15 A rate 1/2 convolutional encoder

3. Consider the encoder shown in Fig. 5.16:
   i. Find the impulse responses.
   ii. Find the transfer function matrix.
   iii. Use the transfer function matrix to determine the code word associated with the input sequence x = (11, 10, 01).
4. Consider an encoder with impulse responses g1 = (1111) and g2 = (1111). Determine whether the encoder generates a catastrophic convolutional code.
5. Construct a rate 3/4 code by puncturing the output of the rate 1/3 systematic convolutional encoder shown in Fig. 5.2, and draw the trellis diagram of the punctured code.
6. If a code word from the encoder of Example 5.6 is transmitted over a BSC and the received sequence is r = (101, 100, 001, 011, 111, 101, 111, 110), find the maximum likelihood code word using the Viterbi hard-decision decoding algorithm.

Fig. 5.16 A rate 2/3 convolutional encoder

7. If a code word from the encoder of Example 5.6 is transmitted over a BSC and the received sequence is r = (101, 100, 001, 011, 110, 110, 111, 110), find the maximum likelihood code word using the Viterbi soft-decision decoding algorithm.

M(r|y) | r = 0 (strong) | r = 0 (weak) | r = 1 (weak) | r = 1 (strong)
y = 0 | 5 | 4 | 2 | 0
y = 1 | 0 | 2 | 4 | 5

5.8 MATLAB Exercises

1. Write a MATLAB program to simulate BER performance of a convolutional


encoder of your choice using hard-decision and soft-decision decoding over an
AWGN channel and comment on the results.
2. Write a MATLAB program to simulate BER performance of a convolutional
encoder of your choice using soft-decision decoding over an AWGN and
Rayleigh Fading channel and comment on the results.

References

1. Massey, J.L., Sain, M.K.: Inverse of linear sequential circuits. IEEE Trans. Comput. C-17,
330–337 (1968)
2. Heller, J.A.: Short constraint length convolutional codes, Jet Propulsion Laboratory, California
Institute of Technology, Pasadena, CA Space Program Summary 37–54, Vol. 3, pp. 171–174,
December (1968)
3. Costello, D.J.: Free distance bounds for convolutional codes. IEEE Trans. Inf. Theory IT-20(3),
356–365 (1974)
4. Forney Jr, G.D.: Convolutional codes II: maximum likelihood decoding. Inf. Control 25, 222–
266 (1974)
5. Cain, J., Clark, G., Geist, J.: Punctured convolutional codes of rate (n-1)/n and simplified
maximum likelihood decoding. IEEE Trans. Inf. Theory IT-25(1), 97–100 (1979)
6. Yasuda, Y., Kashiki, K., Hirata, Y.: High-rate punctured convolutional codes for soft decision
Viterbi decoding. IEEE Trans. Commun. 3, 315–319 (1984)
7. Hole, K.: New short constraint length rate (N-1)/N punctured convolutional codes for soft
decision Viterbi decoding. IEEE Trans. Commun. 9, 1079–1081 (1988)
8. Wicker, S.B.: Error Control Systems for Digital Communication and Storage. Prentice Hall,
New Jersey (1995)
9. Franger, P., Orten, P., Ottosson, T.: Convolutional codes with optimum distance spectrum.
IEEE Commun. Lett. 3(11), 317–319 (1999)
Chapter 6
Turbo Codes

The groundbreaking codes called turbo codes were introduced in [1, 2]. The best-known convolutional codes are mostly non-systematic. However, in turbo encoders, systematic convolutional codes are used. Turbo codes are generated by using the parallel concatenation of two recursive systematic convolutional (RSC) encoders. This chapter discusses turbo encoding, iterative turbo decoding, and performance analysis of turbo codes.

6.1 Non-recursive and Recursive Systematic Convolutional Encoders

A convolutional code is said to be systematic if the input sequence is reproduced unaltered in the output code word. The rate-1/2 convolutional encoder of Fig. 6.1 is an example of a systematic convolutional encoder.

6.1.1 Recursive Systematic Convolutional (RSC) Encoder

Consider the conventional convolutional encoder with rate 1/2 and constraint length 3 shown in Fig. 6.2. The generator sequences of this non-recursive non-systematic encoder are g1 = [111] and g2 = [101].
The state diagram representation of this non-recursive encoder is shown in Fig. 6.3.


Fig. 6.1 Non-recursive systematic convolutional (SC) encoder
Fig. 6.2 Non-recursive non-systematic convolutional (NSC) encoder
Fig. 6.3 NSC encoder state diagram
[State diagram: states S0 to S3 with input/output branch labels 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10]

The equivalent RSC encoder of the non-recursive non-systematic encoder of Fig. 6.2 is shown in Fig. 6.4. It is obtained by feeding back the contents of the memory elements to the input, with the generator function

G = [1   g2/g1] = [1   (1 + D^2)/(1 + D + D^2)]

The state diagram of the RSC encoder is shown in Fig. 6.5.

Fig. 6.4 Recursive systematic convolutional (RSC) encoder
Fig. 6.5 RSC encoder state diagram
[State diagram: states S0 to S3 with input/output branch labels 0/00, 1/11, 1/11, 0/00, 1/10, 0/01, 0/01, 1/10]

From Figs. 6.3 and 6.5, it is clear that the state diagrams of the NSC and RSC encoders are very similar. Further, both codes have the same minimum free distance and trellis structure. Hence, the first-event error probability is the same for both codes; however, the bit error rates (BERs) are different, as the BER depends on the encoder's input–output correspondence. At low signal-to-noise ratios Eb/N0, the BER for an RSC code is lower than that of the corresponding NSC code.

6.2 Turbo Encoder

A turbo encoder structure consists of two identical RSC encoders in parallel concatenation, as shown in Fig. 6.6. It is a rate 1/3 encoder.

Fig. 6.6 Turbo encoder

The two RSC encoders work synergistically in parallel. RSC encoder 1 takes the data bits x and produces low-weight parity bits (p1k) from them. RSC encoder 2 gets the data bits x scrambled by an interleaver and computes high-weight parity bits (p2k) from the scrambled input bits. Thus, a moderate-weight turbo code is generated by combining the low-weight code from encoder 1 and the high-weight code from encoder 2. Finally, the original input sequence x, along with the two strings of parity bits, is transmitted over the channel.

6.2.1 Different Types of Interleavers

The BER performance of turbo codes can be improved significantly by using an interleaver, as it affects the distance properties of the code by avoiding low-weight code words [3].
Block Interleaver
The block interleaver is one of the most frequently used types of interleavers in communication systems. It fills a matrix with the input data bit stream row-by-row and then sends out the contents column-by-column. A block interleaver is shown in Fig. 6.7. It writes in [0 0 ... 1 0 1 ... 0 ... 1 ... 1 0 1 ... 0 1] and reads out [0 1 ... 1 0 0 ... 1 ... 1 ... 0 0 0 ... 1 1].
Pseudo-random Interleaver
A random interleaver maps the input sequence according to the permutation order using a fixed random permutation. A random interleaver with an input sequence of length 8 is shown in Fig. 6.8.

Fig. 6.7 Block interleaver



Write In: 0 1 1 0 1 0 0 1
Fixed Random Permutation: 1 3 6 8 2 7 4 5
Read Out: 0 1 0 1 1 0 0 1

Fig. 6.8 Random interleaver
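In MATLAB, this fixed permutation is a one-line indexing operation:

```matlab
% Sketch: the fixed random interleaver of Fig. 6.8.
writeIn = [0 1 1 0 1 0 0 1];
perm    = [1 3 6 8 2 7 4 5];     % fixed random permutation
readOut = writeIn(perm)          % -> [0 1 0 1 1 0 0 1]
```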

6.2.2 Turbo Coding Illustration

We consider the following numerical examples to illustrate turbo coding. The operation of the encoders used in these examples is characterized by the corresponding trellis diagrams.
Example 6.1 For the turbo encoder shown in Fig. 6.9, find the output code word for the input data sequence x = {1 1 0 0}, assuming the RSC encoder 1 trellis is terminated. Let the interleaver be {5, 2, 4, 0, 1, 3}.

Fig. 6.9 Turbo encoder of Example 6.1



Table 6.1 Transitions for turbo encoder of Example 6.1

x(n) | State at n | State at n+1 | P1(n)
0 | S0 (00) | S0 | 0
1 | S0 (00) | S1 | 1
0 | S1 (10) | S3 | 1
1 | S1 (10) | S2 | 0
0 | S2 (01) | S1 | 0
1 | S2 (01) | S0 | 1
0 | S3 (11) | S2 | 1
1 | S3 (11) | S3 | 0

Solution The two binary memory elements can assume any one of the four states S0 = 00, S1 = 10, S2 = 01, S3 = 11, as shown in Table 6.1. The trellis diagram corresponding to Table 6.1 is shown in Fig. 6.10.
The input sequence is fed to RSC encoder 1. The resultant path through the trellis is shown in Fig. 6.11.
Now, the input is fed through the pseudo-random interleaver shown in Fig. 6.12. The block of permuted data bits is then fed into RSC encoder 2, resulting in the path through the trellis shown in Fig. 6.13.
The encoder output data bits and the parity bits are mapped to symbols as shown in Table 6.2.
Example 6.2 The UMTS (universal mobile telecommunications system) standard turbo encoder with RSC encoder generator function G = [1   (1 + D + D^3)/(1 + D^2 + D^3)] is shown in Fig. 6.14. Find the output code word for the input data sequence x = {1 1 0 0}, assuming the RSC encoder 1 trellis is terminated. Let the interleaver be {2, 6, 4, 5, 0, 1, 3}.

Fig. 6.10 Trellis diagram for encoder of Example 6.1
Fig. 6.11 Trellis path corresponding to input sequence of Example 6.1
Fig. 6.12 Pseudo-random interleaver of Example 6.1

Solution The two binary memory elements can assume any one of four states
S0  000; S1  100; S2  010; S3  110; S4  001; S5  101; S6  011; S7  111:
as shown in Table 6.3. The trellis diagram corresponding to Table 6.3 is shown in
Fig. 6.15.
The input sequence is fed to the RSC encoder 1. The resultant path through the
trellis is shown in Fig. 6.16.
168 6 Turbo Codes

Fig. 6.13 Trellis path corresponding to interleaved input sequence of Example 6.1

Table 6.2 Output of encoder of Example 6.1

Now, the input is fed through the pseudo-random interleaver shown in Fig. 6.17. The block of permuted data bits is then fed into RSC encoder 2, and the resulting path through the trellis is shown in Fig. 6.18. The encoder output data bits and the parity bits are mapped to symbols as shown in Table 6.4.

6.2.3 Turbo Coding Using MATLAB

Example 6.3 Consider the turbo encoder given in [1] using the RSC encoders with
the generator function.

Fig. 6.14 Turbo encoder of Example 6.2

Table 6.3 Transitions for turbo encoder of Example 6.2

x(n)   State at n    State at n+1   P1(n)
0      S0 (000)      S0             0
1      S0 (000)      S1             1
0      S1 (100)      S2             1
1      S1 (100)      S3             0
0      S2 (010)      S5             1
1      S2 (010)      S4             0
0      S3 (110)      S7             0
1      S3 (110)      S6             1
0      S4 (001)      S1             0
1      S4 (001)      S0             1
0      S5 (101)      S3             1
1      S5 (101)      S2             0
0      S6 (011)      S4             1
1      S6 (011)      S5             0
0      S7 (111)      S6             0
1      S7 (111)      S7             1

Fig. 6.15 Trellis diagram for encoder of Example 6.2

G = [1   (1 + D^4)/(1 + D + D^2 + D^3 + D^4)]

(a) Assuming the RSC encoder 1 trellis is terminated, determine the code word produced by the unpunctured encoder for the message x = [1 0 0 1 1 0] using MATLAB. Let the interleaver be [3, 7, 6, 2, 5, 10, 1, 8, 9, 4].
(b) Repeat (a) for the punctured encoder with rate 1/2. The puncturing patterns are

Pu1 = [1 1; 1 0],   Pu2 = [0 0; 0 1]

Fig. 6.16 Trellis path corresponding to input sequence of Example 6.2

Fig. 6.17 Pseudo-random interleaver of Example 6.2




Fig. 6.18 Trellis path corresponding to interleaved input sequence of Example 6.2

Table 6.4 Output of the encoder of Example 6.2



(c) Assuming the RSC encoder 1 trellis is unterminated, determine the code word produced by the unpunctured encoder for the message x = [1 0 0 1 1 0 1 0 1 0] using MATLAB. Let the interleaver be [3, 7, 6, 2, 5, 10, 1, 8, 9, 4].
(d) Repeat (c) for the punctured encoder with rate 1/2, with the puncturing patterns the same as in (b).

Solution The following MATLAB program and MATLAB functions are written and used to find the code words produced by the unpunctured and punctured encoders. For (a) and (b), the program is to be run with ip = [1 0 0 1 1 0] and term1 = 1, whereas for (c) and (d), the program is to be run with ip = [1 0 0 1 1 0 1 0 1 0] and term1 = 0.

(a) The unpunctured turbo code obtained by running the MATLAB program and functions is

x = [1 0 0 1 1 0 1 1 0 0];
p1 = [1 1 0 1 1 1 0 1 0 0]; p2 = [0 1 1 0 1 0 1 1 0 1]

(b) The punctured turbo code obtained is

x = [1 0 0 1 1 0 1 1 0 0]; p1 = [1 0 1 0 0]; p2 = [1 0 0 1 1]

For every 10 information bits, there are 20 code word bits (10 information bits and five parity bits from each RSC encoder); thus, the rate of the punctured turbo code is 1/2.

(c) The unpunctured turbo code obtained is

x = [1 0 0 1 1 0 1 0 1 0];
p1 = [1 1 0 1 1 1 0 0 0 1]; p2 = [0 1 1 0 1 0 1 0 0 0]

(d) The punctured turbo code obtained is

x = [1 0 0 1 1 0 1 0 1 0]; p1 = [1 0 1 0 0]; p2 = [1 0 0 0 0]

Program 6.1 MATLAB program to find output code word



MATLAB function encodedbit.m

MATLAB function turboencode.m
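As an indication of the core operation implemented by the above functions, a minimal sketch of the parity computation of a rate-1/2 RSC encoder with the generator G = [1 (1 + D^4)/(1 + D + D^2 + D^3 + D^4)] of this example is given below; the function name and the shift-register convention are illustrative assumptions, not those of the listed functions.

function [p,state] = rscencode(x,state)
% Sketch of the parity computation of a rate-1/2 RSC encoder with
% G = [1 (1+D^4)/(1+D+D^2+D^3+D^4)]; state = [s1 s2 s3 s4], most recent first
fb = [1 1 1 1];                    % feedback taps from 1+D+D^2+D^3+D^4
ff = [0 0 0 1];                    % feedforward taps from 1+D^4
p = zeros(size(x));
for m = 1:length(x)
    a = mod(x(m)+fb*state.',2);    % register input after feedback
    p(m) = mod(a+ff*state.',2);    % parity bit; the systematic bit is x(m)
    state = [a state(1:end-1)];    % shift the register
end
end

Running such a function twice, once on the input bits and once on their interleaved version, yields the two parity streams p1 and p2 of the unpunctured turbo code.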



6.3 Turbo Decoder

The iterative decoding turbo decoder block diagram is shown in Fig. 6.19.
During the first iteration, no extrinsic information is available from decoder 2; hence, the a priori information is set to zero. Then, the decoder outputs the estimate of the systematic bit as the log-likelihood ratio (LLR)

L1(x̂(n)) = log[P(x(n) = 1 | x′, p1′, La(x̂)) / P(x(n) = 0 | x′, p1′, La(x̂))],   n = 1, 2, …, N      (6.1)

It is assumed that the message bits are statistically independent. Thus, the total LLR is given by

L1(x̂) = Σ(n=1 to N) L1(x̂(n))      (6.2)

Hence, the extrinsic information on the message bits obtained from the first decoder is given as follows:

Le1(x) = L1(x) − La(x) − Lc x′      (6.3)

Fig. 6.19 The iterative decoding turbo decoder block diagram



The term Lc x′ is the information provided by the noisy observation. The extrinsic information Le1(x) and x′ are interleaved before being applied as inputs to the BCJR algorithm in the second decoder. The noisy parity check bits p2′ are also an additional input to the BCJR algorithm. The extrinsic information obtained from the BCJR algorithm is de-interleaved to produce the total log-likelihood ratio

L2(x) = Σ(n=1 to N) log[P(x(n) = 1 | x′, p2′, Le1(x)) / P(x(n) = 0 | x′, p2′, Le1(x))]      (6.4)

which is hard limited to estimate the information bit, based only on the sign of the de-interleaved LLR at the output of the decoder, as expressed by

x̂ = sgn(L2(x))      (6.5)

The extrinsic information

La(x) = L2(x) − (Le1(x) + Lc x′)      (6.6)

is fed back to decoder 1. The extrinsic information of one decoder is used as the a priori input to the other decoder; thus, over the turbo decoder iterations, the extrinsic information is ping-ponged back and forth between the maximum a posteriori (MAP) decoders.
After a certain number of iterations, the log-likelihood L2(x) at the output of
decoder 2 is de-interleaved and delivered to the hard decision device, which esti-
mates the input.
If it is assumed that x(n) = ±1 is transmitted over a Gaussian or fading channel using BPSK modulation, the probability of the matched filter output y(n) is given by Hanzo et al. [4]

P(y(n) | x(n) = +1) = (1/(σ√(2π))) exp[−(Eb/(2σ²))(y(n) − a)²]      (6.7a)

where Eb is the transmitted energy per bit, σ² is the noise variance, and a is the fading amplitude. Similarly,

P(y(n) | x(n) = −1) = (1/(σ√(2π))) exp[−(Eb/(2σ²))(y(n) + a)²]      (6.7b)

Therefore, when we use BPSK over a (possibly fading) Gaussian channel, the term Lc x′(n) can be expressed as follows:

Lc(x′(n) | x(n)) = log[P(x′(n) | x(n) = +1) / P(x′(n) | x(n) = −1)]
               = log[exp(−(Eb/(2σ²))(x′(n) − a)²) / exp(−(Eb/(2σ²))(x′(n) + a)²)]
               = −(Eb/(2σ²))(x′(n) − a)² + (Eb/(2σ²))(x′(n) + a)²      (6.8)
               = (Eb/(2σ²)) · 4a · x′(n)
               = Lc x′(n)

where

Lc = 4a (Eb/(2σ²))

is defined as the channel reliability value.

6.3.1 The BCJR Algorithm

The BCJR algorithm was published in 1974. It is named after its inventors: Bahl,
Cocke, Jelinek, and Raviv. It is for MAP decoding of codes defined on trellises [5].
It was not used in practical implementations for about 20 years, being more complex than the Viterbi algorithm. The BCJR algorithm was vigorously reborn when the turbo code inventors Berrou et al. [1] used a modified version of the BCJR
algorithm in 1993. Consider a trellis section with four states like the one presented
in Fig. 6.20.
In the trellis section, the branches generated by input message bits 1 and −1 are
represented by a dashed line and a solid line, respectively. The variable cðnÞ rep-
resents the branch probabilities at time n, and the variables aðnÞ and bðnÞ are the
forward and backward estimates of the state probabilities at time n based on the past
and future data, respectively. Now the log-likelihood ratios expressed by Eqs. (6.2)
and (6.4) can be computed, using branch probabilities, forward, and backward error
probabilities of the states, as follows:
"P #
as0 ðn  1Þ  cs0 ;s ðnÞ  bs ðnÞ
L1 ð^xÞ ¼ log P R1
0 0
ð6:9Þ
R0 ak1 ðs Þ  ck ðs ; sÞ  bk ðsÞ
6.3 Turbo Decoder 179

Fig. 6.20 Typical trellis section with a; b; c as labels

where s represents the state at time n and s′ stands for the previous state, i.e., the state at time instant n − 1, as in the typical trellis section shown in Fig. 6.20. R1 indicates that the summation is computed over all the state transitions from s′ to s due to message bits x(n) = +1 (i.e., dashed branches). The denominator summation R0 is over the set of all branches originated by message bits x(n) = −1.
For a given state transition, the transmitted signal is the data bit and parity check
bit pair. Also, for a given starting state, the data bit value determines the next state.
Using the Bayes theorem, the branch probability can be expressed as [6]

γs′,s = Pr(x(n)) · Pr(y′(n) | y(n))      (6.10)

The probability of the data bit x(n) in terms of the a priori probability ratio can be written as follows:

Pr(x(n)) = [exp(−½ La(x(n))) / (1 + exp(−La(x(n))))] · exp(½ x(n) La(x(n)))
        = Bn · exp(½ x(n) La(x(n)))      (6.11)

where

La(x(n)) = log[Pr(x(n) = +1) / Pr(x(n) = −1)]      (6.12)

The probabilities of the noisy data bit x′(n) and parity bit p′(n) can be expressed in terms of Gaussian probability distributions as follows:

Pr(y′(n) | y(n)) = Pr(x′(n) | x(n)) · Pr(p′(n) | p(n))
               = (1/√(2πσ²)) exp[−(x′(n) − x(n))²/(2σ²)] · (1/√(2πσ²)) exp[−(p′(n) − p(n))²/(2σ²)]
               = An · exp[(x′(n)x(n) + p′(n)p(n))/σ²]      (6.13)

Since γs′,s appears in both the numerator (where x(n) = +1) and the denominator (where x(n) = −1) of Eq. (6.9), the An Bn factor cancels, as it is independent of x(n). Thus, the branch probability γs′,s(n) can be expressed as

γs′,s(n) = exp[½ (x(n) La(x(n)) + x(n) Lc x′(n) + p(n) Lc p′(n))]      (6.14)

The forward recursion is computed as

αs(n) = Σs′ αs′(n−1) γs′,s(n)      (6.15)

The backward recursion is computed as

βs′(n−1) = Σs γs′,s(n) βs(n)      (6.16)

For the trellis section of Fig. 6.20, for example,

α0(n) = α0(n−1) γ0,0(n) + α2(n−1) γ2,0(n)      (6.17a)

β0(n) = β1(n+1) γ0,1(n+1) + β0(n+1) γ0,0(n+1)      (6.17b)

The recursive calculation of α0(n) and β0(n) as in Eqs. (6.17a) and (6.17b) is illustrated in Fig. 6.21.
The following simple example illustrates the recursive computation of forward
and backward state error probabilities.
Example 6.4 Consider the following trellis diagram shown in Fig. 6.22 with the
following probabilities.

γ00(1) = 0.48,  γ00(2) = 0.485,  γ00(3) = 0.6
γ01(1) = 0.09,  γ01(2) = 0.1
γ10(2) = 0.1,   γ10(3) = 0.7,    γ11(2) = 0.485

Compute the forward and backward state error probabilities of the trellis.

Fig. 6.21 Illustration of recursive calculation of a0 ðnÞ and b0 ðnÞ

Fig. 6.22 Trellis diagram of Example 6.4



Solution With the initial value α0(0) = 1, the forward recursion yields the values

α0(1) = α0(0) γ00(1) = 0.48
α1(1) = α0(0) γ01(1) = 0.09
α0(2) = α0(1) γ00(2) + α1(1) γ10(2) = 0.2418
α1(2) = α0(1) γ01(2) + α1(1) γ11(2) = 0.0916
α0(3) = α0(2) γ00(3) + α1(2) γ10(3) = 0.2092

With the initial value β0(3) = 1, the backward recursion yields the values

β0(2) = β0(3) γ00(3) = 0.6
β1(2) = β0(3) γ10(3) = 0.7
β0(1) = β0(2) γ00(2) + β1(2) γ01(2) = 0.3610
β1(1) = β0(2) γ10(2) + β1(2) γ11(2) = 0.3995
β0(0) = β0(1) γ00(1) + β1(1) γ01(1) = 0.2092
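These recursions can be checked with a few lines of MATLAB; in the sketch below, the branch probabilities not listed above are taken as zero.

% Sketch: forward and backward recursions of Example 6.4
g00 = [0.48 0.485 0.6]; g01 = [0.09 0.1 0];  % branch probabilities
g10 = [0 0.1 0.7];      g11 = [0 0.485 0];   % missing ones are zero
a = zeros(2,4); a(1,1) = 1;                  % alpha, columns are n = 0..3
for m = 1:3
    a(1,m+1) = a(1,m)*g00(m)+a(2,m)*g10(m);
    a(2,m+1) = a(1,m)*g01(m)+a(2,m)*g11(m);
end
b = zeros(2,4); b(1,4) = 1;                  % beta, columns are n = 0..3
for m = 3:-1:1
    b(1,m) = b(1,m+1)*g00(m)+b(2,m+1)*g01(m);
    b(2,m) = b(1,m+1)*g10(m)+b(2,m+1)*g11(m);
end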

6.3.2 Turbo Decoding Illustration

Example 6.5 Assume the channel adds unity-variance Gaussian noise to the code generated by the turbo encoder considered in Example 6.1. Decode the received sequence.
Solution For a random run with unity variance Gaussian noise, the received code is
given as follows:

x′(n)      p1′(n)     p2′(n)
0.9039     0.1635     1.2959
−0.3429    −0.2821    2.6317
−1.6952    −0.1377    2.2603
−2.6017    −0.4482    1.5740
−1.4019    1.6934     1.8197
1.7155     2.2967     1.6719

Using the trellis diagram shown in Fig. 6.10, the branch probabilities for the first stage are computed as

γ0,0(1) = exp(−x′(0) − p1′(0)) = exp(−0.9039 − 0.1635) = 0.34390149993822
γ0,1(1) = exp(x′(0) + p1′(0)) = exp(0.9039 + 0.1635) = 2.90780935872520
γ1,2(1) = exp(x′(0) − p1′(0)) = exp(0.9039 − 0.1635) = 2.09677405639736
γ1,3(1) = exp(−x′(0) + p1′(0)) = exp(−0.9039 + 0.1635) = 0.47692310811885
γ2,0(1) = exp(x′(0) + p1′(0)) = exp(0.9039 + 0.1635) = 2.90780935872520
γ2,1(1) = exp(−x′(0) − p1′(0)) = exp(−0.9039 − 0.1635) = 0.34390149993822
γ3,2(1) = exp(−x′(0) + p1′(0)) = exp(−0.9039 + 0.1635) = 0.47692310811885
γ3,3(1) = exp(x′(0) − p1′(0)) = exp(0.9039 − 0.1635) = 2.09677405639736

Repeating the branch probability computation for the other stages of the trellis, the branch probabilities for all stages of the trellis for this example are given as follows:

n   γ0,0(n)/γ2,1(n)   γ0,1(n)/γ2,0(n)   γ1,2(n)/γ3,3(n)   γ1,3(n)/γ3,2(n)
1 0.34390149993822 2.90780935872520 2.09677405639736 0.47692310811885
2 1.86824595743222 0.53526142851899 0.94101142324168 1.06268635566092
3 6.25199116868593 0.15994904231610 0.21066206860156 4.74693905095638
4 21.11323299367156 0.04736366052038 0.11607717585511 8.61495804522538
5 0.74714201360297 1.33843363349046 0.04526143199369 22.0938657031320
6 0.01809354561793 55.26832723205136 0.55922689149115 1.78818296332918

The forward state probabilities are computed using Eq. (6.15), and the resulting
normalized forward state probabilities are given as follows:

n   α0(n)   α1(n)   α2(n)   α3(n)
0 1.00000000000000 0 0 0
1 0.10576017207126 0.89423982792874 0 0
2 0.09657271753492 0.02766854681958 0.41128905912021 0.46446967652529
3 0.11754442548550 0.45413087700815 0.38808965080132 0.04023504670503
4 0.16649906702111 0.54604949205154 0.02659440531342 0.26085703561394
5 0.00875863686136 0.01328728572633 0.31685999430997 0.66109408310235
6 0.89416277694224 0.02500892891120 0.06073869015360 0.02008960399296

The backward state probabilities are computed using Eq. (6.16), and the resulting normalized backward probabilities for this example are given as follows:

n   β0(n)   β1(n)   β2(n)   β3(n)
6 0.45530056303648 0.09512620145755 0.07905302691388 0.37052020859209
5 0.02546276878415 0.09512620145755 0.01580448043695 0.50822864080378
4 0.02462941844490 0.01001105352469 0.96125810870846 0.00410141932195
3 0.00003769540412 0.98178511456649 0.00492923752535 0.01324795250404
2 0.00001104782103 0.00204434600799 0.00001979111732 0.99792481505365
1 0.00032726925280 0 0.99967273074720 0
0 1.00000000000000 0 0 0

Now, using Eq. (6.9), we compute L1(x), the LLR from decoder 1, as follows:

L1(x(n)) = log[(α0(n−1)β1(n)γ0,1(n) + α1(n−1)β2(n)γ1,2(n) + α2(n−1)β0(n)γ2,0(n) + α3(n−1)β3(n)γ3,3(n)) / (α0(n−1)β0(n)γ0,0(n) + α1(n−1)β3(n)γ1,3(n) + α2(n−1)β1(n)γ2,1(n) + α3(n−1)β2(n)γ3,2(n))]      (6.18)

The resulting LLR from decoder 1 is

L1(x) = [5.00794986257243; 4.52571043846903; −5.03587871714769; −6.73223129201010; −5.45139857004191; 11.61281973458293]

and

Lc(x) = [1.80780000000000; −0.68580000000000; −3.39040000000000; −5.20340000000000; −2.80380000000000; 3.43100000000000]

The soft and hard decisions are given as:

L1(x̂(n))   x̂(n) = sign(L1(x̂(n)))
5.0079 1
4.5257 1
−5.035 0
−6.732 0
−5.451 0
11.61 1

When compared to the trellis path of decoder 1 shown in Fig. 6.11, it is observed that the decoder has correctly estimated all data bits. Since the a priori information for the first iteration is zero, the extrinsic information Le1(x) is given as

Le1(x) = L1(x) − Lc(x)
      = [5.00794986257243; 4.52571043846903; −5.03587871714769; −6.73223129201010; −5.45139857004191; 11.61281973458293]
      − [1.8078; −0.6858; −3.3904; −5.2034; −2.8038; 3.4310]
      = [3.20014986257242; 5.21151043846903; −1.64547871714769; −1.52883129201010; −2.64759857004191; 8.18181973458293]

The extrinsic information Le1(x) from decoder 1 and the noisy information bits x′(n) are interleaved before being fed as inputs to the BCJR algorithm of decoder 2. After relabeling, the following are the inputs to the BCJR algorithm of decoder 2.

Le1(x)               x′(n)     p2′(n)
8.18181973458293     1.7155    1.2959
−1.64547871714769    −1.6952   2.6317
−2.64759857004191    −1.4019   2.2603
3.20014986257242     0.9039    1.574
5.21151043846903     −0.3429   1.8197
−1.52883129201010    −2.6017   1.6719

Using the trellis diagram shown in Fig. 6.10, the branch probabilities for the first stage are computed as

γ0,0(1) = exp(−0.5 Le1(x(0)) − x′(0) − p2′(0)) = exp(−0.5 × 8.18181973458293 − 1.7155 − 1.2959) = 0.00082320123987
γ0,1(1) = exp(0.5 Le1(x(0)) + x′(0) + p2′(0)) = exp(0.5 × 8.18181973458293 + 1.7155 + 1.2959) = 1214.76979330438
γ1,2(1) = exp(0.5 Le1(x(0)) + x′(0) − p2′(0)) = exp(0.5 × 8.18181973458293 + 1.7155 − 1.2959) = 90.9681883921071
γ1,3(1) = exp(−0.5 Le1(x(0)) − x′(0) + p2′(0)) = exp(−0.5 × 8.18181973458293 − 1.7155 + 1.2959) = 0.01099285385007
γ2,0(1) = exp(0.5 Le1(x(0)) + x′(0) + p2′(0)) = 1214.76979330438
γ2,1(1) = exp(−0.5 Le1(x(0)) − x′(0) − p2′(0)) = 0.00082320123987
γ3,2(1) = exp(−0.5 Le1(x(0)) − x′(0) + p2′(0)) = 0.01099285385007
γ3,3(1) = exp(0.5 Le1(x(0)) + x′(0) − p2′(0)) = 90.9681883921071

Repeating the branch probability computation for the other stages of the trellis, the branch probabilities for all stages of the trellis for this example are given as follows:

n   γ0,0(n)/γ2,1(n)   γ0,1(n)/γ2,0(n)   γ1,2(n)/γ3,3(n)   γ1,3(n)/γ3,2(n)
1   0.00082320123987   1214.76979330438   90.96818839210695   0.010992853850
2   0.89247155103612   1.12048389536   0.00580149660962   172.369315590337
3   1.59264998322900   0.62788435032   0.00683294655359   146.349747090304
4   0.01694173912378   59.02581740243   2.53444564152908   0.394563601450
5   0.01686431851993   59.29679274132   1.55761408724524   0.642007547433
6   5.44237554167172   0.18374329231   0.00648660728077   154.163795758670

The forward recursion can be calculated using Eq. (6.15). The resulting normalized values are as follows:

n   α0(n)   α1(n)   α2(n)   α3(n)
0 1.00000000000000 0 0 0
1 0.00000067765982 0.99999932234018 0 0
2 0.00000000350858 0.00000000440497 0.00003365622987 0.99996633585658
3 0.00000014443156 0.00000036627375 0.99995279793500 0.00004669135969
4 0.99971058159622 0.00028708385150 0.00000032776038 0.00000200679189
5 0.00028464899198 0.99970462721982 0.00000756282988 0.00000316095832
6 0.00001006025926 0.00000060639718 0.00004523543737 0.99994409790619

The backward recursion can be calculated according to Eq. (6.16). The resulting
normalized values are as follows:

n   β0(n)   β1(n)   β2(n)   β3(n)
6 0.99998080264562 0.00000062587196 0.00001074035175 0.00000783113066
5 0.00001006166024 0.99987703069337 0.00000834425763 0.00010456338877
4 0.00013868974158 0.00143501008710 0.00007080225952 0.99835549791180
3 0.01189148350941 0.00173100215279 0.98500767914419 0.00136983519361
2 0.93003760467044 0.01096095473547 0.03420391063551 0.02479752995857
1 0.01760402234259 0.48239597765741 0.01760402234259 0.48239597765741
0 0.25000000000000 0.25000000000000 0.25000000000000 0.25000000000000

Now, we compute the LLR from decoder 2 using Eq. (6.18). The resulting LLR from decoder 2 is

L2(x) = [25.71127513129983; −19.85060333367479; −16.52345972282743; 12.59341638166602; 11.21364990579842; −10.0677959239041]

By slicing the soft decisions, we get the hard decisions 1 0 0 1 1 0. Comparing this
with the encoder 2 trellis path in Fig. 6.13, it is observed that the decoder has
correctly estimated all data bits.
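The computations of this example can be organized compactly in MATLAB. The sketch below performs one probability-domain BCJR pass for decoder 1; the trellis arrays encode Table 6.1, and the initialization of β assumes the terminated encoder 1 trellis (the program used to produce the tables above may differ in such conventions).

% Sketch: one probability-domain BCJR pass (decoder 1 of Example 6.5)
xr = [0.9039 -0.3429 -1.6952 -2.6017 -1.4019 1.7155];  % x'(n)
pr = [0.1635 -0.2821 -0.1377 -0.4482  1.6934 2.2967];  % p1'(n)
nxt = [1 2; 4 3; 2 1; 3 4];   % next state (states 1..4 = S0..S3) for u = 0/1
par = [0 1; 1 0; 0 1; 1 0];   % parity bit of each transition, from Table 6.1
N = 6; g = zeros(4,4,N);
for m = 1:N
    for s = 1:4
        for u = 0:1           % BPSK-map the data and parity bits to -1/+1
            g(s,nxt(s,u+1),m) = exp((2*u-1)*xr(m)+(2*par(s,u+1)-1)*pr(m));
        end
    end
end
a = zeros(4,N+1); a(1,1) = 1;           % forward recursion, Eq. (6.15)
for m = 1:N
    a(:,m+1) = g(:,:,m).'*a(:,m); a(:,m+1) = a(:,m+1)/sum(a(:,m+1));
end
b = zeros(4,N+1); b(1,N+1) = 1;         % backward recursion, Eq. (6.16)
for m = N:-1:1
    b(:,m) = g(:,:,m)*b(:,m+1); b(:,m) = b(:,m)/sum(b(:,m));
end
L1 = zeros(1,N);                        % LLRs, Eq. (6.18)
for m = 1:N
    num = 0; den = 0;
    for s = 1:4
        num = num+a(s,m)*g(s,nxt(s,2),m)*b(nxt(s,2),m+1);  % u = 1 branches
        den = den+a(s,m)*g(s,nxt(s,1),m)*b(nxt(s,1),m+1);  % u = 0 branches
    end
    L1(m) = log(num/den);
end

With these conventions, the signs of L1 should reproduce the hard decisions 1 1 0 0 0 1 obtained above.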

6.3.2.1 Turbo Decoding Using MATLAB

The following example illustrates the turbo decoding using MATLAB.


Example 6.6 When the code generated by the turbo encoder shown in Fig. 6.9 is transmitted over an AWGN channel with channel reliability factor Lc = 2, the following sequence is received. Decode the received sequence using MATLAB.

n   x′(n)   p1′(n)   p2′(n)
1   3.01    3.13     −1.7
2   −0.23   −1.45    −1.7
3   −0.25   −0.18    1.82
4   0.83    0.91     2.0
5   −0.26   −0.45    −3.0
6   −0.8    1.3      1.46
7   0.43    1.98     2.1
8   −0.74   −0.54    0.3

Solution The following MATLAB program and MATLAB functions are written and used to decode the received sequence. After the first iteration, the LLRs and hard decisions of the two decoders are as follows:

L1(x̂(n))   x̂(n) = sign(L1(x̂(n)))   L2(x̂(n))   x̂(n) = sign(L2(x̂(n)))
11.3900    1    11.9775     1
3.7217     1    7.1019      1
0.3753     1    −5.1767     0
0.4850     1    −4.5544     0
−0.4167    0    4.8071      1
−4.4225    0    −11.0942    0
3.7418     1    5.7408      1
−3.8245    0    −11.3693    0
After the second iteration, the LLRs and hard decisions become as follows:

L1(x̂(n))   x̂(n) = sign(L1(x̂(n)))   L2(x̂(n))   x̂(n) = sign(L2(x̂(n)))
20.9509    1    20.9633     1
13.5723    1    28.2036     1
−15.2116   0    −23.1493    0
−14.6617   0    −17.3413    0
13.4480    1    21.2953     1
−19.1687   0    −30.6052    0
15.5069    1    18.3074     1
−21.0533   0    −33.4231    0
Thus, the transmitted input sequence is x(n) = (1 1 0 0 1 0 1 0).

Program 6.2 MATLAB program to decode the received sequence of Example 6.6

MATLAB function trellis.m



MATLAB function turbodec.m



6.3.3 Convergence Behavior of the Turbo Codes

A typical BER curve for a turbo code is shown in Fig. 6.23. Three regions, namely the low Eb/No region, the waterfall region, and the error floor region, can be identified. In the low Eb/No region, the BER decreases slowly as Eb/No increases. For intermediate values of Eb/No, the BER decreases rapidly in the waterfall region with an increase in Eb/No. In this region, the coding gain approaches the theoretical limit. For large Eb/No, an error floor occurs, where the performance depends on the minimum Hamming distance of the code. The error floor is due to the weight distribution of turbo codes. Normally, turbo codes do not have large minimum distances. Hence, lowering the error floor results in better codes, which in some cases may also result in faster convergence in decoding. One effective way of lowering the error floor is to use an appropriate interleaver.

6.3.4 EXIT Analysis of Turbo Codes

Extrinsic information transfer (EXIT) charts [7] can be used as a tool to aid the construction of turbo codes. An EXIT chart is the union of two curves that characterize the two decoders used in a turbo decoder. Each curve represents a relation between the input and the output of one decoder. This relation is the mutual information between the output of the decoder (Le, the extrinsic information) and the initial message that was encoded before passing through the channel, plotted against the mutual information between the input of the decoder (La, the a priori information) and the message.

Fig. 6.23 Typical BER curve of turbo codes

In a turbo decoder, the extrinsic information of the first decoder (Le1) is used as the a priori information of the second decoder (La2) and vice versa. It is suggested in [6] that the a priori input to a constituent decoder can be modeled by

La = μa x + ga      (6.19)

where x is the known transmitted systematic bit, ga is Gaussian noise, and μa = σa²/2.
For each La, the mutual information IA and IE are computed as [6]

IA = ½ Σ(x=±1) ∫ pA(e | x) log2[2 pA(e | x) / (pA(e | x = 1) + pA(e | x = −1))] de,   0 ≤ IA ≤ 1      (6.20a)

where pA is the probability density function of La. For Gaussian noise, Eq. (6.20a) can be rewritten as

IA = 1 − ∫ (1/√(2πσa²)) exp(−(y − σa²/2)²/(2σa²)) log2(1 + e^(−y)) dy      (6.20b)

IE = ½ Σ(x=±1) ∫ pE(e | x) log2[2 pE(e | x) / (pE(e | x = 1) + pE(e | x = −1))] de,   0 ≤ IE ≤ 1      (6.21)

where pE is the probability density function of Le. Viewing IE as a function of IA and Eb/No, the EXIT characteristics are defined as

IE = T(IA, Eb/No)      (6.22)

For fixed Eb/No, the above transfer characteristic can be rewritten as follows:

IE = T(IA)      (6.23)

Once IA1 and IE1 for decoder 1 and IA2 and IE2 for decoder 2 are obtained using Eqs. (6.20a), (6.20b), and (6.21), they are drawn on a single chart, with IA1 on the x axis and IE1 on the y axis for decoder 1 and, for decoder 2, IE2 on the x axis and IA2 on the y axis, resulting in the EXIT chart for the turbo decoder.

The steps involved in obtaining an EXIT curve can be summarized as follows:

1. Specify the turbo code rate R and the Eb/No of interest, and determine the AWGN noise variance No/2.
2. Specify the μa of interest and determine σa² = 2μa.
3. Run the turbo code simulator, which yields the encoded bits y = [x p1 p2].
4. Find La using Eq. (6.19) and the probability density function pA(e | x) using a histogram of La. Then, find the mutual information IA using Eq. (6.20a).
5. Run the BCJR decoder using the model r = y + g with g ~ N(0, σ²), σ² = No/2. In addition to r, the decoder has Lc = 2/σ² and the La given by Eq. (6.19).
6. Find Le, then determine the probability density function pE(e | x) from the histogram of Le, and calculate the mutual information IE using Eq. (6.21).
7. If all values of μa of interest are exhausted, then stop and plot IA versus IE (a numerical sketch of the IA computation is given after this list). Otherwise, go to step 2.
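A numerical sketch of the mutual information computation via Eq. (6.20b), rather than via a histogram, is given below; the value of sigA is an illustrative assumption.

% Sketch: I_A of Eq. (6.20b) by numerical integration
sigA = 1.5;                                  % illustrative a priori std dev
f = @(y) exp(-(y-sigA^2/2).^2/(2*sigA^2))/sqrt(2*pi*sigA^2).*log2(1+exp(-y));
IA = 1 - integral(f,-20,20);                 % finite limits approximate the real line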
The EXIT charts at Eb/No = 0.5 dB and −0.2 dB are shown in Fig. 6.24 for the rate 1/3 turbo encoder considered in [1]. From Fig. 6.24, it is observed that the curves cross for the EXIT chart at Eb/No = −0.2 dB, and the turbo decoder does not converge. Hence, the ensemble threshold for the rate 1/3 turbo encoder [1] must be at around −0.2 dB.

Fig. 6.24 EXIT charts for the rate 1/3 turbo encoder [1] at Eb/No = 0.5 dB and −0.2 dB

Fig. 6.25 EXIT charts for the rate 1/3 UMTS turbo encoder at Eb/No = −4 dB and −5 dB

The EXIT charts at Eb/No = −5 dB and −4 dB are shown in Fig. 6.25 for the rate 1/3 UMTS turbo encoder considered in Example 6.2. From Fig. 6.25, it is observed that the curves cross for the EXIT chart at Eb/No = −5 dB, and the turbo decoder does not converge. Hence, the ensemble threshold for the rate 1/3 UMTS turbo encoder must be at around −5 dB. For a given code and channel, the convergence of the turbo decoder can be checked by examining whether the decoders' EXIT curves cross. The ensemble threshold can be estimated by finding the Eb/No for which the EXIT curves of the decoders cross. The speed of the decoder can also be inferred from the EXIT curves: the wider the gap between the EXIT curves of the two decoders, the fewer the number of iterations required for convergence.

6.4 Performance Analysis of the Turbo Codes

6.4.1 Upper Bound for the Turbo Codes in AWGN Channel

Assuming that the transmitted data symbols are BPSK modulated and coherently demodulated at the receiver, the bit error probability of turbo codes on AWGN channels can be upper bounded by the union bound [8, 9]

BERub ≤ Σ(w=1 to N) Σ(d=df to ∞) A(w, d) (w/N) Q(√(2dR Eb/No))      (6.24)

where A(w, d) is the number of code words of input weight w and total weight d. The code's block size is given by the number of information bits N and the code rate R. Ignoring the effect of the tail (assuming that the tail length is much smaller than N), we can use our usual definition of N as the length of the whole source sequence, including the tail. Thus, changing the order of summation:

BERub ≤ Σ(d=df to ∞) [Σ(w=1 to N) A(w, d) (w/N)] Q(√(2dR Eb/No))
      ≤ Σ(d=df to ∞) Ad Q(√(2dR Eb/No))      (6.25)

where Ad is the total information weight of all code words of weight d divided by the number of information bits per code word, as defined by

Ad = Σ(w=1 to N) (w/N) A(w, d)      (6.26)

Now, define Nd to be the number of code words of total weight d and w̄d to be their average information weight. Thus,

Nd w̄d = Σ(w=1 to N) A(w, d) w      (6.27)

Ad = (Nd/N) w̄d      (6.28)

where Nd/N is called the effective multiplicity of code words of weight d. Substituting Eq. (6.28) in Eq. (6.25), we obtain

BERub ≤ Σ(d=df to ∞) (Nd/N) w̄d Q(√(2dR Eb/No))      (6.29)

6.4.2 Upper Bound for Turbo Codes in Rayleigh Fading Channel

In MRC, the receiver weights the incoming signals on the antennas by the respective conjugates of the complex fading random variables. The pairwise bit error probability with MRC in a Rayleigh fading channel for the BPSK case is given by [10]

Pd,MRC = (1/π) ∫(θ=0 to π/2) [sin²θ / (sin²θ + Eb/No)]^(Ld) dθ      (6.30)

Recalling the result in [11]

(1/π) ∫(θ=0 to π/2) [sin²θ / (sin²θ + γ)]^n dθ = [Pe]^n Σ(k=0 to n−1) C(n−1+k, k) (1 − Pe)^k      (6.31)

where

Pe = ½ (1 − √((Eb/No)/(1 + Eb/No)))

Using Eq. (6.31) in Eq. (6.30), we obtain, in closed form,

Pd,MRC = [Pe]^(Ld) Σ(k=0 to Ld−1) C(Ld−1+k, k) (1 − Pe)^k      (6.32)

Then, the upper bound on the BER performance of turbo codes in a Rayleigh fading channel with MRC diversity can be expressed as

BER_Rayleigh,MRC ≤ Σ(d=df to ∞) Ad Pd,MRC      (6.33)

If there is no diversity, i.e., L = 1, the upper bound on the BER performance of turbo codes in a Rayleigh fading channel can be expressed as

BER_Rayleigh ≤ Σ(d=df to ∞) Ad Pd      (6.34)

Fig. 6.26 RSC encoder of Example 6.7

where

Pd = [Pe]^d Σ(k=0 to d−1) C(d−1+k, k) (1 − Pe)^k

Example 6.7 Consider a turbo encoder using the RSC encoder shown in Fig. 6.26 with free distance 5. Plot the upper bound BER versus Eb/No performance of the turbo encoder for an interleaver length of 100 in the AWGN channel and in the Rayleigh fading channel with MRC diversity for L = 2.

Solution The set of coefficients Ad used to compute the bound for an interleaver length of 100, as quoted in [8], is given as follows:

d Ad d Ad
8 0.039881 22 33.31
9 0.079605 23 54.65
10 0.1136 24 91.23
11 0.1508 25 154.9
12 0.1986 26 265.5
13 0.2756 27 455.6
14 0.4079 28 779
15 0.6292 29 1327
16 1.197 30 2257
17 2.359 31 3842
18 4.383 32 6556
19 7.599 33 11221
20 12.58 34 19261
21 20.46 35 33143

The following MATLAB program is written and used to plot the upper bound BER performance versus Eb/No of the turbo encoder with an interleaver length of 100 (Fig. 6.27).

Program 6.3 MATLAB program to compute the upper bound BER for different Eb/No
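In outline, the computation can be sketched as follows using Eqs. (6.29), (6.31), and (6.33) and the Ad values tabulated above; the code rate R = 1/3 and the plotting details are assumptions of this sketch, and the printed listing may differ.

% Sketch of the upper bound computation of Eqs. (6.29) and (6.33)
d = 8:35;
Ad = [0.039881 0.079605 0.1136 0.1508 0.1986 0.2756 0.4079 0.6292 ...
      1.197 2.359 4.383 7.599 12.58 20.46 33.31 54.65 91.23 154.9 ...
      265.5 455.6 779 1327 2257 3842 6556 11221 19261 33143];
R = 1/3; L = 2;                        % assumed code rate; MRC diversity order
EbN0dB = 0:10; EbN0 = 10.^(EbN0dB/10);
Qf = @(z) 0.5*erfc(z/sqrt(2));
berG = zeros(size(EbN0)); berM = zeros(size(EbN0));
for i = 1:length(EbN0)
    berG(i) = sum(Ad.*Qf(sqrt(2*d*R*EbN0(i))));          % Eq. (6.29), AWGN
    Pe = 0.5*(1-sqrt(EbN0(i)/(1+EbN0(i))));              % Eq. (6.31)
    for j = 1:length(d)
        k = 0:L*d(j)-1;
        cb = arrayfun(@(kk) nchoosek(L*d(j)-1+kk,kk),k); % binomial terms
        berM(i) = berM(i)+Ad(j)*Pe^(L*d(j))*sum(cb.*(1-Pe).^k); % Eq. (6.33)
    end
end
semilogy(EbN0dB,berG,EbN0dB,berM); grid on;
xlabel('E_b/N_o (dB)'); ylabel('BER bound');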

Fig. 6.27 Upper bound BER performance of turbo encoder of Example 6.7 with interleaver length of 100

6.4.3 Effect of Free Distance on the Performance of the Turbo Codes

The BER performance of turbo codes for ML decoding is upper bounded by Eq. (6.29). Since the BER performance of the code is dominated by the free distance term (for d = df) at moderate and high SNRs, for the AWGN channel, Eq. (6.29) can be written as given below [12]

BER_df,AWGN ≈ wf (Nf/N) Q(√(2 df R Eb/No))      (6.35)

where Nf and wf correspond to Nd and w̄d for d = df.



Fig. 6.28 Free distance asymptotes for turbo codes in AWGN and Rayleigh fading channels with two different interleavers

BER_df,Rayleigh ≈ wf (Nf/N) [Pe]^(df) Σ(k=0 to df−1) C(df−1+k, k) (1 − Pe)^k,  with  Pe = ½ (1 − √((R Eb/No)/(1 + R Eb/No)))      (6.36)

For the turbo code considered in [13], the BER performance is evaluated in AWGN and Rayleigh fading channels with a pseudo-random interleaver of length 65,536 with Nf = 3, df = 6, wf = 2, and with a 120 × 120 rectangular interleaver with Nf = 28,900, df = 12, wf = 4. The following MATLAB program is written and used to evaluate the performance in AWGN and Rayleigh fading channels.

Program 6.4 MATLAB Program for free distance asymptotes in AWGN and
Rayleigh fading channels for two different interleavers
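In outline, the asymptote computation for the pseudo-random interleaver case can be sketched as follows using Eqs. (6.35) and (6.36); the code rate R is an assumption of the sketch, and the rectangular interleaver case follows in the same way with its own Nf, df, and wf.

% Sketch: free distance asymptotes, Eqs. (6.35) and (6.36)
N = 65536; Nf = 3; df = 6; wf = 2;     % pseudo-random interleaver parameters
R = 1/3;                               % assumed code rate
EbN0dB = 0:0.5:4; EbN0 = 10.^(EbN0dB/10);
Qf = @(z) 0.5*erfc(z/sqrt(2));
berAWGN = wf*Nf/N*Qf(sqrt(2*df*R*EbN0));                 % Eq. (6.35)
Pe = 0.5*(1-sqrt(R*EbN0./(1+R*EbN0)));
s = zeros(size(EbN0));
for k = 0:df-1
    s = s+nchoosek(df-1+k,k)*(1-Pe).^k;
end
berRay = wf*Nf/N*Pe.^df.*s;                              % Eq. (6.36)
semilogy(EbN0dB,berAWGN,EbN0dB,berRay); grid on;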

The free distance asymptotes for the pseudo-random and rectangular interleavers in AWGN and Rayleigh fading channels obtained from the above MATLAB program are shown in Fig. 6.28. From Fig. 6.28, it can be observed that the rectangular interleaver exhibits relatively poor performance. This is due to the fact that the rectangular interleaver has a large effective multiplicity compared to that of the pseudo-random interleaver.

6.4.4 Effect of Number of Iterations on the Performance of the Turbo Codes

The BER performance of the UMTS turbo code for a frame length of 40 is shown in Fig. 6.29. It can be seen that as the number of iterations increases, there is a significant improvement in BER performance. However, beyond a certain number of iterations, no further improvement can be observed. For complexity reasons, 4–10 iterations are used in turbo decoding.

Fig. 6.29 Effect of number of iterations on the BER performance of turbo codes

6.4.5 Effect of Puncturing on the Performance of the Turbo Codes

The BER performance comparison of the unpunctured and the punctured turbo codes is shown in Fig. 6.30. For this, a turbo encoder is considered that uses RSC encoders with the generator function

G = [1   (1 + D^2)/(1 + D + D^2)]

A random interleaver of length 1,000 with odd–even separation is used. An AWGN channel with BPSK modulation is assumed. In the decoding, the log-BCJR algorithm with 3 iterations is used. From Fig. 6.30, it can be observed that the unpunctured turbo code gives a gain of about 0.6 dB over the punctured turbo code.

Fig. 6.30 Effect of puncturing on the BER performance of turbo codes



6.5 Problems

1. For the encoder shown in Fig. 6.31


(a) Find the impulse response
(b) Draw the state diagram
(c) Obtain its equivalent recursive encoder
(d) Find the impulse response of the recursive encoder obtained
(e) Draw the state diagram of the recursive encoder obtained

2. Draw the equivalent RSC encoder of the convolutional encoder with generator sequences g1 = [1 1 1 1 1], g2 = [1 0 0 0 1].
3. For the turbo encoder shown in Fig. 6.14, find the code word for the input sequence x = {1 1 0 1 0 1 0}. Let the interleaver be {5, 3, 4, 0, 6, 2, 1}.
4. For the turbo encoder shown in Fig. 6.9, find the code word for the input sequence x = {1 1 0 0 1 0 1 0}. Let the interleaver be {7, 5, 1, 2, 4, 3, 6, 0}.
5. Consider the CDMA2000 standard encoder shown in Fig. 6.32. Find the code word for the input sequence x = {1 0 1 1 0 0}, assuming that the encoder's trellis is terminated. Let the interleaver be {0, 3, 1, 5, 2, 4}.
6. Decode the following received sequence when the turbo code generated in
Example 6.2 was transmitted over an AWGN channel with unity noise variance

x′(n)    p1′(n)    p2′(n)
1.3209 2.4883 −1.3369
0.8367 −2.5583 −3.0404
−1.8629 −0.6555 −1.6638
−2.6356 0.3157 0.1266
−1.0138 0.999 −1.3059
1.4864 0.6762 −1.0409
−2.262 −0.7018 −0.5412

Fig. 6.31 Non-recursive systematic convolutional encoder



Fig. 6.32 CDMA2000 standard turbo encoder

6.6 MATLAB Exercises

1. Write a MATLAB program to construct an EXIT chart for turbo codes.


2. Write a MATLAB program to simulate the performance of unpunctured and
punctured turbo codes.

References

1. Berrou, C., Glavieux, A., Thitimajshima, P.: Near Shannon limit error-correcting coding and
decoding: turbo-codes. In: Proceedings of ICC 1993, Geneva, Switzerland, pp. 1064–1070
(1993)
2. Berrou, C., Glavieux, A.: Near optimum error correcting coding and decoding: turbo-codes.
IEEE Trans. Commun. 44(10), 1261–1271 (1996)
3. Jung, P., Nasshan, M.: Performance evaluation of turbo codes for short frame transmission
systems. Electron. Lett. 30(2), 111–113 (1994)
4. Hanzo, L., Liew, T.H., Yeap, B.L.: Turbo Coding, Turbo Equalisation and Space Time
Coding for Transmission Over Fading Channels. IEEE Press, Wiley Ltd., Hoboken (2002)

5. Bahl, L., Cocke, J., Jelinek, F., Raviv, J.: Optimal decoding of linear codes for minimizing
symbol Error rate. IEEE Trans. Inf. Theor. 20, 284–287 (1974)
6. ten Brink, S.: Convergence behavior of iteratively decoded parallel concatenated codes. IEEE
Trans. Commun. 49(10), 1727–1737 (2001)
7. Ryan, W.E., Lin, S.: Channel Codes: Classical and Modern. Cambridge University Press, Cambridge (2009)
8. Benedetto, S., Montorsi, G.: Unveiling turbo codes: some results on parallel concatenated
coding schemes. IEEE Trans. Info. Theor. 42, 409–429 (1996)
9. Divsalar, D., Dolinar, S., McEliece, R.J., Pollara, F.: Transfer function bounds on the
performance of turbo codes. TDA progress report 42-122, JPL, Caltech, August 1995
10. Ramesh, A., Chockalingam, A., Milstein, L.B.: Performance analysis of turbo codes on
Nakagami fading channels with diversity combining. WRL-IISc-TR-108, Wireless Research
Lab Technical Report, Indian Institute of Science, Bangalore, Jan 2001
11. Goldsmith, A., Alouini, M.-S.: A unified approach for calculating error rates of linearly
modulated signals over generalized fading channels. IEEE Trans. Commun. 47, 1324–1334
(1999)
12. Du, K.-L., Swamy, M.N.S.: Wireless Communications: Communication Systems From RF
Subsystems to 4G Enabling Technologies. Cambridge University Press, Cambridge (2010)
13. Schlegel, C.B., Perez, L.C.: Trellis and Turbo Coding. IEEE Press, Piscataway (2004)
Chapter 7
Bandwidth Efficient Coded Modulation

The block, convolutional, and turbo codes discussed in the previous chapters achieve performance improvement by expanding the bandwidth of the transmitted signal. However, when coding is to be applied to bandwidth-limited channels, the coding gain is to be achieved without signal bandwidth expansion. The coding gain
for bandwidth limited channels can be achieved by a scheme called trellis coded
modulation (TCM). The TCM is a combined coding and modulation technique that
increases the number of signals over the corresponding uncoded system to com-
pensate for the redundancy introduced by the code for digital transmission over
band-limited channels. The term “trellis” is due to the fact that the trellis diagram
for the TCM schemes is similar to the trellis diagrams of binary convolutional
codes. In TCM schemes, the trellis branches are labeled with redundant non-binary
modulation signals rather than with binary code symbols. The TCM schemes
employ multilevel amplitude and phase modulation, such as PAM, PSK, DPSK, or
QAM, in combination with a finite-state encoder which governs the selection of
modulation signals to generate coded signal sequences. In the receiver, the received
noisy signals are decoded using soft-decision Viterbi or BCJR decoder.
In the TCM, the “free distance” (minimum Euclidean distance) between the
coded modulation signals exceeds the minimum distance between the uncoded
modulation signals, at the same information rate, bandwidth, and signal power.
The basic principle of the TCM and further descriptions of it were published in [1–5]; the TCM saw a rapid transition from research to practical use in 1984, when the International Telegraph and Telephone Consultative Committee (CCITT) adopted a TCM scheme with a coding gain of 4 dB for use in the high-speed voice band modems for the 9.6/12.4 kbps standard [4, 6, 7].
The main idea in the TCM is to devise an effective method that performs the mapping of the coded bits into the signal symbols so as to maximize the free distance between
coded signal sequences. A method based on the principle of mapping by set parti-
tioning was developed by Ungerboeck in [1]. This chapter describes the classical


bandwidth efficient TCM, turbo TCM (TTCM), bit-interleaved coded modulation (BICM), bit-interleaved coded modulation with iterative decoding (BICM-ID), and a comparison of their BER performance.

7.1 Set Partitioning

Set partitioning divides a signal set into smaller sets with maximally increasing smallest intra-set distances. The small signal constellations finally obtained are referred to as the "subsets." Every constellation point is used only once, and if the subsets are used with equal probability, then the constellation points all appear with equal probability. The following two examples illustrate set partitioning. The signal constellation is partitioned into subsets such that the Euclidean minimum distance between signal symbols in a subset increases with each partition.
Example 7.1 Set partitioning of the 4-PSK signal. The Euclidean distance in a signal constellation is the distance between different points in the constellation diagram with respect to a reference point. The 4-PSK signal constellation shown in Fig. 7.1 is partitioned as shown in Fig. 7.2. In the 4-PSK signal set, the signal symbols are located on a circle of radius 1 and have a minimum distance separation of Δ0 = 2 sin(π/4) = √2. Finally, the last stage of the partition leads to 4 subsets, and each subset contains a single signal symbol.
Example 7.2 Set partitioning of the 8-PSK signal. The 8-PSK signal constellation shown in Fig. 7.3 is partitioned as shown in Fig. 7.4. In the 8-PSK signal set, the signal symbols are located on a circle of radius 1 and have a minimum distance separation of Δ0 = 2 sin(π/8) = 0.765. The eight symbols are subdivided into two subsets of four symbols each in the first partition, with the minimum distance between two symbols increasing to Δ1 = 2 sin(π/4) = √2. Finally, the last stage of the partition leads to 4 subsets, and each subset contains a single signal symbol.
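The increase of the intra-subset minimum distance at each level of the partition can be verified numerically; the sketch below uses implicit expansion and therefore assumes MATLAB R2016b or later.

% Sketch: intra-subset minimum distances of the 8-PSK partition chain
s = exp(1i*2*pi*(0:7)/8);          % unit-circle 8-PSK symbols, labels 0..7
for set = {0:7, 0:2:6, [0 4]}      % full set, first and second partition levels
    v = s(set{1}+1);
    D = abs(v-v.'); D(D==0) = inf; % pairwise distances; ignore self-distances
    disp(min(D(:)))                % prints 0.7654, 1.4142, 2.0000
end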

Fig. 7.1 Signal constellation diagram for the 4-PSK



Fig. 7.2 4-PSK set partitioning

Fig. 7.3 Signal constellation diagram for the 8-PSK

7.2 Design of the TCM Scheme

The general structure of a TCM scheme is shown in Fig. 7.5. In each operation, k information bits are transmitted. Of these, k̃ (k̃ ≤ k) bits are encoded by a rate k̃/(k̃ + 1) binary convolutional encoder. The encoder output bits are used to select one of the possible subsets in the partitioned signal set, while the remaining

Fig. 7.4 8-PSK set partitioning




Fig. 7.5 General structure of a TCM scheme

k − k̃ bits are used to select one of the 2^(k−k̃) signal symbols in each subset. When k̃ = k, all the k information bits are encoded.

In designing the encoder, Ungerboeck summarized the following rules to be applied in assigning the channel signals:

1. Transitions originating from, or merging into, the same state are assigned signals from the subsets having the maximum Euclidean distance between them.
2. Parallel state transitions are assigned the signal symbols separated by the largest Euclidean distance.
3. All the subsets are used with equal probability in the trellis diagram.
The following examples illustrate the design of different TCM encoders.
Example 7.3 2-state 4-PSK TCM encoder. A simple 2-state 4-PSK TCM encoder is shown in Fig. 7.6a. In this encoder, a rate 1/2 convolutional encoder is used in
which both the information bits are encoded. The output of the convolutional
encoder is used to select from among the second level partitions of 4-PSK, wherein
each partition contains only a single signal. Thus, it does not require an uncoded bit
to complete the signal selection process. The two-state trellis diagram of the 4-PSK
TCM encoder is shown in Fig. 7.6b, which has no parallel transitions.
The signal flow graph of the trellis diagram of Fig. 7.6b is shown in Fig. 7.7.
Now, the transfer function can be obtained by using the signal flow graph tech-
niques and Mason’s formula. In the graph, the branch labels superscripts indicate
the weight (SED) of the corresponding symbol of the transition branch in the trellis
diagram.

Fig. 7.6 a 2-State QPSK TCM encoder. b 2-state QPSK TCM encoder trellis diagram


Fig. 7.7 Signal flow graph of the trellis shown in Fig. 7.6b

By using reduction techniques, the above signal flow graph can be simplified: the branch from S0 to S1 has gain Y⁴, the self-loop at S1 has gain Y², and the branch from S1 to S0 has gain Y². Thus, the transfer function is given by

T(Y) = Y⁴ · Y²/(1 − Y²) = Y⁶/(1 − Y²)

Example 7.4 4-state 8-PSK TCM encoder. The Ungerboeck 4-state 8-PSK TCM encoder is shown in Fig. 7.8a. In this encoder, a rate 1/2 convolutional encoder partitions the 8-PSK constellation into the four subconstellations {(0, 4), (1, 5), (2, 6), (3, 7)}. The unique two-bit output from the convolutional encoder corresponds to a label assigned to each subconstellation. The output of the convolutional encoder selects one of the subconstellations, and the uncoded bit selects one of the two signals in the selected subconstellation.

The four-state trellis diagram of the TCM encoder is shown in Fig. 7.8b. In the
trellis diagram, the states correspond to the contents of the memory elements in the
convolutional encoder of the TCM encoder. The branch labels are the signals
selected from the partitioned subconstellations for transmission associated with the
given state transition. For example, if the convolutional encoder has to move from state S0 to S1, then only signal 2 or 6 from the subconstellation (2, 6) may be selected for transmission.
The signal flow graph of the trellis diagram of Fig. 7.8b is shown in Fig. 7.8c.
The transfer function can be obtained by using the signal flow graph techniques and
Mason’s formula.
The various distinct squared intersignal distances are as follows:

Δ(0, 1) = 2 sin(π/8) = 0.7654,   Δ²(0, 1) = 0.586
Δ(0, 2) = 2 sin(2π/8) = 1.4142,  Δ²(0, 2) = 2.000
Δ(0, 3) = 2 sin(3π/8) = 1.8478,  Δ²(0, 3) = 3.414
Δ(0, 4) = 2 sin(4π/8) = 2.0000,  Δ²(0, 4) = 4.000

By using the signal flow graph reduction techniques and Mason’s formula, we
obtain the following transfer function.

ðY 4:586 þ Y 7:414 Þ
TðYÞ ¼ 4
1 2Y 0:586  2Y 3:414  Y 4:586  Y 7:414

Example 7.5 8-state 8-PSK TCM encoder. The Ungerboeck 8-state 8-PSK TCM encoder is shown in Fig. 7.9a. In this encoder, a rate 2/3 convolutional encoder is used in which both information bits are encoded. The output of the

Fig. 7.8 a 4-State 8-PSK TCM encoder. b 4-state 8-PSK TCM encoder trellis. c Signal flow graph
of the trellis shown in (b)


Fig. 7.9 a 8-state 8-PSK TCM encoder. b 8-state 8-PSK TCM encoder trellis

convolutional encoder is used to select from among the third level partitions of
8-PSK, wherein each partition contains only a single signal. Thus, it does not
require an uncoded bit to complete the signal selection process.

The 8-state trellis diagram of the 8-PSK TCM encoder is shown in Fig. 7.9b,
which has no parallel transitions.

7.3 Decoding TCM

In general, a subconstellation of signals is assigned to each branch in the TCM trellis. The decoding of the TCM is performed using the soft-decision Viterbi algorithm in two steps (a sketch of the first step is given after this list).

1. Determine the best signal point within each subset by comparing the received signal to each of the signals allowed for a branch. The signal closest in distance to the received signal is considered the best signal point, and the corresponding branch metric is proportional to the distance between this best signal point and the received signal.
2. With the signal point selected from each subset and its squared distance used as the branch metric, find the signal path through the code trellis that has the minimum sum of squared distances from the received sequence.
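A minimal MATLAB sketch of the first step for the second-level 8-PSK subsets is given below; the received sample r is an illustrative assumption.

% Sketch of step 1: best point and branch metric in each 8-PSK subset
s = exp(1i*2*pi*(0:7)/8);            % 8-PSK symbols, labels 0..7
subsets = [0 4; 1 5; 2 6; 3 7];      % second-level partition of Fig. 7.4
r = 0.8+0.7i;                        % an illustrative received sample
for i = 1:4
    pts = s(subsets(i,:)+1);
    [metric,j] = min(abs(r-pts).^2); % squared Euclidean distance
    fprintf('subset (%d,%d): best point %d, metric %.3f\n', ...
        subsets(i,1),subsets(i,2),subsets(i,j),metric);
end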

Fig. 7.9 (continued)



7.4 TCM Performance Analysis

The performance of a TCM scheme can be evaluated by the following performance measures.

7.4.1 Asymptotic Coding Gain

The coded system performance improvement relative to the uncoded system is measured in terms of the asymptotic coding gain. The asymptotic coding gain is defined as follows:

Asymptotic coding gain = (Euncoded/Ecoded) · (d²f,coded/d²f,uncoded)      (7.1)

where Euncoded is the normalized average received energy of the uncoded system, Ecoded is the normalized average received energy of the coded system, d²f,uncoded is the squared minimum free distance of the uncoded system, and d²f,coded is the squared minimum free distance of the coded system.

7.4.2 Bit Error Rate

A general lower bound for the BER in an AWGN channel is given as follows:

BER ≥ W Q(√(d²f Es/(2N0)))      (7.2)

The distance structure is independent of the transmitted sequence for the uniform TCM, and W = 1.

A closed-form upper bound on the BER can be expressed by

BERUB = T(Y)|Y=exp(−Es/4N0)      (7.3)



Fig. 7.10 2-state 8-AM TCM encoder

or in a tighter form as

BERUB = Q(√(d²f Es/(2N0))) · exp(d²f Es/(4N0)) · T(Y)|Y=exp(−Es/4N0)      (7.4)

where T(Y) is the transfer function.

The following example illustrates the performance of a TCM scheme.

Example 7.6 Consider the 2-state encoder of Fig. 7.10 and the 8-AM constellation −7, −5, −3, −1, 1, 3, 5, 7 to construct a TCM scheme that provides 2 bits/s/Hz. Determine the asymptotic coding gain for the TCM relative to the uncoded 4-AM system.

Solution The 8-AM constellation points −7, −5, −3, −1, 1, 3, 5, 7 carry the labels 0, 1, 2, 3, 4, 5, 6, 7, respectively.

For a 2^m-ary AM constellation with the same minimum free distance as BPSK, the normalized average signal energy is given by

E = (4^m − 1)/3 = (4³ − 1)/3 = 63/3 = 21

or

E = (1/8)[1² + 3² + 5² + 7² + (−1)² + (−3)² + (−5)² + (−7)²] = 21

The 8-AM set partitioning (Fig. 7.11) is

A0: (−7, −5, −3, −1, 1, 3, 5, 7)
B0: (−7, −3, 1, 5)   B1: (−5, −1, 3, 7)
C0: (−7, 1)   C2: (−3, 5)   C1: (−5, 3)   C3: (−1, 7)

with the subset labels C0 = (0, 4), C2 = (2, 6), C1 = (1, 5), C3 = (3, 7).

Fig. 7.11 8-AM set partitioning

In the trellis diagram of the scheme, the transitions from state S0 are labeled with the subsets C0 (S0 → S0) and C2 (S0 → S1), and the transitions from state S1 with C1 (S1 → S0) and C3 (S1 → S1).

For the uncoded 4-AM system, df,uncoded = 2. For the coded system,

df,coded = √(Δ0² + Δ1²) = √(2² + 4²) = √20

since the N-AM signal sets result in

Euncoded = (4^m − 1)/3 = (4² − 1)/3 = 5

coding gain = (Euncoded/Ecoded) · (d²f,coded/d²f,uncoded) = (5/21) · (20/4) = 1.19 ≈ 0.76 dB

Example 7.7 Evaluate the coding gain and the BER performance of the 4-state 4-PSK TCM with the trellis diagram shown in Fig. 7.12.

Solution The signal flow graph of the trellis diagram of Fig. 7.12 is shown in Fig. 7.13. Now, by using the signal flow graph reduction techniques and Mason's formula, the transfer function can be obtained.

Fig. 7.12 Trellis diagram


Fig. 7.13 Signal flow graph



By using reduction techniques, the above signal flow graph can be simplified as given below. The parallel branches with gains Y² and Y⁴/(1 − Y²) can be combined as a single branch with gain

Y² + Y⁴/(1 − Y²) = (Y² − Y⁴ + Y⁴)/(1 − Y²) = Y²/(1 − Y²)

Further, the loop can be replaced by a branch with gain

[Y²/(1 − Y²)] / [1 − Y²/(1 − Y²)] = Y²/(1 − 2Y²)


Fig. 7.14 Computation of df

Thus, the transfer function is given by

T(Y) = Y⁴ · [Y²/(1 − 2Y²)] · Y⁴ = Y¹⁰/(1 − 2Y²)

Computation of df (Fig. 7.14)

Since there are no parallel transitions in this trellis, only non-parallel paths are to be examined to determine the minimum free distance of the code. At state S0, the symbol path 2 is chosen with an SED of 4, which leads us to state S1. From state S1, the symbol path 1 with SED 2 is taken, which takes us to state S2. From state S2, we return to state S0 via the symbol path 2 with an SED of 4. There is no other path that can take us back to state S0 with a smaller total SED. Hence,

d²f,coded = sum of the SEDs of the path shown in bold = 4 + 2 + 4 = 10.

Asymptotic coding gain of the 4-state 4-PSK TCM

The uncoded reference here is BPSK, whose constellation consists of the antipodal signals +1 and −1. Thus,

d²f,uncoded = 4

Hence, the asymptotic coding gain is given by

Asymptotic coding gain = (Euncoded/Ecoded) · (d²f,coded/d²f,uncoded) = 10/4 = 2.5

BER performance of the 4-state 4-PSK TCM

The transfer function bound for the BER of the 4-state 4-PSK TCM in an AWGN channel, from Eq. (7.3), is given by

BER = T(Y)|Y=exp(−Eb/4N0) = Y¹⁰/(1 − 2Y²)|Y=exp(−Eb/4N0) = exp(−10Eb/4N0)/(1 − 2 exp(−2Eb/4N0))

The distance structure is independent of the transmitted sequence for the uniform TCM, and W = 1. Since d²f = 10 for the 4-state 4-PSK TCM, the lower bound for the BER from Eq. (7.2) can be written as

BERLB = Q(√(5Eb/N0))

The following MATLAB program illustrates the BER performance of the 4-state 4-PSK TCM in comparison with uncoded BPSK (Fig. 7.15).


Fig. 7.15 BER performance comparison

Program 7.1 MATLAB program for BER performance of 4-state 4-PSK TCM

clear all;clc;
Eb_N0_dB=[3:1:10];
EbN0Lin = 10.^(Eb_N0_dB/10);
BER_BPSK_AWGN = 0.5* erfc ( sqrt( EbN0Lin ) ) ;
BER_QPSK_LB = 0.5* erfc ( sqrt(2.5* EbN0Lin ) ) ;
BER_QPSK =exp(-2.5*EbN0Lin)./(1-2*exp(-0.5*EbN0Lin )) ;
semilogy(Eb_N0_dB,BER_BPSK_AWGN,'-+')
hold on
semilogy(Eb_N0_dB,BER_QPSK ,'-')
semilogy(Eb_N0_dB,BER_QPSK_LB,'--')
legend('Uncoded BPSK ','4-state QPSK TCM ','Lower bound');
xlabel('Eb/No (dB)');
ylabel('BER');


Fig. 7.16 Trellis diagram

Example 7.8 Evaluate the coding gain of the 4-state 8-PSK TCM scheme of Example 7.4.

Solution
Computation of df
In the trellis diagram shown in Fig. 7.16, the symbols originating from a state are replaced with their SEDs. Since there are parallel transitions in this trellis, both the parallel and the non-parallel transitions are to be examined to determine the minimum free distance of the code. The minimum free distance for the parallel transitions is the minimum free distance for the signals of the partition in the parallel transitions. For this encoder, it is the minimum free distance among {(0, 4), (1, 5), (2, 6), (3, 7)}. Hence,

df,parallel = 2

To compute df,nonparallel, the minimum distance path is found by following from each state the path with the smallest squared distance but not 0. At state S0, the symbol path 2 is chosen as it has an SED of 2, which leads us to state S1.

From state S1, the symbol path 1 with an SED of 0.586 is taken, which takes us to state S2, and from state S2 we return to state S0 via the symbol path 2 with an SED of 2. There is no other path that can take us back to state S0 with a smaller total SED. Thus, the total minimum squared Euclidean distance (MSED) is 2 + 0.586 + 2 = 4.586, and hence df,nonparallel = √4.586 = 2.14. The minimum free distance for the TCM encoder is the minimum of df,parallel and df,nonparallel. Thus,

df,coded = min(df,parallel, df,nonparallel) = min(2, 2.14) = 2.

The minimum free distance for the uncoded 4-PSK is √2, and so d²f = 2 for the uncoded 4-PSK. Therefore, the asymptotic coding gain for the 4-state 8-PSK TCM is given by

coding gain = 10 log10(d²f,coded/d²f,uncoded) = 10 log10(4/2) = 3.01 dB

Example 7.9 Evaluate the coding gain of the 8-state 8-PSK TCM scheme of Example 7.5.

Solution
Computation of df
In the trellis diagram shown in Fig. 7.17, the symbols originating from a state are replaced with their SEDs. As for the Ungerboeck encoder of Example 7.8, we have to compute df of this code in order to determine the asymptotic coding gain. The minimum distance path is found by following from each state the path with the smallest squared distance but not 0. At state S0, the symbol path 6 is chosen as it has an SED of 2, which leads us to state S3. From state S3, the symbol path 7 with an SED of 0.586 is taken, which takes us to state S6. From state S6, we return to state S0 via the symbol path 6 with an SED of 2.

There is no other path that can take us back to state S0 with a smaller total SED. Thus, the total minimum squared Euclidean distance (MSED) is 2 + 0.586 + 2 = 4.586, and hence d²f = 4.586 for the coded system. The minimum free distance for the uncoded 4-PSK is √2, and so d²f = 2 for the uncoded 4-PSK. Therefore, the asymptotic coding gain for the 8-state 8-PSK TCM is given by

coding gain = 10 log10(d²f,coded/d²f,uncoded) = 10 log10(4.586/2) = 3.6 dB

Fig. 7.17 Trellis diagram for the 8-state Ungerboeck encoder with the symbols replaced by their SEDs

7.4.3 Simulation of the BER Performance of an 8-State 8-PSK TCM in the AWGN and Rayleigh Fading Channels Using MATLAB

The following MATLAB Program 7.2 and the MATLAB functions given in Appendix A are used to simulate the BER performance of the Ungerboeck 8-state 8-PSK TCM in both the AWGN and Rayleigh fading channels (Fig. 7.18).

Fig. 7.18 BER performance of the 8-state 8-PSK TCM and uncoded QPSK in the AWGN and Rayleigh fading channels (BER versus Eb/No in dB)



Program 7.2

%MATLAB program to simulate BER performance of 8-state 8-PSK TCM in
%AWGN and Rayleigh fading channels
clear all; clc; close all;
global n k L nis M N S smap nl bps; global Cw Nes Prs
n=2;k=3;L=3;nis=512;
[Cw, Nes, Prs]=genpoly(n,k,L);%Generation of trellis
%Cw=codeword, Prs=previous state, Nes=next state
M=bitshift(1,n); N=nis; S=bitshift(1,L); [smap,bps,nl] = PSKmodSP(k);
Ec=1; EbN0dB=5; i=1; EbN0dB_stop=10;
nib=nis*n;% number of information bits
ncb=nis*k;% number of coded bits
while (EbN0dB <= EbN0dB_stop)
errorsa=0; bitsa=0; errorsr=0;
bitsr=0; frame=0; EbN0=10^(EbN0dB/10);
while (errorsa < 1000 && frame<=100)
Eb=Ec/((nib/ncb)*bps); %Eb=energy per bit
N0=Eb*(EbN0^-1);%N0=noise variance
inb=round(rand(1,nib));%inb=input bits
symbols=bits2symbol(n,nis,inb);
[Os,Ts]=tcmenc(symbols,Cw,Nes,smap);%Os=output symbols, Ts=transmitted signal
Rsa=Ts+sqrt(N0/2)*(randn(size(Ts))+1i*randn(size(Ts)));%received signal, AWGN channel
Rsr=Ts+sqrt(1/2)*0.3635*(randn(size(Ts))+1i*randn(size(Ts)))+ ...
    sqrt(N0/2)*(randn(size(Ts))+1i*randn(size(Ts)));%received signal, Rayleigh fading channel
Pra=demodsymbols(Rsa,N0); Prr=demodsymbols(Rsr,N0);
decbitsa=bitsdecode(Pra);%decoded bits for AWGN
decbitsr=bitsdecode(Prr);%decoded bits for Rayleigh
errora=sum(decbitsa ~= inb); errorsa=errorsa+errora;
errorr=sum(decbitsr ~= inb); errorsr=errorsr+errorr;
bitsa=bitsa+sum(decbitsa ~= inb)+sum(decbitsa == inb);
bitsr=bitsr+sum(decbitsr ~= inb)+sum(decbitsr == inb); frame=frame+1;
end
EbN0dB=EbN0dB+1;
bertcmawgn(i)=errorsa/bitsa; bertcmray(i)=errorsr/bitsr; i=i+1; end
figure,
EbN0dB=[5 6 7 8 9 10];
berqpskawgn=berawgn(EbN0dB,'psk',4,'nondiff'); % uncoded QPSK in AWGN
berqpskfad=berfading(EbN0dB,'psk',4,1);        % uncoded QPSK in Rayleigh fading
semilogy(EbN0dB,berqpskfad,'-*')
hold on
semilogy(EbN0dB,berqpskawgn,'-+')
semilogy(EbN0dB,bertcmray,'-v')
semilogy(EbN0dB,bertcmawgn,'-d')
legend('Uncoded QPSK Rayleigh','Uncoded QPSK AWGN','8-state 8-PSK Rayleigh','8-state 8-PSK AWGN');
xlabel('Eb/No in dB'); ylabel('BER');

The BER performance obtained by using the programs for a frame length of 512 bits for both the AWGN and the Rayleigh fading channels is shown in Fig. 7.18. The performance of the TCM in the AWGN channel is much better than its performance in the Rayleigh fading channel. The uncoded QPSK BER performance is also shown in Fig. 7.18 for both AWGN and Rayleigh fading channels; it serves as a reference for assessing the coding gain of the coded modulation scheme.

7.5 Turbo Trellis Coded Modulation (TTCM)

Robertson introduced the concept of "turbo trellis coded modulation (TTCM)" in [8] by using two recursive TCM encoders in parallel concatenation. The system overview for TTCM is shown in Fig. 7.19.

7.5.1 TTCM Encoder

The TTCM encoder contains the parallel concatenation of two TCM encoders as shown in Fig. 7.20. Let the size of the interleaver be N. The number of modulated symbols per block is N × n, with n = D/2, where D is the signal set dimensionality. The number of information bits transmitted per block is N × m. The encoder is clocked in steps of n × T, where T is the symbol duration of each transmitted 2^((m+1)/n)-ary symbol. In each step, m information bits are input and n symbols are transmitted, yielding a spectral efficiency of m/n bits per symbol. The first TCM encoder operates on the original bit sequence, while the second encoder works on the interleaved version of the input bit sequence.

Fig. 7.19 System overview for TTCM (TTCM encoder → interleaver → signal mapper → channel → signal demapper → de-interleaver → TTCM decoder)



Fig. 7.20 TTCM encoder structure (two TCM encoders in parallel concatenation, with an interleaver before the second encoder and a de-interleaver at its output)

A simple example will now serve to clarify the operation of the TTCM encoder for the case of the following 8-state 8-PSK TCM with code rate 2/3 used in the TTCM encoder structure depicted in Fig. 7.21. A sequence of six information bit pairs (00, 01, 11, 10, 00, 11) is encoded by the first encoder to yield the 8-PSK sequence (0, 2, 7, 5, 1, 6). The information bits are interleaved on a pairwise basis using a random interleaver (3, 6, 5, 2, 1, 4) and encoded again into the sequence (6, 7, 0, 3, 0, 4) by the second encoder. We de-interleave the second encoder's output symbols to ensure that the ordering of the two information bits partly defining each symbol corresponds to that of the first encoder, i.e., we now have the sequence (0, 3, 6, 4, 0, 7). Finally, we transmit the first symbol of the first encoder,
Fig. 7.21 TCM encoder used in the TTCM encoder structure (8-PSK constellation mapping of symbol numbers 0–7)



the second symbol of the second encoder, the third symbol of the first encoder, the fourth symbol of the second encoder, and so on. Thus, the transmitted signal consists of the symbols (0, 3, 7, 4, 1, 7).
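The de-interleaving and alternate symbol selection described above can be verified with a few lines of MATLAB; the following fragment is an illustrative sketch with our own variable names (not from the book's programs):

s1 = [0 2 7 5 1 6];         % output symbols of the first encoder
s2 = [6 7 0 3 0 4];         % output symbols of the second (interleaved) encoder
perm = [3 6 5 2 1 4];       % pairwise random interleaver
d2(perm) = s2;              % de-interleave: d2 = [0 3 6 4 0 7]
tx = s1;
tx(2:2:end) = d2(2:2:end);  % odd positions from encoder 1, even from encoder 2
disp(tx)                    % prints 0 3 7 4 1 7, the transmitted sequence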

7.5.2 TTCM Decoder

A block diagram of the turbo decoder is shown in Fig. 7.22. The TTCM decoder is very similar to that of binary turbo codes, except for the nature of the information passed from one decoder to the other and the treatment of the very first decoding step. In the symbol-based non-binary TTCM scheme, the systematic bit and the parity bits are transmitted together in the form of a complex-valued symbol, and the systematic component cannot be separated from the extrinsic component, since the noise and the fading that affect the parity components also affect the corresponding systematic components. Hence, in TTCM, the symbol-based information can be split into two components:
1. The a priori component of the non-binary symbol provided by the other decoder.
2. The inseparable extrinsic information together with the systematic component of the non-binary symbol.
In the first step of TTCM decoding, the received symbols are separated into two different sets such that the upper decoder receives only the symbols encoded by the upper encoder, and likewise for the second decoder. Next, based on the log-based BCJR algorithm, each decoder produces its symbol-based probabilities and generates a priori and extrinsic information. To make sure that neither decoder receives the same information more than once, each decoder subtracts the incoming a priori information from its a posteriori output. Through the random interleavers, the extrinsic information is then interleaved/de-interleaved to become the a priori information of the other decoder, and the decoders iterate between them. Finally, the a posteriori information from decoder 2 is de-interleaved, and a hard decision selects the maximum a posteriori probability associated with the information word in the final decoding step. In the first iteration, the a priori input of the first decoder is initialized with the missing systematic information. Details of the iterative decoder computations are given in [13].

7.5.3 Simulation of the BER Performance of the 8-State 8-PSK TTCM in AWGN and Rayleigh Fading Channels

The schematic for the TTCM is illustrated in Fig. 7.20. The 8-state 8-PSK TCM encoder shown in Fig. 7.21 is used in this scheme for both the AWGN and Rayleigh fading channels. The source produces random information bits,

Fig. 7.22 TTCM decoder structure: two symbol-by-symbol MAP decoders exchange a priori/extrinsic information through the interleaver and de-interleaver, clocked once per step of nT, followed by de-interleaving and a hard decision outputting m bits per step [from Robertson and Worz (1998); © 1998 IEEE]

which are then encoded by the respective encoders and interleaved by the random interleavers. The interleaved bits/symbols are then modulated according to the symbol mapping rule of the corresponding modulation scheme. The channels considered here for the coded modulation schemes are the AWGN and Rayleigh-distributed flat fading channels.

The relationship between the AWGN and Rayleigh fading channels can be expressed as follows:

y_t = a_t x_t + n_t    (7.5)

where x_t is the transmitted discrete signal and y_t is the received signal; a_t is the Rayleigh-distributed fading amplitude with expected squared value E[a_t^2], and n_t is the complex AWGN having a noise variance of N_0/2 per dimension.
For an AWGN channel, a_t = 1. The receiver side consists of a demodulator or de-mapper followed by a de-interleaver and a TCM or TTCM decoder, as explained in the previous section. A comparison of the BER performance of the 8-state 8-PSK TTCM in the AWGN and Rayleigh fading channels is shown in Fig. 7.23.
A comparison of the BER performance of the 8-state 8-PSK TCM and the 8-state 8-PSK turbo TCM in the AWGN channel is shown in Fig. 7.24. An additional coding gain of about 1.7 dB has been achieved by the use of turbo TCM compared to the conventional TCM, at error rates in the vicinity of 10^-4. This means that turbo TCM achieves a performance close to the Shannon information capacity on an AWGN channel.

Fig. 7.23 Comparison of the BER performance of the 8-state 8-PSK TTCM in AWGN and Rayleigh fading channels (BER versus Eb/No in dB)

Fig. 7.24 Comparison of the BER performance of 8-state 8-PSK TCM and 8-state 8-PSK turbo
TCM in AWGN channel

7.6 Bit-interleaved Coded Modulation

Bit-interleaved coded modulation (BICM) was the idea proposed by Zehavi [9] in
order to improve the diversity order of TCM scheme. Zehavi’s idea was to render
the code’s diversity equal to that smallest number of different bits by employing the
bit-based interleaving as shown in Fig. 7.25. The bit-based interleaving purpose is:

Fig. 7.25 BICM principle (a rate-2/3 convolutional encoder whose three coded output bits pass through three parallel bit interleavers to an 8-PSK modulator feeding the channel)



• To maximize the diversity order of the system and to disperse the bursty errors introduced by the correlated fading channel.
• To render the bits within a transmitted symbol uncorrelated or independent of each other.

7.6.1 BICM Encoder

The BICM encoder shown in Fig. 7.26 uses Paaske's non-systematic eight-state code [10] of rate 2/3, having a free bit-based Hamming distance of four, for optimum performance over Rayleigh fading channels. Initially, the contents of all three shift registers are set to zero. After the bits are encoded, each encoded bit stream is interleaved by one of three individual parallel random interleavers, each of length equal to the incoming coded bit stream, resulting in a binary vector. Groups of three bits are then mapped to the 8-PSK signal set according to Gray mapping.
The content of the three memory elements represents the state of the encoder at an instant. Denoting the state by S = (s2 s1 s0) as shown in Fig. 7.27, there are eight possible states, S0 to S7.
Figure 7.28 shows the trellis diagram with all possible transitions for the encoder shown in Fig. 7.27.
For the two information bits b1 and b2, the next state is given by

s0 = b1, s1 = s2, s2 = b2    (7.6)

Fig. 7.26 BICM encoder with Paaske's non-systematic convolutional encoder (bits 0–2 pass through three parallel interleavers to the 8-PSK modulator)



Fig. 7.27 Paaske’s non-systematic convolutional encoder

The encoded code word is given by

C0 = b1 ⊕ s1, C1 = b2 ⊕ s0, C2 = b1 ⊕ s0 ⊕ s1 ⊕ s2 ⊕ b2    (7.7)

For a given set of information bits b1 and b2, all possible combinations of the code words, present states, and next states are tabulated in Table 7.1.

7.6.2 BICM Decoder

The BICM decoder is shown in Fig. 7.29. The received faded noisy signal is demodulated into six bit metrics per received symbol, associated with the three bit positions, each taking the binary values 0 and 1. These bit metrics are then de-interleaved by the three independent bit de-interleavers to form the estimated code words. Then, the BCJR decoder is invoked for decoding these code words to generate the best possible estimate of the original information bits.

7.7 Bit-interleaved Coded Modulation Using Iterative Decoding

Li and Ritcey [11, 12] have proposed a new scheme of bit-interleaved coded modulation using iterative decoding (BICM-ID) for further improvement of Zehavi's BICM scheme. BICM-ID employs a set partitioning signal labeling system as in Ungerboeck's TCM and introduces soft-decision feedback from the decoder's output to the de-mapper/demodulator input so as to iterate between them. This is advantageous, since it improves the reliability of the soft information passed to the de-mapper/demodulator at each iteration.

Fig. 7.28 Trellis diagram of the encoder shown in Fig. 7.27 (states 0–7; branch line styles distinguish the input bit pairs 00, 01, 10, and 11)

7.7.1 BICM-ID Encoder and Decoder

The BICM-ID encoder is similar to the BICM encoder explained in Fig. 7.26. The BICM-ID decoder is almost similar to the BICM decoder, except that an iterative process is used to achieve a global optimum through a step-by-step local search.

Table 7.1 Code word table for Paaske's 8-state convolutional encoder shown in Fig. 7.27

Present state   Information bits
(s2 s1 s0)      00: Next/Code   01: Next/Code   10: Next/Code   11: Next/Code
000             000 / 000       001 / 101       100 / 110       101 / 011
001             000 / 110       001 / 011       100 / 000       101 / 101
010             000 / 101       001 / 000       100 / 011       101 / 110
011             000 / 011       001 / 110       100 / 101       101 / 000
100             010 / 100       011 / 001       110 / 010       111 / 111
101             010 / 010       011 / 111       110 / 100       111 / 001
110             010 / 001       011 / 100       110 / 111       111 / 010
111             010 / 111       011 / 010       110 / 001       111 / 100
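As a check, the entries of Table 7.1 can be regenerated directly from Eqs. (7.6) and (7.7); the following illustrative MATLAB fragment (ours, not from Appendix A) prints one table entry per present state and input pair, with the code word ordered as (C2 C1 C0):

% Regenerate Table 7.1 from Eqs. (7.6) and (7.7); state is (s2 s1 s0)
for state = 0:7
    s2 = bitget(state,3); s1 = bitget(state,2); s0 = bitget(state,1);
    for info = 0:3
        b2 = bitget(info,2); b1 = bitget(info,1);
        C0 = mod(b1 + s1, 2);                    % C0 = b1 + s1
        C1 = mod(b2 + s0, 2);                    % C1 = b2 + s0
        C2 = mod(b1 + s0 + s1 + s2 + b2, 2);     % C2 = b1 + s0 + s1 + s2 + b2
        fprintf('%d%d%d  %d%d -> next %d%d%d  code %d%d%d\n', ...
                s2,s1,s0, b2,b1, b2,s2,b1, C2,C1,C0);  % next state (b2, s2, b1)
    end
end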

Fig. 7.29 BICM decoder (the received faded noisy signal is processed by the 8-PSK demapper, passed through three de-interleavers, and decoded by the BCJR decoder to yield the estimate of the information bit sequence)

Figure 7.30 shows the BICM-ID decoder. In the initial step, the received signal r is demodulated to generate the extrinsic information of the coded bits P(c; O), which is de-interleaved to become the a priori information P(c; I) of the log-based BCJR decoder, which in turn generates a posteriori bit probabilities for the information and the coded word.

Fig. 7.30 BICM-ID decoder (demodulator and BCJR decoder exchanging soft information through the interleaver and de-interleaver)



In the second pass, the extrinsic a posteriori vectors are interleaved to serve as a priori information to the demodulator, assuming that all the bits are independent of each other (by the design of a good interleaver), and the above steps iterate until the final iteration is reached. The total a posteriori probabilities of the information bits can be computed to make the hard decisions at the output of the decoder after each iteration.
The SISO channel decoder uses the MAP algorithm similar to that used in the decoding of turbo codes; here, the demodulator and the channel decoder exchange the extrinsic information of the coded bits, P(c; O), through an iterative process. After being interleaved or de-interleaved, P(c; O) becomes the a priori information P(c; I) at the input of the BCJR decoder and the demodulator, respectively.

7.7.2 Simulation of the BER Performance of 8-State 8-PSK BICM and BICM-ID in AWGN and Rayleigh Fading Channels

Simulations are carried out for BICM and BICM-ID with the 8-state 8-PSK encoders. The interleavers used here are three parallel independent random interleavers. The BER performance of BICM in an AWGN channel for three parallel 512-bit interleavers and for three parallel 3,000-bit interleavers is shown in Fig. 7.31.

Fig. 7.31 BER performance of BICM in an AWGN channel with three parallel 512-bit interleavers and with three parallel 3,000-bit interleavers (BER versus Eb/No in dB)



Fig. 7.32 BER performances of BICM and of BICM-ID with 2 and with 4 iterations in an AWGN channel, using three parallel 512-bit interleavers (BER versus Eb/No in dB)

Fig. 7.33 BER performance of BICM and of BICM-ID with 4 iterations in a Rayleigh fading channel, using three parallel 1,000-bit interleavers (BER versus Eb/No in dB)

From Fig. 7.31, it is observed that the BER performance of BICM does not significantly depend on the frame length. The BER performance of BICM and BICM-ID (with 2 iterations and with 4 iterations) in an AWGN channel for three parallel 512-bit interleavers is shown in Fig. 7.32. It is seen from Fig. 7.32 that the BER performance improves with an increased number of iterations. The BER performance of BICM and BICM-ID (with 4 iterations) in a Rayleigh fading channel for three parallel 1,000-bit interleavers is shown in Fig. 7.33.

7.8 Problems

1. Obtain the set partition for 16 QAM.


2. Draw the Trellis diagram for the following 16-state 8-PSK TCM encoder and
find the asymptotic coding gain.

[Encoder figure: convolutional encoder with 8-PSK constellation mapping of symbol numbers 0–7]

3. Compute the output symbols of the 8-state 8-PSK TTCM encoder shown in the figure for the input information bit pairs {00, 01, 11, 10, 00, 11}. Let the interleaver be {3, 6, 5, 2, 1, 4}.
4. Construct the schematic diagram for the 8-state 16-QAM TTCM encoder.

Appendix A

function [Codeword,Nextstate,Previousstate] = genpoly(k,n,L)
%Initialize the shift register for the TCM
L=L+1; cpoly=getgenpoly(k,L-1);
H=zeros(k+1,L); K=zeros(1,k+1); D=zeros(1,L);
Codeword=zeros(2^(L-1),2^k); Nextstate=zeros(2^(L-1),2^k);
Previousstate=zeros(2^(L-1),2^k); h=0;
for i=1:L
    h=mod(h,3);
    if (h==0)
        for m=1:k+1
            K(1,m)=mod(cpoly(1,m),10);
            cpoly(1,m)=floor(cpoly(1,m)/10);
        end
    end
    h=h+1;
    for m=1:k+1
        H(m,i)=mod(K(1,m),2); K(1,m)=floor(K(1,m)/2);
    end
end
if (H(1,1)~=1)
    error('TCM: the feedback poly is unacceptable');
end
for s=1:2^(L-1)
    h=s-1;
    for i=1:L-1
        D(1,i)=mod(h,2); h=floor(h/2);
    end
    for m=1:2^k
        h=m-1;
        for i=2:k+1
            K(1,i)=mod(h,2); h=floor(h/2);
        end
        h=D(1,1);
        for i=2:k
            h=mod((h+K(1,i)*H(i,1)),2);
        end
        Codeword(s,m)=2*(m-1)+h;
        K(1,1)=h; c=1; Nextstate(s,m)=0; %compute new state
        for j=1:L-1
            if (j<(L-1))
                h=D(1,j+1);    %bit from previous register
            else
                h=0;
            end
            for i=1:k+1        %input and feedback bits
                h=mod(h+K(1,i)*H(i,j+1),2);
            end
            Nextstate(s,m)=Nextstate(s,m)+(h*c); %add to state
            c=c*2;
        end
    end
end
for i=1:2^(L-1)                %compute previous state
    for j=1:2^k
        Previousstate((Nextstate(i,j)+1),j)=(i-1);
    end
end

function [GenPoly] = getgenpoly(k,L)


switch(k)
case 1
switch (L)
case 1
GenPoly(1,1) = 1; GenPoly(1,2) = 2;
case 3
GenPoly(1,1) = 13; GenPoly(1,2) = 6;
case 4
GenPoly(1,1) = 23; GenPoly(1,2) = 6;
case 6
GenPoly(1,1) = 117; GenPoly(1,2) = 26;
case 7
GenPoly(1,1) = 217; GenPoly(1,2) = 110;
case 8
GenPoly(1,1) = 427; GenPoly(1,2) = 230;
case 9
GenPoly(1,1) = 1017; GenPoly(1,2) = 120;
otherwise
error('no generator for such code yet for 4QAM');
end
case 2
switch(L)
case 3
GenPoly(1,1) = 11; GenPoly(1,2) = 2; GenPoly(1,3) = 4;
case 4
GenPoly(1,1) = 23; GenPoly(1,2) = 2; GenPoly(1,3) = 10;
case 6
GenPoly(1,1) = 103; GenPoly(1,2) = 30; GenPoly(1,3) = 66;
case 7
GenPoly(1,1) = 277; GenPoly(1,2) = 54; GenPoly(1,3) = 122;
case 8
GenPoly(1,1) = 435; GenPoly(1,2) = 72; GenPoly(1,3) = 130;
otherwise
error('no generator for such code yet for 8 PSK');
end
end
end

function [smap,bps,nl] =PSKmodSP(varargin)


n=varargin{1};
nl=bitshift(1,n);
M=nl;
smap=zeros(1,nl);
bps=n;
for j=1:M
smap(1,j)=complex((cos(2*pi*(j-1)/M)),(sin(2*pi*(j-1)/M)));
end

function [ symbols ] = bits2symbol(word_length,block_length,bits_seq)

N=block_length*word_length;
symbols=zeros(1,block_length);
if (N ~= length(bits_seq))
    error('bits2symbol: check the length of bits_seq');
end
k=1;
for j=1:block_length
    for i=1:word_length
        symbols(1,j)= symbols(1,j)+ (bits_seq(1,k)*bitshift(1,(i-1)));
        k=k+1;
    end
end
end

function [ Output_symbols,Tx_signals ] = tcmenc(symbols,Codeword,


Nextstate,Smap)
Output_symbols=zeros(1,length(symbols));
s=1;
for i=1:length(Output_symbols)
m=symbols(1,i)+1;
Output_symbols(1,i)=Codeword(s,m);
s=Nextstate(s,m)+1;
end
Tx_signals=zeros(1,length(Output_symbols));
for i=1:length(Tx_signals)
Tx_signals(1,i)=Smap(Output_symbols(1,i)+1);
end
end

function [ Pr ] = demodsymbols(varargin)
global M N smap nl nis;
received_signals = varargin{1};
sigma = varargin{2};
%Channel metric matrix
Pr=zeros(N,2*M);
for k=1:nis
    for i=1:nl
        dist=hypot((real(received_signals(1,k))-real(smap(1,i))), ...
                   (imag(received_signals(1,k))-imag(smap(1,i))));
        Pr(k,i)=-(dist*dist)/(sigma);
    end
end
end

function [ b_decoded_bits ] = bitsdecode(Pr)
global M N S Cw Nes Prs MINF Interleave_mode_in;
MINF=-100000; %minimum log probability (-infinity)
Apr=zeros(N,M); Apo=zeros(N,M); OPr=zeros(N,2*M); Ip1=zeros(S,M,N);
if (strcmp(Interleave_mode_in,'ON'))
    y=de_interleave(Pr); Pr=y;
end
for j=1:N
    for m=1:M
        Apr(j,m)=-log(M);
        for i=1:S
            Ip1(i,m,j)=Pr(j,Cw(i,m)+1);
        end
    end
end
Alpha=zeros(N+1,S+1); Beta=zeros(N+1,S+1);
for i=2:S
    Alpha(1,i)=MINF;       %compute Alpha
end
for k=2:(N+1)
    max=MINF;
    for i=1:S
        Alpha(k,i)=jacobianlog(Alpha(k-1,Prs(i,1)+1)+Ip1(Prs(i,1)+1,1,(k-1))+Apr(k-1,1), ...
            Alpha(k-1,Prs(i,2)+1)+Ip1(Prs(i,2)+1,2,(k-1))+Apr(k-1,2));
        for m=3:M
            Alpha(k,i)=jacobianlog(Alpha(k,i),Alpha((k-1),Prs(i,m)+1)+Ip1(Prs(i,m)+1,m,(k-1))+Apr(k-1,m));
        end
        if (max < Alpha(k,i))
            max=Alpha(k,i);
        end
    end
    for i=1:S
        Alpha(k,i)=Alpha(k,i)-max;   %normalize
    end
end
for i=1:S
    Beta(N+1,i)=0;         %compute Beta
end
for k=N:-1:1
    max=MINF;
    for i=1:S
        Beta(k,i)=jacobianlog(Beta(k+1,Nes(i,1)+1)+Ip1(i,1,k)+Apr(k,1), ...
            Beta(k+1,Nes(i,2)+1)+Ip1(i,2,k)+Apr(k,2));
        for m=3:M
            Beta(k,i)=jacobianlog(Beta(k,i),Beta(k+1,Nes(i,m)+1)+Ip1(i,m,k)+Apr(k,m));
        end
        if (max < Beta(k,i))
            max=Beta(k,i);
        end
    end
    for i=1:S
        Beta(k,i)=Beta(k,i)-max;     %normalize
    end
end
%compute a posteriori probabilities
for k=1:N
    max=MINF;
    for m=1:(2*M)
        OPr(k,m)=MINF;
    end
    for m=1:M
        Apo(k,m)=MINF;
        for i=1:S
            abc=Alpha(k,Prs(i,m)+1)+Beta(k+1,i)+Ip1(Prs(i,m)+1,m,k);
            Apo(k,m)=jacobianlog(Apo(k,m),abc);
            OPr(k,Cw(Prs(i,m)+1,m)+1)=jacobianlog(OPr(k,Cw(Prs(i,m)+1,m)+1),abc+Apr(k,m));
        end
        Apo(k,m)=Apo(k,m)+Apr(k,m);  %include the a priori term
        if (max < Apo(k,m))
            max=Apo(k,m);
        end
    end
    for m=1:M
        Apo(k,m)=Apo(k,m)-max;
    end
end
decoded_symbols=decode_symbols(Apo);
b_decoded_bits=symbol2bits(decoded_symbols);
end

function [ r ] = jacobianlog( x,y )


%/*------ jacobian logarithm ------------------ */
if (x > y)
r=x + log (1 + exp(y-x));
else
r=y + log (1 + exp(x-y));
end
end

function [ output_symbols ] = decode_symbols(Apo)


global N M
output_symbols=zeros(1,N);
for k=1:N
i=0;
max=Apo(k,1);
for m=2:M
if (Apo(k,m) > max)
max=Apo(k,m);
i=m-1;
end
end
output_symbols(1,k)=i;
end
end

function [ bits ] = symbol2bits( symbols)


global n;
N=n*length(symbols);
bits=zeros(1,N);
if(N ~= length(bits))
msgbox(N,length(bits))
end
h=1;
for j=1:length(symbols)
for i=1:n
bits(h)=bitand(bitshift(symbols(1,j),-(i-1)),1);
h=h+1;
end
end
end
250 7 Bandwidth Efficient Coded Modulation

References

1. Ungerboeck, G.: Channel coding with multilevel/phase signals. IEEE Trans. Inf. Theory IT-28, 55–67 (1982)
2. Forney Jr., G.D., Gallager, R.G., Lang, G.R., Longstaff, F.M., Qureshi, S.U.: Efficient modulation for band-limited channels. IEEE J. Sel. Areas Commun. SAC-2, 632–647 (1984)
3. Wei, L.F.: Rotationally invariant convolutional channel coding with expanded signal space, Part I: 180 degrees. IEEE J. Sel. Areas Commun. SAC-2, 659–672 (1984)
4. Wei, L.F.: Rotationally invariant convolutional channel coding with expanded signal space, Part II: nonlinear codes. IEEE J. Sel. Areas Commun. SAC-2, 672–686 (1984)
5. Calderbank, A.R., Mazo, J.E.: A new description of trellis codes. IEEE Trans. Inf. Theory IT-30, 784–791 (1984)
6. CCITT Study Group XVII: Recommendation V.32 for a family of 2-wire, duplex modems operating on the general switched telephone network and on leased telephone-type circuits. Document AP VIII-43-E, May 1984
7. CCITT Study Group XVII: Draft recommendation V.33 for 14400 bits per second modem standardized for use on point-to-point 4-wire leased telephone-type circuits. Circular No. 12, COM XVII/YS, Geneva, 17 May 1985
8. Robertson, P.: Bandwidth-efficient turbo trellis-coded modulation using punctured component codes. IEEE J. Sel. Areas Commun. 16, 206–218 (1998)
9. Zehavi, E.: 8-PSK trellis codes for a Rayleigh fading channel. IEEE Trans. Commun. 40, 873–883 (1992)
10. Lin, S., Costello Jr., D.: Error Control Coding: Fundamentals and Applications. Prentice Hall, Englewood Cliffs (1982). ISBN 013283796X
11. Li, X., Ritcey, J.A.: Bit-interleaved coded modulation with iterative decoding. IEEE Commun. Lett. 1, 169–171 (1997)
12. Li, X., Ritcey, J.A.: Trellis-coded modulation with bit interleaving and iterative decoding. IEEE J. Sel. Areas Commun. 17, 715–724 (1999)
13. Robertson, P., Wörz, T.: Bandwidth-efficient turbo trellis-coded modulation using punctured component codes. IEEE J. Sel. Areas Commun. 16, 206–218 (1998)
Chapter 8
Low Density Parity Check Codes

Low density parity check (LDPC) codes are forward error-correction codes invented by Robert Gallager in his MIT Ph.D. dissertation in 1960. LDPC codes were ignored for a long time due to their high computational complexity and the domination of highly structured algebraic block and convolutional codes in forward error correction. A number of researchers have since produced new irregular LDPC codes, generalizations of Gallager's LDPC codes, that outperform the best turbo codes with certain practical advantages. LDPC codes have already been adopted in satellite-based digital video broadcasting and long-haul optical communication standards. This chapter discusses LDPC code properties, construction of the parity check matrix for regular and irregular LDPC codes, efficient encoding and decoding of LDPC codes, and performance analysis of LDPC codes.

8.1 LDPC Code Properties

An LDPC code is a linear error correction code that has a parity check matrix H which is sparse, i.e., with few nonzero elements in each row and column. LDPC codes can be categorized into regular and irregular LDPC codes. When the (n − k) × n parity check matrix H has the same number w_c of ones in each column and the same number w_r of ones in each row, the code is a regular (w_c, w_r) code. The original Gallager codes are regular binary LDPC codes. The size of H is usually very large, but the density of nonzero elements is very low. An LDPC code of length n can be denoted as an (n, w_c, w_r) LDPC code. Thus, each code bit is involved in w_c parity checks, and each parity check involves w_r code bits. For a regular code, we have (n − k)w_r = nw_c, and thus w_c < w_r. If all rows are linearly independent, the code rate is (w_r − w_c)/w_r; otherwise, it is k/n. Typically, w_c ≥ 3. A parity check matrix with minimum column weight w_c will have a minimum distance d_min ≥ w_c + 1.

Electronic supplementary material The online version of this chapter (doi:10.1007/978-81-


322-2292-7_8) contains supplementary material, which is available to authorized users.


When w_c ≥ 3, there is at least one LDPC code whose minimum distance d_min grows linearly with the block length n [1]; thus, a longer code length yields a better coding gain. Most regular LDPC codes are constructed with w_c and w_r on the order of 3 or 4.

8.2 Construction of Parity Check Matrix H

8.2.1 Gallager Method for Random Construction of H for Regular Codes

In this method, the transpose of the regular (n, w_c, w_r) parity check matrix H has the form

H^T = [H_1^T, H_2^T, ..., H_wc^T]    (8.1)

The matrix H_1 has n columns and n/w_r rows. H_1 contains a single 1 in each column and contains 1s in its ith row from column (i − 1)w_r + 1 to column i·w_r. Permuting randomly the columns of H_1 with equal probability, the matrices H_2 to H_wc are obtained.
The parity check matrix for the (n = 20, w_c = 3, w_r = 4) code constructed by Gallager [1] is given as
H =
[1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1
 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0
 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0
 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0
 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0
 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1
 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0
 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0
 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0
 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0
 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1]
    (8.2)

The following MATLAB program can be used to generate the Gallager regular parity check matrix H with different code rates.

Program 8.1 MATLAB program to generate Gallager regular parity check matrix
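The listing below is an illustrative sketch of Gallager's construction as described above, not a reproduction of Program 8.1; the function name gallagerH and its interface are ours, and n is assumed to be divisible by w_r:

function H = gallagerH(n, wc, wr)
% Sketch of Gallager's random construction: H1 has n/wr rows; row i has
% 1s in columns (i-1)*wr+1 to i*wr. H2..Hwc are random column
% permutations of H1, stacked below it.
rows = n/wr;                       % n must be divisible by wr
H1 = zeros(rows, n);
for i = 1:rows
    H1(i, (i-1)*wr+1 : i*wr) = 1;
end
H = H1;
for j = 2:wc
    H = [H; H1(:, randperm(n))];   % append a column-permuted copy
end
end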

8.2.2 Algebraic Construction of H for Regular Codes

The construction of the parity check matrix H using the algebraic construction is as follows [2, 3]. Consider an identity matrix I_a where a > (w_c − 1)(w_r − 1), and obtain the following matrix by cyclically shifting the rows of the identity matrix I_a by one position to the right:

A =
[0 1 0 0 ... 0
 0 0 1 0 ... 0
 0 0 0 1 ... 0
 0 0 0 0 ... 1
 1 0 0 0 ... 0]
    (8.3)

Defining A^0 = I_a, the parity check matrix H can be constructed as

H =
[A^0  A^0        A^0         ...  A^0
 A^0  A^1        A^2         ...  A^(w_r−1)
 A^0  A^2        A^4         ...  A^2(w_r−1)
 ...
 A^0  A^(w_c−1)  A^2(w_c−1)  ...  A^((w_c−1)(w_r−1))]
    (8.4)

The constructed H matrix has w_c·a rows and w_r·a columns, and it is that of a regular (w_r·a, w_c, w_r) code having the same number w_r of ones in each row and the same number w_c of ones in each column. The construction is four-cycle free. The algebraic LDPC codes are easier to decode than random codes. For intermediate n, well-designed algebraic codes yield a low BER [4, 5].
Example 8.1 Construct an H matrix with w_c = 2 and w_r = 3 using the algebraic construction method.

Solution Since (w_c − 1)(w_r − 1) = 2, choose a = 3. Then

A =
[0 1 0
 0 0 1
 1 0 0]

H = [A^0 A^0 A^0
     A^0 A^1 A^2] =
[1 0 0 1 0 0 1 0 0
 0 1 0 0 1 0 0 1 0
 0 0 1 0 0 1 0 0 1
 1 0 0 0 1 0 0 0 1
 0 1 0 0 0 1 1 0 0
 0 0 1 1 0 0 0 1 0]

8.2.3 Random Construction of H for Irregular Codes

In the random construction of the parity check matrix H, the matrix is filled with ones and zeros randomly, satisfying the LDPC properties. The following MATLAB program generates a rate-1/2 irregular parity check matrix H with the ones distributed uniformly at random within each column.

Program 8.2 MATLAB program to generate rate 1/2 irregular parity check matrix H
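An illustrative sketch of such a random fill (not a reproduction of Program 8.2) is given below; drawing the column weights from {2, 3} is our assumption:

% Sketch: rate-1/2 irregular H of size m x n with a few 1s placed
% uniformly at random in each column.
n = 10; m = n/2;
H = zeros(m, n);
for j = 1:n
    w = randi([2 3]);         % random column weight (assumed 2 or 3)
    rows = randperm(m, w);    % distinct random row positions
    H(rows, j) = 1;
end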

An example of a parity check matrix for an irregular LDPC code is

H =
[1 1 0 1 1 0 0 1 0 0
 0 1 1 0 1 1 1 0 0 0
 0 0 0 1 0 0 0 1 1 1
 1 1 0 0 0 1 1 0 1 0
 0 0 1 0 0 1 0 1 0 1]
    (8.5)

8.3 Representation of Parity Check Matrix Using Tanner Graphs

The Tanner graph of the parity check matrix H is a bipartite graph. It has bit nodes or variable nodes (VNs) equal in number to the columns of H, and check nodes (CNs) equal in number to the rows of H. If H_ji = 1, i.e., if variable i participates in the jth parity check constraint, then check node j is connected to variable node i.

Fig. 8.1 Tanner graph of the H matrix of Example 8.2 (check nodes above, bit nodes below)

Example 8.2 Construct the Tanner graph for the following parity check matrix

[1 1 0 0 1 1 1 1 0 0
 1 0 1 1 0 1 0 1 0 1
 0 1 0 1 1 0 0 1 1 1
 1 0 1 0 1 0 1 0 1 1
 0 1 1 1 0 1 1 0 1 0]

Solution The H matrix has 10 columns and 5 rows. Hence, the associated Tanner graph with 10 bit nodes and 5 check nodes is shown in Fig. 8.1.

8.3.1 Cycles of Tanner Graph

Consider the following parity check matrix

H =
[1 1 0 1 0 0
 1 1 0 0 1 0
 0 0 1 0 1 1
 0 0 1 1 0 1]
    (8.6)

The Tanner graph of the H matrix is shown in Fig. 8.2. A sequence of connected nodes starting and ending at the same node, visiting no node more than once, is a cycle of a Tanner graph. The number of edges in a cycle is called the cycle length, and the smallest cycle length in a graph is the girth of the graph. Cycles of length 4 arise where a pair of rows shares 1s in a particular pair of columns of the above H matrix. A cycle of length 4 is shown in bold in Fig. 8.2.

Fig. 8.2 A Tanner graph with a cycle of length 4 (check nodes above, bit nodes below)

The lower bound on the minimum distance of a four-cycle-free (w_c, w_r) regular LDPC code with parity check matrix of girth g is given by [6]

d_min ≥ 1 + w_c + w_c(w_c − 1) + w_c(w_c − 1)^2 + ... + w_c(w_c − 1)^((g−6)/4)    for odd g/2
d_min ≥ 1 + w_c + w_c(w_c − 1) + w_c(w_c − 1)^2 + ... + w_c(w_c − 1)^((g−8)/4)    otherwise
    (8.7)

Thus, the minimum distance can be increased by increasing the girth or the column weight.

8.3.2 Detection and Removal of Girth 4 of a Parity Check Matrix

If the Tanner graph of a parity check matrix contains no loops, then the decoding is quickly computable. Unfortunately, LDPC codes have loopy graphs, and so the algorithm needs to be repeatedly iterated until it converges to a solution. The effect of girth on the performance of LDPC codes can be reduced by choosing codes whose Tanner graphs have longer girths. However, longer girths are not helpful for finite length codes. A girth of 6 is sufficient, and hence the removal of girth 4 is required. A lemma in [7] states that the H matrix has no girth 4 if and only if all the off-diagonal entries of the matrix [H^T H] are at most 1.
A standard approach [8] is to search the parity check matrix H for rectangles of four 1s. Eliminating such a rectangle by reshuffling some elements, while preserving the other relevant properties of the matrix, is equivalent to removing a girth-4 cycle from the Tanner graph.
The detection and removal of girth 4 is illustrated through the following numerical example using MATLAB.

Example 8.3 Consider the following (10, 3, 6) regular parity check matrix

H =
[1 1 1 1 0 1 1 0 0 1
 0 0 1 1 1 1 1 1 0 0
 0 1 0 1 0 1 0 1 1 1
 1 0 1 0 1 0 0 1 1 1
 1 1 0 0 1 0 1 0 1 1]

The following MATLAB program can be used for detection and removal of girth 4 of the given H matrix.

Program 8.3 MATLAB program for detection and removal of girth 4 of a given
parity check matrix H
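An illustrative sketch of the detection step, together with one simple reshuffling strategy, is given below (this is not Program 8.3; it assumes H holds the matrix of Example 8.3, and in practice the search must be repeated until no rectangles remain, since moving a 1 may create new ones):

% Detect 4-cycles: two columns sharing more than one row close a
% length-4 cycle; the shared rows form a rectangle of four 1s.
S = H'*H;                          % S(a,b) = rows shared by columns a and b
S(1:size(S,1)+1:end) = 0;          % ignore the diagonal
[a,b] = find(triu(S) >= 2);        % offending column pairs
for p = 1:length(a)
    r = find(H(:,a(p)) & H(:,b(p)));   % rows of the rectangle
    if length(r) >= 2
        H(r(1), b(p)) = 0;             % break the rectangle
        cand = find(H(:,b(p)) == 0);   % rows where this column has a 0
        cand(cand == r(1)) = [];       % do not put the 1 back where it was
        H(cand(1), b(p)) = 1;          % move the 1, keeping the column weight
    end
end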

The results obtained from the above MATLAB program are shown in Figs. 8.3
and 8.4.

Fig. 8.3 Entries of H with girth 4

Fig. 8.4 Entries of girth-4-free H



From Fig. 8.3, it is observed that some off-diagonal entries of the matrix [H^T H] exceed 1. Hence, the given H matrix has girth 4, whereas Fig. 8.4 shows the girth-4-free H.

8.4 LDPC Encoding

8.4.1 Preprocessing Method

For encoding purposes, we may derive a generator matrix G from the parity check matrix H for LDPC codes by means of Gaussian elimination in modulo-2 arithmetic. Since the matrix G is generated once for a parity check matrix, it is usable in all encoding of messages. As such, this method can be viewed as a preprocessing method.
The 1 × n code vector c is first partitioned as

c = [b : m]    (8.8)

where m is the 1 × k message vector and b is the 1 × (n − k) parity vector. Correspondingly, the parity check matrix H is partitioned as

H^T = [H_1
       ---
       H_2]    (8.9)

where H_1 is a square matrix of dimensions (n − k) × (n − k) and H_2 is a rectangular matrix of dimensions k × (n − k); transposition, symbolized by the superscript T, is used in the partitioning of matrix H for convenience of representation. Imposing the constraint cH^T = 0, we may write

[b : m] [H_1
         ---
         H_2] = 0    (8.10)

or equivalently,

bH_1 + mH_2 = 0    (8.11)

The vectors m and b are related by

b = mP    (8.12)

where P is the coefficient matrix. For any nonzero message vector m, the coefficient matrix of LDPC codes satisfies the condition

PH_1 + H_2 = 0    (8.13)

which holds for all nonzero message vectors and, in particular, for those of the form [0 ... 0 1 0 ... 0] that isolate individual rows of the generator matrix. Solving Eq. (8.13) for the matrix P, we get

P = H_2 H_1^{-1}    (8.14)

where H_1^{-1} is the inverse matrix of H_1, which is naturally defined in modulo-2 arithmetic. Finally, the generator matrix of LDPC codes is defined by

G = [P : I_k] = [H_2 H_1^{-1} : I_k]    (8.15)

where I_k is the k × k identity matrix. The code word can be generated as

C = mG    (8.16)
Example 8.4 Construct the generator matrix G for the following (10, 3, 5) regular parity check matrix.

[1 1 0 1 0 1 : 0 0 1 0
 0 1 1 0 1 0 : 1 1 0 0
 1 0 0 0 1 1 : 0 0 1 1
 0 1 1 1 0 1 : 1 0 0 0
 1 0 1 0 1 0 : 0 1 0 1
 0 0 0 1 0 0 : 1 1 1 1]

Solution

H_1 =
[1 0 1 0 1 0
 1 1 0 1 0 0
 0 1 0 1 1 0
 1 0 0 1 0 1
 0 1 1 0 1 0
 1 0 1 1 0 0]

H_2 =
[0 1 0 1 0 1
 0 1 0 0 1 1
 1 0 1 0 0 1
 0 0 1 0 1 1]

Letting mH_2 = u, the following relation can be written from Eq. (8.11):

[b0 b1 b2 b3 b4 b5]
[1 0 1 0 1 0
 1 1 0 1 0 0
 0 1 0 1 1 0
 1 0 0 1 0 1
 0 1 1 0 1 0
 1 0 1 1 0 0] = [u0 u1 u2 u3 u4 u5]

The above relation between b and u leads to the following equations:

b0 + b1 + b3 + b5 = u0
b1 + b2 + b4 = u1
b0 + b4 + b5 = u2
b1 + b2 + b3 + b5 = u3
b0 + b2 + b4 = u4
b3 = u5

Solving the above equations using modulo-2 arithmetic, we obtain

b0 = u1 + u2 + u3 + u5
b1 = u2 + u3 + u4 + u5
b2 = u0 + u1 + u2 + u5
b3 = u5
b4 = u0 + u3 + u4
b5 = u0 + u1 + u4 + u5

Since b = uH_1^{-1}, the above equations can be written in matrix form as

b = [u]
[0 0 1 0 1 1
 1 0 1 0 0 1
 1 1 1 0 0 0
 1 1 0 0 1 0
 0 1 0 0 1 1
 1 1 1 1 0 1]

Thus,

H_1^{-1} =
[0 0 1 0 1 1
 1 0 1 0 0 1
 1 1 1 0 0 0
 1 1 0 0 1 0
 0 1 0 0 1 1
 1 1 1 1 0 1]

H_2 H_1^{-1} =
[0 1 0 1 0 1
 0 1 0 0 1 1
 1 0 1 0 0 1
 0 0 1 0 1 1]
[0 0 1 0 1 1
 1 0 1 0 0 1
 1 1 1 0 0 0
 1 1 0 0 1 0
 0 1 0 0 1 1
 1 1 1 1 0 1]
=
[1 0 0 1 1 0
 0 0 0 1 1 1
 0 0 1 1 1 0
 0 1 0 1 1 0]

The generator matrix is G = [H_2 H_1^{-1} : I_k].

Example 8.5 Construct the LDPC code word for the following parity check matrix with the message vector m = [1 0 0 0 1].

H =
[1 1 0 0 1 1 1 1 0 0
 1 0 1 1 0 1 0 1 0 1
 0 1 0 1 1 0 0 1 1 1
 1 0 1 0 1 0 1 0 1 1
 0 1 1 1 0 1 1 0 1 0]

Solution The parity check matrix H is of order 5 × 10. We know that H^T = [H_1; H_2]; then,

H^T =
[1 1 0 1 0
 1 0 1 0 1
 0 1 0 1 1
 0 1 1 0 1
 1 0 1 1 0
 1 1 0 0 1
 1 0 0 1 1
 1 1 1 0 0
 0 0 1 1 1
 0 1 1 1 0]

H_1 =
[1 1 0 1 0
 1 0 1 0 1
 0 1 0 1 1
 0 1 1 0 1
 1 0 1 1 0]

and H_2 =
[1 1 0 0 1
 1 0 0 1 1
 1 1 1 0 0
 0 0 1 1 1
 0 1 1 1 0]

Letting mH_2 = u, the following relation can be written from Eq. (8.11):

[b0 b1 b2 b3 b4]
[1 1 0 1 0
 1 0 1 0 1
 0 1 0 1 1
 0 1 1 0 1
 1 0 1 1 0] = [u0 u1 u2 u3 u4]

The above relation between b and u leads to the following equations:

b0 + b1 + b4 = u0
b0 + b2 + b3 = u1
b1 + b3 + b4 = u2
b0 + b2 + b4 = u3
b1 + b2 + b3 = u4

Solving the above equations, we obtain

b0 = u2 + u3 + u4
b1 = u1 + u2 + u3
b2 = u0 + u1 + u2
b3 = u0 + u3 + u4
b4 = u0 + u1 + u4

Since b = uH_1^{-1}, the above equations can be written in matrix form as

b = [u]
[0 0 1 1 1
 0 1 1 0 1
 1 1 1 0 0
 1 1 0 1 0
 1 0 0 1 1]

Thus,

H_1^{-1} =
[0 0 1 1 1
 0 1 1 0 1
 1 1 1 0 0
 1 1 0 1 0
 1 0 0 1 1]

H_2 H_1^{-1} =
[1 1 0 0 1
 1 0 0 1 1
 1 1 1 0 0
 0 0 1 1 1
 0 1 1 1 0]
[0 0 1 1 1
 0 1 1 0 1
 1 1 1 0 0
 1 1 0 1 0
 1 0 0 1 1]
=
[1 1 0 0 1
 0 1 1 1 0
 1 0 1 1 0
 1 0 1 0 1
 0 1 0 1 1]

The generator matrix G = [H_2 H_1^{-1} : I_k]:

G =
[1 1 0 0 1 1 0 0 0 0
 0 1 1 1 0 0 1 0 0 0
 1 0 1 1 0 0 0 1 0 0
 1 0 1 0 1 0 0 0 1 0
 0 1 0 1 1 0 0 0 0 1]

The code word can be generated as C = mG:

C = [1 0 0 0 1]
[1 1 0 0 1 1 0 0 0 0
 0 1 1 1 0 0 1 0 0 0
 1 0 1 1 0 0 0 1 0 0
 1 0 1 0 1 0 0 0 1 0
 0 1 0 1 1 0 0 0 0 1]

C = [1 0 0 1 0 1 0 0 0 1]

As a check,

CH^T = [1 0 0 1 0 1 0 0 0 1]
[1 1 0 1 0
 1 0 1 0 1
 0 1 0 1 1
 0 1 1 0 1
 1 0 1 1 0
 1 1 0 0 1
 1 0 0 1 1
 1 1 1 0 0
 0 0 1 1 1
 0 1 1 1 0] = [0 0 0 0 0]
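The preprocessing method can be sketched in a few lines of MATLAB. The following illustrative fragment (ours, not one of the book's programs) reproduces the result of Example 8.5 by inverting H_1 in modulo-2 arithmetic with Gauss-Jordan elimination; it assumes H_1 is non-singular over GF(2):

H = [1 1 0 0 1 1 1 1 0 0;
     1 0 1 1 0 1 0 1 0 1;
     0 1 0 1 1 0 0 1 1 1;
     1 0 1 0 1 0 1 0 1 1;
     0 1 1 1 0 1 1 0 1 0];
msg = [1 0 0 0 1];
[r, n] = size(H); k = n - r;
Ht = H.';
H1 = Ht(1:r, :);  H2 = Ht(r+1:end, :);  % partition of Eq. (8.9)
A = [H1 eye(r)];                        % augment to invert H1 mod 2
for col = 1:r                           % Gauss-Jordan over GF(2)
    piv = find(A(col:end, col), 1) + col - 1;
    A([col piv], :) = A([piv col], :);  % swap pivot row into place
    for row = [1:col-1, col+1:r]
        if A(row, col)
            A(row, :) = mod(A(row, :) + A(col, :), 2);
        end
    end
end
H1inv = A(:, r+1:end);
G = [mod(H2*H1inv, 2) eye(k)];          % generator matrix, Eq. (8.15)
c = mod(msg*G, 2)                       % gives [1 0 0 1 0 1 0 0 0 1]
check = mod(c*H.', 2)                   % all zeros: valid code word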

8.5 Efficient Encoding of LDPC Codes

The preprocessing method discussed in Sect. 8.4.1 for finding a generator matrix G for a given H can be used for encoding any arbitrary message bit vector of size 1 × k. However, it has a complexity of O(n^2) [9]. An LDPC code can instead be encoded using the parity check matrix directly by the efficient encoding method [6], which has a complexity of O(n). The stepwise procedure of efficient encoding of LDPC codes [10] is as follows:

Step 1: By performing row and column permutations, the non-singular parity check matrix H is brought into the approximate lower triangular form indicated in Fig. 8.5. More precisely, the H matrix is brought into the form

H_t = [A B T
       C D E]    (8.17)

with a gap g as small as possible, where A is an (m − g) × (n − m) matrix, B is an (m − g) × g matrix, T is an (m − g) × (m − g) matrix, C is a g × (n − m) matrix, D is a g × g matrix, and E is a g × (m − g) matrix. All of these matrices are sparse, and T is lower triangular with ones along the diagonal.

Step 2: Premultiply H_t by

[I_(m−g)    0
 −ET^{-1}   I_g]

Fig. 8.5 The parity check matrix in approximate lower triangular form

[I_(m−g)    0] [A B T]   [A              B              T]
[−ET^{-1} I_g] [C D E] = [−ET^{-1}A + C  −ET^{-1}B + D  0]    (8.18)

It must be checked that −ET^{-1}B + D is non-singular; if necessary, this is ensured by performing further column permutations.

Step 3: Obtain p_1 using

p_1^T = −φ^{-1} (−ET^{-1}A + C) s^T    (8.19)

where φ = −ET^{-1}B + D and s is the message vector (in modulo-2 arithmetic the minus signs may be dropped).

Step 4: Obtain p_2 using

p_2^T = −T^{-1} (As^T + Bp_1^T)    (8.20)

Step 5: Form the code vector c as

c = [s p_1 p_2]    (8.21)

p_1 holds the first g parity bits, and p_2 contains the remaining parity bits.

Example 8.6 Construct the LDPC code word for the following parity check matrix with the message vector m = [1 0 0 0 1].

H =
[1 1 0 0 1 1 1 1 0 0
 1 0 1 1 0 1 0 1 0 1
 0 1 0 1 1 0 0 1 1 1
 1 0 1 0 1 0 1 0 1 1
 0 1 1 1 0 1 1 0 1 0]

Solution
Step 1: The second and third rows and the third and tenth columns are swapped to obtain

H_t =
[1 1 0 0 1 1 1 1 0 0
 0 1 1 1 1 0 0 1 1 0
 1 0 1 1 0 1 0 1 0 1
 1 0 1 0 1 0 1 0 1 1
 0 1 0 1 0 1 1 0 1 1]

which is in the form of Eq. (8.17) with gap g = 2.

Step 2:

T^{-1} =
[1 0 0
 1 1 0
 1 0 1]

E =
[0 1 1
 0 1 1]

Step 3:

p_1^T = (ET^{-1}B + D)^{-1} (ET^{-1}A + C) s^T = [0
                                                  0]

Step 4:

p_2^T = T^{-1} (As^T + Bp_1^T) = [0
                                  1
                                  1]

Step 5:

c = [s p_1 p_2] = [1 0 0 0 1 0 0 0 1 1].

8.5.1 Efficient Encoding of LDPC Codes Using MATLAB

The following example illustrates the efficient encoding of LDPC codes using MATLAB.

Example 8.7 Write a MATLAB program to encode a random message vector with the following parity check matrix.

H =
[1 1 0 1 1 0 0 1 0 0
 0 1 1 0 1 1 1 0 0 1
 0 0 0 1 0 0 0 1 1 1
 1 1 0 0 0 1 1 0 1 0
 0 0 1 0 0 1 0 1 0 1]

Program 8.4 MATLAB program for efficient encoding of LDPC Codes
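An illustrative sketch of Steps 2–5 (not a reproduction of Program 8.4) is given below. It assumes H_t has already been permuted into the form of Eq. (8.17) with a known gap g, and that T and φ are non-singular over GF(2); for the H_t of Example 8.6 with s = [1 0 0 0 1] and g = 2, it returns [1 0 0 0 1 0 0 0 1 1]:

function c = ldpc_encode_alt(Ht, s, g)
% Efficient encoding given Ht in approximate lower triangular form.
[m, n] = size(Ht); k = n - m;
A = Ht(1:m-g, 1:k);    B = Ht(1:m-g, k+1:k+g);   T = Ht(1:m-g, k+g+1:n);
C = Ht(m-g+1:m, 1:k);  D = Ht(m-g+1:m, k+1:k+g); E = Ht(m-g+1:m, k+g+1:n);
Ti  = mod2inv(T);
phi = mod(E*Ti*B + D, 2);                          % must be non-singular
p1 = mod(mod2inv(phi) * mod((E*Ti*A + C)*s.', 2), 2).';  % Eq. (8.19)
p2 = mod(Ti * mod(A*s.' + B*p1.', 2), 2).';              % Eq. (8.20)
c  = [s p1 p2];                                          % Eq. (8.21)
end

function X = mod2inv(M)
% Modulo-2 matrix inverse by Gauss-Jordan elimination
r = size(M,1); A = [mod(M,2) eye(r)];
for col = 1:r
    piv = find(A(col:end,col),1) + col - 1;
    A([col piv],:) = A([piv col],:);
    for row = [1:col-1, col+1:r]
        if A(row,col), A(row,:) = mod(A(row,:)+A(col,:),2); end
    end
end
X = A(:, r+1:end);
end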


8.6 LDPC Decoding

In LDPC decoding, the notation B_j is used to represent the set of bits in the jth parity check equation of H, and the notation A_i is used to represent the set of parity check equations in which the ith bit of the code participates. Consider the following parity check matrix

H =
[1 1 1 0 0 0
 1 0 0 1 1 0
 0 1 0 1 0 1
 0 0 1 0 1 1]
    (8.22)

For the above parity check matrix, we get

B1 = {1, 2, 3}; B2 = {1, 4, 5}; B3 = {2, 4, 6}; B4 = {3, 5, 6};

A1 = {1, 2}; A2 = {1, 3}; A3 = {1, 4}; A4 = {2, 3}; A5 = {2, 4}; A6 = {3, 4}

8.6.1 LDPC Decoding on Binary Erasure Channel Using Message-Passing Algorithm

The message-passing algorithms are iterative decoding algorithms which pass messages back and forth between the bit nodes and check nodes until the process is stopped. The message M_i indicates 0 or 1 for known bit values and e for an erased bit. The stepwise procedure for LDPC decoding on the BEC is as follows:

Step 1: Set M = y; find B_j and A_i of H
Step 2: iter = 1
Step 3: If all messages into check j other than M_i are known, compute all check sums by using the following expression

E_{j,i} = Σ_{i'∈B_j, i'≠i} (M_{i'} mod 2)

else E_{j,i} = e
Step 4: If M_i = e and if there exists j ∈ A_i such that E_{j,i} ≠ e, set M_i = E_{j,i}.
Step 5: If all M_i are known or iter = iter_max, stop; else
Step 6: iter = iter + 1, go to Step 3.

Example 8.8 For the parity check matrix given by Eq. (8.22), c = [1 0 1 1 0 1] is a valid code word since cH^T = 0. If the code word is sent through the BEC, the received vector is y = [1 0 e e e 1]. Decode the received vector to recover the erased bits using the message-passing algorithm.

Solution For Step 3 of the algorithm, the first check node is joined to the first, second, and third bit nodes, having incoming messages 1, 0, and e, respectively. This check node has one incoming e message, from the third bit node. Hence, we can calculate the outgoing message E_{1,3} on the edge from the first check node to the third bit node:

E_{1,3} = M_1 ⊕ M_2 = 1 ⊕ 0 = 1.

The second check node is joined to the first, fourth, and fifth bit nodes having
incoming messages 1, e, and e, respectively. As this check node has two e mes-
sages, the outgoing messages from this check node are all e.
The third check node is joined to the second, fourth, and sixth bits receiving
incoming messages 0, e, and 1, respectively. This check node has one incoming e

message, from the fourth bit node. Hence, the outgoing message E_{3,4} on the edge from the third check node to the fourth bit node is given by

E_{3,4} = M_2 ⊕ M_6 = 0 ⊕ 1 = 1.

The fourth check node includes the third, fifth, and sixth bits and receives e, e, and 1 messages, respectively. Since this check node receives two e messages, the outgoing messages from this check node are all e.
In Step 4 of the algorithm, each bit node with an unknown value updates its value using its incoming messages. The third bit is unknown and has incoming messages 1 (E_{1,3}) and e (E_{4,3}), and hence the third bit value becomes 1. The fourth bit is not known, and it is set to 1 as it has incoming messages 1 (E_{3,4}) and e (E_{2,4}). The fifth bit is also unknown, but its value cannot be changed because it has e (E_{2,5}) and e (E_{4,5}) as incoming messages. Thus, at the end of Step 4,

M = [1 0 1 1 e 1].

Since the fifth bit remains unknown, the algorithm continues. In the second iteration, in Step 3 of the algorithm, the second check node is joined to the first, fourth, and fifth bit nodes, and so this check node has one incoming e message, M_5. Hence, the outgoing message from this check node to the fifth bit node becomes

E_{2,5} = M_1 ⊕ M_4 = 1 ⊕ 1 = 0.

The fourth check node is joined to the third, fifth, and sixth bit nodes, having one incoming e message, M_5. The outgoing message from this check node to the fifth bit node is

E_{4,5} = M_3 ⊕ M_6 = 1 ⊕ 1 = 0.

In the second iteration, in Step 4, the unknown fifth bit is changed to 0 as it has E_{2,5} and E_{4,5} as incoming messages, both with value 0. The algorithm is stopped, and the decoded code word is

Fig. 8.6 Decoding of the received vector y = [1 0 e e e 1] using message passing: (a) initialization; (b), (c) first iteration, giving M = [1 0 1 1 e 1]; (d), (e) second iteration, giving M = [1 0 1 1 0 1]. The dark line corresponds to message bit 1, the solid line corresponds to message bit 0, and the broken line corresponds to the erasure bit e

ĉ = M = [1 0 1 1 0 1]

Figure 8.6 shows the graphical representation of the message-passing decoding.

8.6.2 LDPC Decoding on Binary Erasure Channel Using MATLAB

The following example illustrates the decoding of LDPC codes on the BEC using MATLAB.

Example 8.9 Write a MATLAB program to implement LDPC decoding on the BEC by assuming the received vector y = [1 e e e e 1 0 0 0 1 e 1 e e e 1] when the following parity check matrix is used to encode the code word.

H =
[1 1 1 1 0 1 1 0 0 0 0 0 0 0 0 0
 1 0 0 1 1 1 0 1 0 0 0 0 0 0 0 0
 0 0 1 1 1 0 0 0 1 0 0 0 0 0 0 0
 0 1 1 1 1 0 0 0 0 1 0 0 0 0 0 0
 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0
 1 1 0 1 1 0 0 0 0 0 1 0 0 0 0 0
 1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0
 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0
 1 1 1 0 1 0 0 0 0 0 0 0 0 1 0 0
 0 1 1 1 0 0 0 0 0 0 0 0 0 0 1 0
 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]

Solution The following MATLAB program decodes the received vector y. In this program, known bit values are indicated by 1 or 0, and an erased bit is indicated by −1.

Program 8.5 MATLAB program Decoding of LDPC Codes on BEC
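An illustrative sketch of the message-passing decoder (not a reproduction of Program 8.5), using the same convention that erased bits are marked −1, is:

function M = bec_decode(H, y, maxiter)
% Message-passing decoding on the BEC: erased bits are -1, known 0/1.
M = y;                                   % Step 1
for iter = 1:maxiter                     % Steps 2-6
    for j = 1:size(H,1)
        Bj = find(H(j,:));               % bits in check j
        for i = Bj
            others = Bj(Bj ~= i);
            if all(M(others) >= 0)       % all other messages known
                E = mod(sum(M(others)), 2);
                if M(i) < 0, M(i) = E; end   % recover erased bit (Step 4)
            end
        end
    end
    if all(M >= 0), break; end           % Step 5: all bits known
end
end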

8.6.3 Bit-Flipping Decoding Algorithm

The received symbols are hard decoded into 1s and 0s to form a binary received vector y. In each iteration, the algorithm computes all check sums, as well as the number of unsatisfied parity checks involving each of the n bits of the vector y. Next, the bits of y are flipped if they are involved in the largest number of unsatisfied parity checks. The process is repeated until all check sums are satisfied or a predetermined number of iterations is reached. The stepwise procedure of the bit-flipping decoding algorithm is as follows:
Step 1: Set M = y; define B_j to represent the jth parity check equation of H
Step 2: l = 0
Step 3: Compute all check sums by using the following expression

E_{j,i} = Σ_{i'∈B_j, i'≠i} (M_{i'} mod 2)    (8.23)

Step 4: Compute the number of unsatisfied parity checks involving each of the n bits of the message
Step 5: Flip the bits of the message that are involved in the largest number of unsatisfied parity checks. The flipping of the ith bit can be performed by using

M_i = (y_i + 1) mod 2    (8.24)

Step 6: Compute s as follows

s = (MH^T) mod 2    (8.25)

Step 7: If s = 0 or l = l_max, stop; else
Step 8: l = l + 1, go to Step 3.

Example 8.10 For the parity check matrix given by Eq. (8.22), c = [1 0 1 1 0 1] is a valid code word since cH^T = 0. If the code word is sent through an AWGN channel, the received vector after a detector hard decision is y = [0 0 1 1 0 1]. Decode the received vector using the bit-flipping algorithm.

Solution The decoder makes a hard decision on each code word bit and returns

y = [0 0 1 1 0 1].

Step 1: Initializing M_i = y_i, so

M = [0 0 1 1 0 1].

Step 2: l = 0
Step 3: The check messages are calculated. The first check node is joined to the first, second, and third bit nodes, B1 = {1, 2, 3}, and so the messages from the first check node are

E_{1,1} = M_2 ⊕ M_3 = 0 ⊕ 1 = 1
E_{1,2} = M_1 ⊕ M_3 = 0 ⊕ 1 = 1
E_{1,3} = M_1 ⊕ M_2 = 0 ⊕ 0 = 0

The second check includes the first, fourth, and fifth bits, B2 = {1, 4, 5}, and so the messages from the second check are

E_{2,1} = M_4 ⊕ M_5 = 1 ⊕ 0 = 1
E_{2,4} = M_1 ⊕ M_5 = 0 ⊕ 0 = 0
E_{2,5} = M_1 ⊕ M_4 = 0 ⊕ 1 = 1

The third check includes the second, fourth, and sixth bits, B3 = {2, 4, 6}, and so the messages from the third check are

E_{3,2} = M_4 ⊕ M_6 = 1 ⊕ 1 = 0
E_{3,4} = M_2 ⊕ M_6 = 0 ⊕ 1 = 1
E_{3,6} = M_2 ⊕ M_4 = 0 ⊕ 1 = 1

The fourth check includes the third, fifth, and sixth bits, B4 = {3, 5, 6}, and so the messages from the fourth check are

E_{4,3} = M_5 ⊕ M_6 = 0 ⊕ 1 = 1
E_{4,5} = M_3 ⊕ M_6 = 1 ⊕ 1 = 0
E_{4,6} = M_3 ⊕ M_5 = 1 ⊕ 0 = 1

Step 4: The first bit has messages 1 and 1 from the first and second checks, respectively, and 0 from the channel; thus, the majority of the messages into the first bit node indicate a value different from the received value. The second bit has messages 1 and 0 from the first and third checks, respectively, and 0 from the channel, so it retains its received value. The third bit has messages 0 and 1 from the first and fourth checks, respectively, and 1 from the channel, so it retains its received value. The fourth bit has messages 0 and 1 from the second and third checks, respectively, and 1 from the channel, so it retains its received value. The fifth bit has messages 1 and 0 from the second and fourth checks, respectively, and 0 from the channel, so it retains its received value. The sixth bit has messages 1 and 1 from the third and fourth checks, respectively, and 1 from the channel, so it retains its received value.

Step 5: Hence, the first bit node flips its value. The new bit vector is thus

M = [1 0 1 1 0 1].

Step 6: Compute s = (MH^T) mod 2:

s = ([1 0 1 1 0 1]
[1 1 0 0
 1 0 1 0
 1 0 0 1
 0 1 1 0
 0 1 0 1
 0 0 1 1]) mod 2 = [0 0 0 0]

There are thus no unsatisfied parity check equations, so the algorithm halts and returns

ĉ = M = [1 0 1 1 0 1]

as the decoded code word. The received vector has therefore been correctly decoded without requiring an explicit search over all possible code words.

8.6.4 Bit-Flipping Decoding Using MATLAB

The following example illustrates the bit-flipping decoding of LDPC codes using MATLAB.

Example 8.11 Write a MATLAB program to implement bit-flipping decoding by assuming the received vector y = [0 1 1 0 1 1] when the following parity check matrix is used to encode the code word:

H =
[1 1 0 1 0 0
 0 1 1 0 1 0
 1 0 0 0 1 1
 0 0 1 1 0 1]

Solution The following MATLAB program decodes the received vector y.

Program 8.6 MATLAB program for Bit Flipping Decoding of LDPC Codes
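An illustrative sketch of the decoder (not a reproduction of Program 8.6) is given below; in each iteration it flips all bits that participate in the largest number of unsatisfied checks:

function M = bitflip_decode(H, y, maxiter)
% Bit-flipping decoding of Sect. 8.6.3 for a hard-decision vector y.
M = y;
for l = 1:maxiter
    s = mod(M * H.', 2);           % syndrome, Eq. (8.25)
    if all(s == 0), break; end     % Step 7: all checks satisfied
    f = s * H;                     % f(i) = unsatisfied checks on bit i
    flip = (f == max(f));          % bits in the most failed checks
    M(flip) = mod(M(flip) + 1, 2); % Step 5, Eq. (8.24)
end
end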

The output of the above program gives the decoded vector

ĉ = y_d = [0 0 1 0 1 1]

Example 8.12 A valid code word is c = [0 0 1 0 0 1] for the following parity check matrix

H =
[1 1 0 1 0 0
 1 1 0 0 1 0
 0 0 1 0 1 1
 0 0 1 1 0 1]

If the code word is transmitted over an AWGN channel, the received vector after a detector hard decision is y = [1 0 1 0 0 1]. Decode the received vector by bit flipping using MATLAB and comment on the result.

Solution Program 8.6 is run with the H matrix and the received vector. The output of the program gives the decoded vector ĉ = y_d = [0 1 1 0 0 1]. The received vector is not decoded correctly due to the girth 4 in the H matrix.

8.7 Sum–Product Decoding

The sum–product algorithm is similar to the bit-flipping algorithm described in the previous section, but the messages representing each decision (whether the bit value is 1 or 0) are now probabilities. Bit-flipping decoding accepts an initial hard decision on the received bits as input, whereas the sum–product algorithm is a soft-decision message-passing algorithm which accepts the probability of each received bit as input. The input channel or received bit probabilities are known before the LDPC decoder is operated, and so they are also called the a priori probabilities of the received bits. In the sum–product decoder, the extrinsic information passed between nodes also consists of probabilities. The extrinsic information between check node j and bit node i is denoted by E_{j,i}; it gives the probability that bit c_i is 1 such that parity check equation j is satisfied. E_{j,i} is not defined if bit i is not included in check j, as there is then no extrinsic information between check node j and bit node i.
The probability that an odd number of the bits in that parity check equation are
1s is given by

P_{j,i}^{ext} = \frac{1}{2} - \frac{1}{2} \prod_{i' \in B_j, i' \neq i} \left(1 - 2P_{j,i'}\right)   (8.26)

which is the probability that parity check equation j is satisfied if bit c_i is 1. The probability that the parity check equation is satisfied if bit c_i is 0 is then 1 - P_{j,i}^{ext}.
The metric for a binary variable is represented by the following log likelihood
ratio (LLR)

L(x) = \log \frac{p(x = 0)}{p(x = 1)}   (8.27)

where by log we mean log_e. The sign of L(x) provides a hard decision on x, and the magnitude |L(x)| is the reliability of this decision. Translating from LLRs back to probabilities,

p(x = 1) = \frac{e^{-L(x)}}{1 + e^{-L(x)}}   (8.28)

p(x = 0) = \frac{e^{L(x)}}{1 + e^{L(x)}}   (8.29)

When probabilities need to be multiplied, LLRs need only be added, which reduces the complexity of the sum–product decoder. This is the benefit of the logarithmic representation of probabilities. The extrinsic information from check node j to bit node i is expressed as an LLR,

E_{j,i} = L\left(P_{j,i}^{ext}\right) = \log \frac{1 - P_{j,i}^{ext}}{P_{j,i}^{ext}}   (8.30)

Now
E_{j,i} = \log \frac{\frac{1}{2} + \frac{1}{2}\prod_{i' \in B_j, i' \neq i}\left(1 - 2P_{j,i'}\right)}{\frac{1}{2} - \frac{1}{2}\prod_{i' \in B_j, i' \neq i}\left(1 - 2P_{j,i'}\right)}
      = \log \frac{1 + \prod_{i' \in B_j, i' \neq i}\left(1 - 2\frac{e^{-M_{j,i'}}}{1 + e^{-M_{j,i'}}}\right)}{1 - \prod_{i' \in B_j, i' \neq i}\left(1 - 2\frac{e^{-M_{j,i'}}}{1 + e^{-M_{j,i'}}}\right)}
      = \log \frac{1 + \prod_{i' \in B_j, i' \neq i}\frac{1 - e^{-M_{j,i'}}}{1 + e^{-M_{j,i'}}}}{1 - \prod_{i' \in B_j, i' \neq i}\frac{1 - e^{-M_{j,i'}}}{1 + e^{-M_{j,i'}}}}   (8.31)

where M_{j,i'} \triangleq L(P_{j,i'}) = \log \frac{1 - P_{j,i'}}{P_{j,i'}}.

Using the relationship




\tanh\left(\frac{1}{2}\log\frac{1-p}{p}\right) = 1 - 2p   (8.32)

gives
E_{j,i} = \log \frac{1 + \prod_{i' \in B_j, i' \neq i} \tanh\left(M_{j,i'}/2\right)}{1 - \prod_{i' \in B_j, i' \neq i} \tanh\left(M_{j,i'}/2\right)}   (8.33)

Alternatively, using the relationship

2\tanh^{-1} p = \log\frac{1+p}{1-p}   (8.34)

Then,
E_{j,i} = 2\tanh^{-1}\left(\prod_{i' \in B_j, i' \neq i} \tanh\left(M_{j,i'}/2\right)\right)   (8.35)

The above equation is numerically challenging due to the presence of the product of the tanh and tanh^{-1} functions. Following Gallager, we can improve the situation as follows. First, factor M_{ji} into its sign and magnitude (or bit value and bit reliability):

M_{ji} = \alpha_{ji}\beta_{ji}   (8.36)

\alpha_{ji} = \operatorname{sign}(M_{ji})   (8.36a)

\beta_{ji} = |M_{ji}|   (8.36b)

So that Eq. (8.35) may be rewritten as

\prod_{i' \in B_j, i' \neq i} \tanh\left(\frac{1}{2}M_{ji'}\right) = \prod_{i' \in B_j, i' \neq i} \alpha_{ji'} \cdot \prod_{i' \in B_j, i' \neq i} \tanh\left(\frac{1}{2}\beta_{ji'}\right)   (8.37)

We then have

E_{ji} = \prod_{i'} \alpha_{ji'} \cdot 2\tanh^{-1}\left(\prod_{i'} \tanh\left(\frac{1}{2}\beta_{ji'}\right)\right)
       = \prod_{i'} \alpha_{ji'} \cdot 2\tanh^{-1} \log^{-1}\left(\log \prod_{i'} \tanh\left(\frac{1}{2}\beta_{ji'}\right)\right)
       = \prod_{i'} \alpha_{ji'} \cdot 2\tanh^{-1} \log^{-1}\left(\sum_{i'} \log \tanh\left(\frac{1}{2}\beta_{ji'}\right)\right)   (8.38)

This yields a new form for Eq. (8.38) as

E_{ji} = \prod_{i'} \alpha_{ji'} \cdot \phi\left(\sum_{i'} \phi\left(\beta_{ji'}\right)\right)   (8.39)

where \phi(x) is defined as

\phi(x) = -\log\left[\tanh(x/2)\right] = \log\frac{e^x + 1}{e^x - 1}   (8.40)

using the fact that \phi^{-1}(x) = \phi(x) when x > 0.



Each bit node has access to the input LLR, L_i, and to the LLRs from every connected check node. The total LLR of the ith bit is the sum of these LLRs:

L_i^{total} = L_i + \sum_{j \in A_i} E_{ji}   (8.41)

The hard decision on the received bits is simply given by the signs of the L_i^{total}. Check whether the parity check equations are satisfied (thus, \hat{c}H^T = 0 is also a stopping criterion for sum–product decoding); if not satisfied, update M_{ji}:

M_{ji} = L_i + \sum_{j' \in A_i, j' \neq j} E_{j'i}   (8.42)

The algorithm outputs the estimated a posteriori bit probabilities of the received bits as LLRs.
The sum–product decoder stops as soon as a valid code word has been found, by checking whether the parity check equations are satisfied (i.e., \hat{c}H^T = 0), or when the maximum allowed number of iterations is reached. The decoder is initialized by setting all VN messages M_{ji} equal to

L_i = L(c_i | y_i) = \log \frac{\Pr(c_i = 0 | y_i)}{\Pr(c_i = 1 | y_i)}   (8.43)

for all j, i for which h_{ij} = 1. Here, y_i represents the channel value that was actually received, that is, it is not a variable here. The L_i for different channels can be computed as follows [10].
BEC
In this case, y_i \in \{0, 1, e\} and

L_i = L(c_i | y_i) = \begin{cases} +\infty & y_i = 0 \\ -\infty & y_i = 1 \\ 0 & y_i = e \end{cases}   (8.44)

BSC
In this case, y_i \in \{0, 1\}, and we have

L_i = L(c_i | y_i) = (-1)^{y_i} \log \frac{1 - P}{P}   (8.45)

The knowledge of the crossover probability P is necessary.



BI-AWGNC
The ith received sample is y_i = x_i + n_i, where the n_i are independent and normally distributed as N(0, \sigma^2), with \sigma^2 = N_0/2, where N_0 is the noise density. Then, we can easily show that

\Pr(x_i = x | y_i) = \frac{1}{1 + \exp(-4y_i x / N_0)}   (8.46)

where x \in \{\pm 1\} and, from this, that

L(c_i | y_i) = 4y_i / N_0   (8.47)

An estimate of N_0 is necessary in practice.


Rayleigh
The model for the Rayleigh fading channel is similar to that of the AWGNC: y_i = a_i x_i + n_i, where the a_i are independent Rayleigh random variables with unity variance. The channel transition probability can be expressed as

P(x_i = x | y_i) = \frac{1}{1 + \exp(-4a_i y_i x / N_0)}

Then,

L(c_i | y_i) = 4a_i y_i / N_0   (8.48)

The estimates of a_i and \sigma^2 are necessary in practice.

Now, the stepwise procedure for the log domain sum–product algorithm is given in the following subsection.

8.7.1 Log Domain Sum–Product Algorithm (SPA)

Step 1: Initialization: for all i, initialize L_i according to Eqs. (8.44)–(8.48) for the appropriate channel model. Then, for all i, j for which h_{i,j} = 1, set M_{ji} = L_i and l = 0. Define B_j to represent the set of bits in the jth parity check equation of H and A_i to represent the set of parity check equations involving the ith bit of the code.
Step 2: CN update: compute outgoing CN message Eji for each CN using
Eqs. (8.36), (8.39), and (8.40).

M_{ji} = \alpha_{ji}\beta_{ji}

\alpha_{ji} = \operatorname{sign}(M_{ji}), \quad \beta_{ji} = |M_{ji}|

E_{ji} = \prod_{i'} \alpha_{ji'} \cdot \phi\left(\sum_{i'} \phi\left(\beta_{ji'}\right)\right)

\phi(x) = -\log\left[\tanh(x/2)\right] = \log\frac{e^x + 1}{e^x - 1}

Step 3: LLR total: for i = 0, 1, \ldots, N-1, compute the total LLR using Eq. (8.41):

L_i^{total} = L_i + \sum_{j \in A_i} E_{ji}

Step 4: Stopping criteria: for i = 0, 1, \ldots, N-1, set

\hat{c}_i = \begin{cases} 1 & \text{if } L_i^{total} < 0 \\ 0 & \text{else} \end{cases}

to obtain \hat{c}. If \hat{c}H^T = 0 or the number of iterations equals the maximum limit (l = l_{max}), stop; else
Step 5: VN update: compute the outgoing VN message M_{ji} for each VN using Eq. (8.42):

M_{ji} = L_i + \sum_{j' \in A_i, j' \neq j} E_{j'i}

Set l = l + 1 and go to Step 2.
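As a concrete illustration of Steps 1–5, a compact (unoptimized) MATLAB sketch of the log domain SPA is given below; the function name and the eps guard inside \phi are our own choices, not part of the algorithm:

function [chat, success] = logspa_decode(L, H, maxIter)
% Log domain sum-product decoding sketch. L: row vector of channel
% LLRs from Eqs. (8.44)-(8.48); H: M x N parity check matrix.
[M, N] = size(H);
Mji = H .* repmat(L, M, 1);             % Step 1: Mji = Li where hij = 1
phi = @(x) -log(tanh(x/2) + eps);       % Eq. (8.40); eps avoids log(0)
for l = 1:maxIter
    E = zeros(M, N);                    % Step 2: check node update
    for j = 1:M
        idx = find(H(j, :));
        for k = idx
            others = idx(idx ~= k);
            E(j, k) = prod(sign(Mji(j, others))) * ...
                      phi(sum(phi(abs(Mji(j, others)))));
        end
    end
    Ltotal = L + sum(E, 1);             % Step 3: total LLRs, Eq. (8.41)
    chat = double(Ltotal < 0);          % Step 4: hard decision
    if all(mod(chat * H', 2) == 0)
        success = true; return;         % valid code word found
    end
    Mji = H .* (repmat(Ltotal, M, 1) - E);  % Step 5: Eq. (8.42)
end
success = false;
end

The Step 5 line exploits the identity L_i + \sum_{j' \neq j} E_{j'i} = L_i^{total} - E_{ji}, which avoids a second nested loop.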

8.7.2 The Min-Sum Algorithm

Consider Eq. (8.39) for E_{ji}. It can be noted from the shape of \phi(x) that the largest term in the sum corresponds to the smallest \beta_{ji}. Hence, assuming that this term dominates the sum, the following relation is obtained [10]:

\phi\left(\sum_{i'} \phi\left(\beta_{ji'}\right)\right) \simeq \phi\left(\phi\left(\min_{i'} \beta_{ji'}\right)\right) = \min_{i'} \beta_{ji'}   (8.49)

Thus, the min-sum algorithm is simply the log domain SPA with Step 2 replaced by

M_{ji} = \alpha_{ji}\beta_{ji}

\alpha_{ji} = \operatorname{sign}(M_{ji}), \quad \beta_{ji} = |M_{ji}|

E_{ji} = \prod_{i'} \alpha_{ji'} \cdot \min_{i'} \beta_{ji'}

It can also be shown that, in the AWGNC case, the initialization M_{ji} = 4y_i/N_0 may be replaced by M_{ji} = y_i when the simplified log domain sum–product algorithm is employed. The advantage, of course, is that an estimate of the noise power N_0 is unnecessary in this case.
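In the SPA sketch given after Sect. 8.7.1, this simplification amounts to replacing the check node update by a sign-and-minimum computation; a possible fragment, reusing the same variable names as that sketch, is:

% Min-sum check node update (replaces Step 2 of the SPA sketch)
for j = 1:M
    idx = find(H(j, :));
    for k = idx
        others = idx(idx ~= k);
        E(j, k) = prod(sign(Mji(j, others))) * min(abs(Mji(j, others)));
    end
end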

Example 8.13 A code word generated using the parity check matrix

H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix}

is sent through an AWGN channel with N_0 = 0.3981, and the received vector is y = [−0.9865 0.3666 0.4024 0.7638 0.2518 −1.6662]. Decode the received vector using the sum–product algorithm.
Solution Assuming the BPSK mapping x_i = 2c_i − 1 (so that c_i = 0 maps to x_i = −1), the channel LLRs are

L = -4\frac{y}{N_0} = [9.9115\ \ {-3.6830}\ \ {-4.0430}\ \ {-7.6738}\ \ {-2.5295}\ \ 16.7415]

To begin decoding, we set

M_{j,i} = L_i

The first bit is included in the first and second checks, and so M_{1,1} and M_{2,1} are initialized to L_1:

M_{1,1} = L_1 = 9.9115 and M_{2,1} = L_1 = 9.9115.

Repeating this for the remaining bits gives:

For i = 2: M_{1,2} = L_2 = −3.6830; M_{3,2} = L_2 = −3.6830;
For i = 3: M_{1,3} = L_3 = −4.0430; M_{4,3} = L_3 = −4.0430;
For i = 4: M_{2,4} = L_4 = −7.6738; M_{3,4} = L_4 = −7.6738;
For i = 5: M_{2,5} = L_5 = −2.5295; M_{4,5} = L_5 = −2.5295;
For i = 6: M_{3,6} = L_6 = 16.7415; M_{4,6} = L_6 = 16.7415.

Now the extrinsic probabilities are calculated for the check to bit messages. The first parity check includes the first, second, and third bits, and so the extrinsic probability from the first check node to the first bit node depends on the probabilities of the second and third bits:

E_{1,1} = \log \frac{1 + \tanh(M_{1,2}/2)\tanh(M_{1,3}/2)}{1 - \tanh(M_{1,2}/2)\tanh(M_{1,3}/2)} = \log \frac{1 + \tanh(-3.6830/2)\tanh(-4.0430/2)}{1 - \tanh(-3.6830/2)\tanh(-4.0430/2)} = 3.1542

Similarly, the extrinsic probability from the first check node to the second bit node depends on the probabilities of the first and third bits:

E_{1,2} = \log \frac{1 + \tanh(M_{1,1}/2)\tanh(M_{1,3}/2)}{1 - \tanh(M_{1,1}/2)\tanh(M_{1,3}/2)} = \log \frac{1 + \tanh(9.9115/2)\tanh(-4.0430/2)}{1 - \tanh(9.9115/2)\tanh(-4.0430/2)} = -4.0402

And the extrinsic probability from the first check node to the third bit node depends on the LLRs sent from the first and second bit nodes to the first check node:

E_{1,3} = \log \frac{1 + \tanh(M_{1,1}/2)\tanh(M_{1,2}/2)}{1 - \tanh(M_{1,1}/2)\tanh(M_{1,2}/2)} = \log \frac{1 + \tanh(9.9115/2)\tanh(-3.6830/2)}{1 - \tanh(9.9115/2)\tanh(-3.6830/2)} = -3.6810

Next, the second check node connects to the first, fourth, and fifth bit nodes, and so the extrinsic LLRs are

E_{2,1} = \log \frac{1 + \tanh(M_{2,4}/2)\tanh(M_{2,5}/2)}{1 - \tanh(M_{2,4}/2)\tanh(M_{2,5}/2)} = \log \frac{1 + \tanh(-7.6738/2)\tanh(-2.5295/2)}{1 - \tanh(-7.6738/2)\tanh(-2.5295/2)} = 2.5237

E_{2,4} = \log \frac{1 + \tanh(M_{2,1}/2)\tanh(M_{2,5}/2)}{1 - \tanh(M_{2,1}/2)\tanh(M_{2,5}/2)} = \log \frac{1 + \tanh(9.9115/2)\tanh(-2.5295/2)}{1 - \tanh(9.9115/2)\tanh(-2.5295/2)} = -2.5289

E_{2,5} = \log \frac{1 + \tanh(M_{2,1}/2)\tanh(M_{2,4}/2)}{1 - \tanh(M_{2,1}/2)\tanh(M_{2,4}/2)} = \log \frac{1 + \tanh(9.9115/2)\tanh(-7.6738/2)}{1 - \tanh(9.9115/2)\tanh(-7.6738/2)} = -7.5724

Similarly, for the remaining CNs,

E_{3,2} = -7.6737;  E_{3,4} = -3.6830;  E_{3,6} = 3.6647;
E_{4,3} = -2.5295;  E_{4,5} = -4.0430;  E_{4,6} = -4.0430.

To check for a valid code word, we calculate the estimated posterior probabilities for each bit, make a hard decision, and check the syndrome s. The first bit has extrinsic LLRs from the first and second checks and an intrinsic LLR from the channel; the total LLR is their sum:

L_1^{total} = L_1 + E_{1,1} + E_{2,1} = 9.9115 + 3.1542 + 2.5237 = 15.5894.

Here the channel LLR is positive, indicating that the first bit is a 0, and both extrinsic LLRs are also positive, so this decision is reinforced. Repeating for the second to sixth bits gives:

L_2^{total} = L_2 + E_{1,2} + E_{3,2} = −3.6830 − 4.0402 − 7.6737 = −15.3969;
L_3^{total} = L_3 + E_{1,3} + E_{4,3} = −4.0430 − 3.6810 − 2.5295 = −10.2535;
L_4^{total} = L_4 + E_{2,4} + E_{3,4} = −7.6738 − 2.5289 − 3.6830 = −13.8857;
L_5^{total} = L_5 + E_{2,5} + E_{4,5} = −2.5295 − 7.5724 − 4.0430 = −14.1449;
L_6^{total} = L_6 + E_{3,6} + E_{4,6} = 16.7415 + 3.6647 − 4.0430 = 16.3632.

The hard decision on the total LLRs gives

\hat{c} = [0 1 1 1 1 0].

To check whether \hat{c} is a valid code word, consider

s = \left( \hat{c} H^T \right) \bmod 2 = \left( [0\ 1\ 1\ 1\ 1\ 0] \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix} \right) \bmod 2 = [0\ 0\ 0\ 0]

The decoding stops because s = 0, and the returned \hat{c} is a valid code word.

8.7.3 Sum–Product and Min-Sum Algorithms for Decoding of Rate 1/2 LDPC Codes Using MATLAB

The following MATLAB program and functions are written and used to decode rate 1/2 LDPC codes using the sum–product and min-sum algorithms for different SNRs.

Program 8.7 MATLAB program for LDPC decoding using log domain sum–product algorithm

Step 2 of the min-sum algorithm

The MATLAB function for min-sum is the same as the log sum–product function, with Step 2 of the log sum–product function replaced by the corresponding min-sum program segment.
For example, consider the parity check matrix of Example 8.13; c = [0 1 1 1 1 0] is a valid code word for that parity check matrix. When this code word is sent over an AWGN channel at E_b/N_0 = 2 dB, decoding of the received vector using the above MATLAB program and functions has yielded \hat{c} = [0 1 1 1 1 0].

8.8 EXIT Analysis of LDPC Codes

8.8.1 Degree Distribution

An irregular parity check matrix of LDPC codes has columns and rows with
varying weights, i.e., a Tanner graph has bit nodes and CNs with varying degrees.
Let Dv be the number of different variable node degrees, Dc be the number of
different check node degrees. Then, the following functions can be defined as

\lambda(x) = \sum_{i=2}^{D_v} \lambda_i x^{i-1} = \lambda_2 x + \lambda_3 x^2 + \cdots + \lambda_{D_v} x^{D_v - 1}   (8.50)

\rho(x) = \sum_{i=2}^{D_c} \rho_i x^{i-1} = \rho_2 x + \rho_3 x^2 + \cdots + \rho_{D_c} x^{D_c - 1}   (8.51)

where \lambda_i is the fraction of edges that are connected to degree-i variable (bit) nodes, and \rho_i is the fraction of edges that are connected to degree-i CNs. It should be noted that \lambda_i and \rho_i must satisfy

\sum_{i=2}^{D_v} \lambda_i = 1 and \sum_{i=2}^{D_c} \rho_i = 1

The code rate can be expressed as

r = 1 - \frac{\int_0^1 \rho(x)\,dx}{\int_0^1 \lambda(x)\,dx}   (8.52)

Example 8.14 Find the degree distribution of the following irregular code parity check matrix:

H = \begin{bmatrix} 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \end{bmatrix}

Solution
\lambda_2 = the fraction of edges connected to degree-2 bit nodes = 16/22 = 0.7273
\lambda_3 = the fraction of edges connected to degree-3 bit nodes = 6/22 = 0.2727
\rho_4 = the fraction of edges connected to degree-4 CNs = 12/22 = 0.5455
\rho_5 = the fraction of edges connected to degree-5 CNs = 10/22 = 0.4545

Thus, the irregular code has degree distribution

\lambda(x) = 0.7273x + 0.2727x^2
\rho(x) = 0.5455x^3 + 0.4545x^4
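These edge-perspective fractions are easy to compute mechanically; a small MATLAB sketch, valid for any 0/1 parity check matrix H, might read:

colDeg = sum(H, 1);                     % bit node degrees
rowDeg = sum(H, 2)';                    % check node degrees
E = sum(colDeg);                        % total number of edges
for d = unique(colDeg)
    fprintf('lambda_%d = %.4f\n', d, sum(colDeg(colDeg == d))/E);
end
for d = unique(rowDeg)
    fprintf('rho_%d = %.4f\n', d, sum(rowDeg(rowDeg == d))/E);
end

For the H of this example, this prints \lambda_2 = 0.7273, \lambda_3 = 0.2727, \rho_4 = 0.5455, and \rho_5 = 0.4545.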

8.8.2 Ensemble Decoding Thresholds

The decoding threshold, in terms of the noise standard deviation \sigma^*, of a given degree distribution for iterative sum–product or min-sum decoding is defined as the supremum of the channel noise levels for which the probability of decoding error goes to zero as the number of iterations tends to infinity. Thus, the threshold can be expressed as

\sigma^* = \sup\left\{\sigma > 0 : \lim_{i \to \infty} p_b^i(\sigma) = 0\right\}   (8.53)

If \sigma \le \sigma^*, p_b^i(\sigma) converges to zero; otherwise, it converges to a value greater than zero.
The stability condition for the AWGN channel is given by [11]

\lambda'(0)\rho'(1) < \exp\left(\frac{1}{2\sigma^2}\right)   (8.54)

whereas the stability condition for the uncorrelated Rayleigh fading channel with SI is given by

\lambda'(0)\rho'(1) < 1 + \frac{1}{2\sigma^2}   (8.55)

The threshold value \sigma^*, the maximum allowed value \sigma_{max}, and the corresponding E_b/N_0 values on the binary AWGN channel for various regular code parameters are given in Table 8.1 [12].

Table 8.1 Threshold values on the binary AWGN channel for various regular code parameters

d_v   d_c   Rate    \sigma^*   E_b/N_0 (dB)   \sigma_{max}   (E_b/N_0)_{max} (dB)
3     6     0.5     0.88        1.1103        0.979           0.1843
3     5     0.4     1.0         0             1.148          −1.1988
3     4     0.25    1.26       −2.0074        1.549          −3.8010
4     8     0.5     0.83        1.6184        0.979           0.1843
4     6     0.333   1.01       −0.0864        1.295          −2.2454
5     10    0.5     0.79        2.0475        0.979           0.1843

8.8.3 EXIT Charts for Irregular LDPC Codes in Binary Input AWGN Channels

Under the consistent Gaussian assumption, the mutual information I_{A,V} between the VN (a priori) inputs and the code bit associated with that VN can be computed by using the following approximation [11] for Eq. (6.20b). Thus,

I_{A,V} = J(\sigma) = \begin{cases} -0.0421061\sigma^3 + 0.209252\sigma^2 - 0.00640081\sigma & 0 \le \sigma < 1.6363 \\ 1 - \exp\left(0.00181491\sigma^3 - 0.142675\sigma^2 - 0.0822054\sigma + 0.0549608\right) & 1.6363 \le \sigma < 10 \\ 1 & \sigma \ge 10 \end{cases}   (8.56)
The approximation for the inverse function \sigma = J^{-1}(I_{A,V}) is

\sigma = J^{-1}(I_{A,V}) \approx \begin{cases} 1.09542\, I_{A,V}^2 + 0.214217\, I_{A,V} + 2.33727\sqrt{I_{A,V}} & 0 \le I_{A,V} \le 0.3646 \\ -0.706692 \log_e\left(0.386013\left(1 - I_{A,V}\right)\right) + 1.75017\, I_{A,V} & 0.3646 < I_{A,V} < 1 \end{cases}   (8.57)

Using J(\sigma), the EXIT chart I_{E,V} of an irregular code describing the variable node function can be computed as follows:

I_{E,V} = \sum_{i=2}^{D_v} \lambda_i J\left(\sqrt{(i-1)\left[J^{-1}(I_{A,V})\right]^2 + \sigma_{ch}^2}\right)   (8.58)

where i is the variable node degree, I_{A,V} is the mutual information of the message entering the variable node with the transmitted code word, and \sigma_{ch}^2 = 8R\,E_b/N_0.
The EXIT chart I_{E,C} of an irregular code describing the check node function can be computed as follows:

I_{E,C} = \sum_{i=2}^{D_c} \rho_i \left[1 - J\left(\sqrt{i-1}\; J^{-1}\left(1 - I_{A,C}\right)\right)\right]   (8.59)

where i is the check node degree, and I_{A,C} is the mutual information of the message entering the check node with the transmitted code word.
In order for the decoding to converge to a vanishingly small probability of error,
the EXIT chart of the VN has to lie above the inverse of the EXIT chart for the CNs.
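A direct way to visualize this condition is to evaluate Eqs. (8.56)–(8.59) numerically. The following MATLAB sketch plots I_{E,V} against I_{A,V} together with the inverted check node curve; the degree lists correspond to Code 1 of Example 8.15 below, and the operating point E_b/N_0 = 0.8 dB is an assumed value:

function exit_chart_sketch()
% EXIT curves of an irregular LDPC ensemble via Eqs. (8.56)-(8.59).
degV = [2 3 4 6];  lamV = [0.33241 0.24632 0.11014 0.31112];
degC = [6 7];      rhoC = [0.76611 0.23389];
R = 0.5; EbNo_dB = 0.8;                    % assumed operating point
sig_ch2 = 8 * R * 10^(EbNo_dB/10);         % sigma_ch^2 = 8R*Eb/No
IA = 0.001:0.002:0.999;
IEV = zeros(size(IA)); IEC = zeros(size(IA));
for n = 1:length(IA)
    s = Jinv(IA(n));
    for t = 1:length(degV)                 % Eq. (8.58)
        IEV(n) = IEV(n) + lamV(t)*Jfun(sqrt((degV(t)-1)*s^2 + sig_ch2));
    end
    for t = 1:length(degC)                 % Eq. (8.59)
        IEC(n) = IEC(n) + rhoC(t)*(1 - Jfun(sqrt(degC(t)-1)*Jinv(1-IA(n))));
    end
end
plot(IA, IEV, IEC, IA); xlabel('I_{AV}, I_{EC}'); ylabel('I_{EV}, I_{AC}');
end

function I = Jfun(s)                       % Eq. (8.56)
if s < 1.6363
    I = -0.0421061*s^3 + 0.209252*s^2 - 0.00640081*s;
elseif s < 10
    I = 1 - exp(0.00181491*s^3 - 0.142675*s^2 - 0.0822054*s + 0.0549608);
else
    I = 1;
end
end

function s = Jinv(I)                       % Eq. (8.57)
if I <= 0.3646
    s = 1.09542*I^2 + 0.214217*I + 2.33727*sqrt(I);
else
    s = -0.706692*log(0.386013*(1 - I)) + 1.75017*I;
end
end

Decoding converges at the chosen E_b/N_0 if the plotted VN curve stays above the inverted CN curve over the whole interval.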
Example 8.15 Consider the following rate 1/2 irregular LDPC codes with good
degree distributions for a binary AWGN channel given in [13].

Code 1

\lambda(x) = 0.33241x + 0.24632x^2 + 0.11014x^3 + 0.31112x^5
\rho(x) = 0.76611x^5 + 0.23389x^6

with a decoding EXIT threshold of E_b/N_0 = 0.6266 dB.

Code 2

\lambda(x) = 0.19606x + 0.24039x^2 + 0.00228x^5 + 0.05516x^6 + 0.16602x^7 + 0.04088x^8 + 0.01064x^9 + 0.00221x^{27} + 0.28636x^{29}
\rho(x) = 0.00749x^7 + 0.99101x^8 + 0.0015x^9

with a decoding EXIT threshold of E_b/N_0 = 0.2735 dB.

The EXIT charts of the two codes are shown in Fig. 8.7. From Fig. 8.7, it can be observed that Code 2, with the lower threshold, has a better fit between the variable node and check node EXIT curves.
Good degree distributions for rate 1/2 and 1/3 irregular LDPC codes for uncorrelated Rayleigh fading channels can be found in [14].


Fig. 8.7 EXIT charts for two irregular LDPC codes with different degree distributions on a binary
AWGN channel


Fig. 8.8 BER performance of sum–product and min-sum decoding algorithms

8.9 Performance Analysis of LDPC Codes

8.9.1 Performance Comparison of Sum–Product and Min-Sum Algorithms for Decoding of Regular LDPC Codes in AWGN Channel

The BER performance of the sum–product and min-sum LDPC decoding algorithms is evaluated through a computer simulation, assuming that the channel adds white Gaussian noise to the code generated by a (256, 3, 6) regular parity check matrix. In this simulation, four hundred frames, each of length 256, and three iterations are used. The BER performance of the sum–product and min-sum algorithms is shown in Fig. 8.8.

8.9.2 BER Performance Comparison of Regular and Irregular LDPC Codes in AWGN Channel

The performance of rate 1/2 regular and irregular codes having the same length is
evaluated through a computer simulation. The BER performance of the two codes is
shown in Fig. 8.9.


Fig. 8.9 BER performance of rate 1/2 regular and irregular LDPC codes using the min-sum decoding algorithm

From Fig. 8.8, it is observed that there is no significant difference between the BER performance of the sum–product and the min-sum algorithms.
The irregular codes can have improved thresholds for long codes but with an
error floor at higher BER than for regular codes of the same rate and length.

8.9.3 Effect of Block Length on the BER Performance of LDPC Codes in AWGN Channel

The effect of block length on the performance of LDPC codes is illustrated through a computer simulation. In this experiment, two rate-1/2 irregular codes of block lengths 256 and 512 are considered, white Gaussian noise is added to them, and the noisy codes are decoded using the min-sum decoding algorithm with 10 iterations. The BER performance of the two codes is shown in Fig. 8.10.


Fig. 8.10 BER performance of two rate-1/2 irregular LDPC codes using the min-sum algorithm for decoding in AWGN channel

8.9.4 Error Floor Comparison of Irregular LDPC Codes of Different Degree Distribution in AWGN Channel

The error floor of an LDPC code is characterized by the phenomenon that, as the SNR continues to increase, the error probability suddenly drops at a rate much slower than that in the region of low-to-moderate SNR. It can be approximated by [15]

BER_{ef} \approx \frac{2}{N} \cdot \frac{\left(\lambda_2 \rho'(1)\right)^2}{4}\, Q\left(\sqrt{\frac{4RE_b}{N_0}}\right)   (8.60)

with the constraint \lambda_2 \rho'(1) = E \exp\left(\frac{1}{2\sigma^2}\right), where E varies from 0 to 1: E = 1 for the traditional optimized degree distributions, and E is greater than zero but less than 1 for constrained degree distributions. N is the length of the code, and R is the code rate. A trade-off between the threshold and the error floor can be achieved with the constrained distributions.
Example 8.16 Consider the following rate 1/4 irregular LDPC codes with optimal
degree distribution and constrained degree distributions given in [15].

Code 1: Traditional code with optimal degree distribution with E = 1:

\lambda(x) = 0.431x + 0.2203x^2 + 0.0035x^3 + 0.0324x^5 + 0.1587x^6 + 0.1541x^9
\rho(x) = 0.0005x^2 + 0.9983x^3 + 0.0012x^4

Code 2: Code with constrained degree distribution with E = 0.19:

\lambda(x) = 0.0872x + 0.865x^2 + 0.0242x^3 + 0.0032x^5 + 0.0027x^6 + 0.0127x^9
\rho(x) = 0.0808x^2 + 0.8945x^3 + 0.0247x^4

Code 3: Code with constrained degree distribution with E = 0.02:

\lambda(x) = 0.0086x + 0.9711x^2 + 0.0006x^3 + 0.0059x^5 + 0.011x^6 + 0.0028x^9
\rho(x) = 0.0118x^2 + 0.9332x^3 + 0.055x^4

The error floor BER of the three codes is evaluated using Eq. (8.60) and shown in Fig. 8.11 along with the error floor region.
From Fig. 8.11, it can be observed that the codes with constrained degree distributions have yielded improved error floor performance. This indicates that a balance between threshold and error floor BER can be obtained.
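Evaluating Eq. (8.60) for one of these codes is a few lines of MATLAB; in the sketch below the code length N = 10,000 is an assumed value, since Eq. (8.60) needs a concrete N:

N = 10000; R = 0.25;                      % assumed length; rate-1/4 codes
lambda2 = 0.431;                          % Code 1: coefficient of x in lambda(x)
dRho1 = 2*0.0005 + 3*0.9983 + 4*0.0012;   % rho'(1) for Code 1
EbNo = 10.^((0:0.5:5)/10);
Qf = @(x) 0.5*erfc(x/sqrt(2));            % Gaussian Q-function
BERef = (2/N) * (lambda2*dRho1)^2/4 * Qf(sqrt(4*R*EbNo));
semilogy(0:0.5:5, BERef); xlabel('Eb/No, dB'); ylabel('BER_{ef}');

Substituting the \lambda_2 and \rho(x) of Codes 2 and 3 reproduces the corresponding floor curves.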


Fig. 8.11 The error floor BER of the three codes



8.10 Problems

1. Plot the Tanner graph for the following parity check matrix H. Show that the girth of the Tanner graph is 6:

H = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 \end{bmatrix}

2. Find the girth of the Tanner graph (check nodes on top, bit nodes below) corresponding to the following parity check matrix:

H = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \end{bmatrix}

3. Determine the code word for the LDPC code with the following parity check matrix using the efficient encoding method when the message sequence is s = [1 0 0 0 0 0 0]:

H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \end{bmatrix}

4. A code word is generated using the following parity check matrix:

H = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 \end{bmatrix}

When the code word is sent through a BEC, the received signal is

y = [0 0 1 e e e]

Decode the received vector to recover the erased bits.


5. Consider the code word generated in Example 8.5. If it is sent through AWGN
channel, the received vector after detector hard decision is
y ¼ ½1 1 0 1 0 1 0 0 0 1. Decode the received vector using bit-flipping algorithm
and comment on the result.
6. A code word is generated with the following parity check matrix:

H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix}

When the code word is sent through a BSC with crossover probability \epsilon = 0.2, the received signal is

y = [0 0 1 0 0 0]

Decode the received vector using the log domain sum–product algorithm.


7. Consider the code word generated in Example 8.6. If it is sent through AWGN
channel with noise density No = 0.3981, the received vector is y ¼ ½0:7271
2:0509  0:9209  0:8185 0:2766  0:2486  0:2497  1:0237 1:2065
1:1102. Decode the received vector using sum–product algorithm and com-
ment on the result.
8. Repeat the problem 6 using min-sum decoding algorithm and comment on the
result
9. Find the degree distribution of the following irregular code parity check matrix

H = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}

10. Obtain an EXIT chart for a rate-0.49303 irregular LDPC code with the following degree distribution at \sigma^2 = 0.97869:

\lambda(x) = 0.135x + 0.2816x^2 + 0.2576x^3 + 0.0867x^{33}
\rho(x) = x^{10}

8.11 MATLAB Exercises

1. Write a MATLAB program to generate a parity check matrix H having a normalized degree distribution (from the node perspective) defined as

\lambda(x) = \sum_{i=1}^{\lambda_{max}} \lambda_i x^i and \rho(x) = \sum_{i=1}^{\rho_{max}} \rho_i x^i,

with \lambda = [0 0.4994 0.3658 0 0 0 0.0581 0 0 0 0 0 0.0767],
\rho = [0 0 0 0 0 0 1].
2. Write a MATLAB program to compare the performance of LDPC codes in
AWGN and Rayleigh fading channels.

References

1. Gallager, R.G.: Low-density parity-check codes. IRE Trans. Inf. Theor. 21–28 (1962)
2. Fan, J.L.: Array codes as low-density parity-check codes. In: Proceedings of 2nd International
Symposium on Turbo Codes and Related Topics, Brest, France, 4–7 Sept 2000, pp. 543–546
3. Honary, B., et al.: On construction of low density parity check codes. In: 2nd International
Workshop on Signal Processing for Wireless Communication (SPWC 2004), London, UK,
2–4 June 2004
4. Ammar, B., Honary, B., Xu, Y., Lin, S.: Construction of low-density parity-check codes on balanced incomplete block designs. IEEE Trans. Inf. Theor. 50(6), 1257–1268 (2004)
5. Miladinovic, N., Fossorier, M.: Systematic recursive construction of LPDC codes. IEEE
Commun. Lett. 8(5), 302–304 (2004)

6. Johnson, S.J.: Iterative Error Correction: Turbo, Low-Density Parity-Check and Repeat-Accumulate Codes. Cambridge University Press, Cambridge (2010)
7. Xiao, Y., Lee, M.-H.: Low complexity MIMO-LDPC CDMA systems over multipath
channels. IEICE Trans. Commun. v E89-B(5), 1713–1717 (2006)
8. MacKay, D.J.C., Neal, R.M.: Near Shannon limit performance of low density parity check
codes. Electron. Lett. 33, 457–458 (1997)
9. Richardson, T., Urbanke, R.: Efficient encoding of low-density parity-check codes. IEEE
Trans. Inf. Theor. 47(2), 638–656 (2001)
10. Ryan, W.E., Lin, S.: Channel Codes Classic and Modern. Cambridge University Press,
Cambridge (2009)
11. ten Brink, S., Kramer, G., Ashikhmin, A.: Design of low-density parity-check codes for
modulation and detection. IEEE Trans. Commun. 52(4), 670–678 (2004)
12. Richardson, T.J., Urbanke, R.L.: The capacity of low-density parity-check codes under
message-passing decoding. IEEE Trans. Inf. Theor. 47(2), 599–618 (2001)
13. Richardson, T.J., Amin Shokrollahi, M., Urbanke, R.L.: Design of capacity-approaching
irregular low-density parity-check codes. IEEE Trans. Inf. Theor. 47(2), 619–637 (2001)
14. Hou, J., Siegel, P.H., Milstein, L.B.: Performance analysis and code optimization of low
density parity-check codes on rayleigh fading channels. IEEE J. Sel. Areas Commun. 19(5),
924–934 (2001)
15. Johnson, S.J., Weller, S.R.: Constraining LDPC degree distributions for improved error floor performance. IEEE Commun. Lett. 10(2), 103–105 (2006)
Chapter 9
LT and Raptor Codes

To partially compensate for the inefficiency of random codes, we can use Reed–Solomon codes; these codes can be decoded from a block with the maximum possible number of erasures in time quadratic in the dimension. In practice, however, these algorithms are often too complicated, and quadratic running times are still too large for many applications. Hence, a new class of codes is needed to construct robust and reliable transmission schemes, and such a class of codes is known as fountain codes. Fountain codes should possess a fast encoder and decoder to work in practice. Luby invented the first class of universal fountain codes [1], in which the decoder is capable of recovering the original symbols from any set of output symbols whose size is close to optimal, with high probability. The codes in this class are called LT codes. For many applications, it is important to construct universal fountain codes which have fast decoding algorithms and for which the average weight of an output symbol is a constant; such a class of fountain codes is called Raptor codes [2]. The basic idea behind Raptor codes is a pre-coding of the input symbols prior to the application of an appropriate LT code. This chapter discusses the encoding and decoding of LT and Raptor codes.

9.1 LT Codes Design

The rateless LT codes generate a limitless number of output symbols by encoding a finite number of message symbols.
By receiving a given number of output symbols, each receiver can decode them successfully. The LT codes are the first universal erasure-correcting codes that provide successful communication over a binary erasure channel (BEC) for any erasure probability. The LT codes have various applications and advantages.

In particular, a transmitter having an LT code uses a single code for efficient transmission in broadcast networks, where a single transmitter transfers a message to multiple receivers simultaneously over different channels.
The transmitter, with an information message u consisting of k source symbols, generates an infinite number of encoding symbols, which are broadcast successively. Due to the property of an ideal fountain code, the receiver is able to reconstruct the entire source message reliably from any k received encoding symbols. If symbols are erased, an ideal fountain code receiver will just wait for k encoding symbols before reconstructing the information message.
In practical implementations of LT codes, a random number generator is employed to determine the degree and neighbors of an encoding symbol. However, the key to making LT codes work well is the degree distribution used in the encoding procedure.

9.1.1 LT Degree Distributions

Choosing a good degree distribution is the key to making LT codes work well in practice. A good degree distribution should meet the following two requirements: first, as few encoding symbols as possible should on average be required to ensure successful recovery of the source symbols; second, the average degree of the encoding symbols should be as low as possible.
Ideal Soliton Distribution [1]
Addition of input symbols to the ripple at the same rate as they are processed is the basic property required of a good degree distribution, and hence the name soliton distribution, as a soliton wave balances dispersion and refraction perfectly. The ideal soliton distribution is given by

\mu_{ISD}(i) = \begin{cases} 1/k & i = 1 \\ 1/(i(i-1)) & i = 2, 3, \ldots, k \end{cases}   (9.1)

The ideal soliton distribution ensures that at each subsequent step, all the release probabilities are identical to 1/k. When the number of received encoding symbols equals k, there is one expected ripple generated at each processing step, and the source symbols can be ideally recovered after k processing steps. In practice, the ideal soliton distribution works poorly.

Robust Soliton Distribution [1]
Define R_{SD} = c \cdot \ln(k/\epsilon)\sqrt{k}. In this distribution, \mu_{RSD} is computed as follows:

\tau(i) = \begin{cases} R_{SD}/(ik) & i = 1, 2, \ldots, \mathrm{round}(k/R_{SD}) - 1 \\ R_{SD}\ln(R_{SD}/\epsilon)/k & i = \mathrm{round}(k/R_{SD}) \\ 0 & \text{else} \end{cases}   (9.2)

\beta = \sum_{i=1}^{k} \left(\mu_{ISD}(i) + \tau(i)\right)

\mu_{RSD}(i) = \left(\mu_{ISD}(i) + \tau(i)\right)/\beta, \quad i = 1, 2, \ldots, k   (9.3)

The robust soliton distribution is an improvement of the ideal soliton distribution which is not only viable but practical too.
The term R_{SD} above can be thought of as the size of the ripple, i.e., the expected number of output symbols of degree one at each decoding step. The ripple is the set of covered input symbols that have not yet been processed. This value is predetermined by the relation

R_{SD} = c \cdot \ln(k/\epsilon)\sqrt{k},   (9.4)

where c and \epsilon are two parameters: c controls the mean of the degree distribution, and \epsilon is the allowable failure probability of the decoder to recover the source symbols. The smaller the value of c, the greater the probability of low degrees.
In the encoding for LT codes, the degree distribution of the information symbols and of the output symbols should be a uniform distribution and an RSD, respectively.
Luby suggested that the number of received encoded symbols be n = \beta k. Then, the corresponding decoding overhead is

\epsilon = n/k - 1 = \beta - 1   (9.5)

The average degree increases logarithmically with the code dimension. As the code dimension k gets larger, the overhead decreases; this is because \beta is a decreasing function of k, but the sparseness of the generator matrix becomes lower and lower.

9.1.2 Important Properties of the Robust Soliton Distribution


Property 1: The number of encoding symbols is n = k + O\left(\sqrt{k}\,\ln^2(k/\epsilon)\right).
Property 2: The average degree of an encoding symbol is D = O(\ln(k/\epsilon)).
Property 3: The decoder fails to recover the data with probability at most \epsilon from a set of n encoding symbols.

One can refer to [1] for proofs of these properties.

9.1.3 LT Encoder

The stepwise procedure to produce infinite output symbols from k input symbols {S_1, S_2, \ldots, S_k} is as follows:
Step 1: Choose an output degree d randomly from a degree distribution \rho(d).
Step 2: Select d distinct input symbols uniformly at random from {S_1, S_2, \ldots, S_k}.
Step 3: Perform exclusive-OR of these d input symbols to obtain the output symbol

c_i = S_{i,1} \oplus S_{i,2} \oplus \cdots \oplus S_{i,d}

A generator matrix G can also be defined such that the output symbols can be expressed as

c = s \cdot G,   (9.6)

where s denotes the input vector. Modulo-2 addition is used during the matrix multiplication.
The following MATLAB function generates the matrix G by using robust soliton
density.

Program 9.1 MATLAB function to generate G using robust soliton density
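One possible implementation along these lines is sketched below (the function name, argument order, and parameter names are illustrative, not necessarily those of Program 9.1):

function G = gen_lt_G(k, n, c, eps0)
% Sketch: k x n LT generator matrix with columns drawn from the
% robust soliton distribution of Eqs. (9.1)-(9.3).
R = c * log(k/eps0) * sqrt(k);            % Eq. (9.4)
mu = [1/k, 1./((2:k).*(1:k-1))];          % ideal soliton, Eq. (9.1)
tau = zeros(1, k);                        % spike term, Eq. (9.2)
kR = round(k/R);
tau(1:kR-1) = R ./ ((1:kR-1)*k);
tau(kR) = R * log(R/eps0) / k;
p = (mu + tau) / sum(mu + tau);           % robust soliton, Eq. (9.3)
G = zeros(k, n);
for j = 1:n
    d = find(rand <= cumsum(p), 1);       % sample an output degree
    G(randperm(k, d), j) = 1;             % d distinct neighbors
end
end

Encoding then follows Eq. (9.6) as c = mod(s*G, 2).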



9.1.4 Tanner Graph of LT Codes

The Tanner graph of LT codes is similar to the Tanner graph used in LDPC codes, except that the check nodes and variable nodes of LDPC codes are replaced with the output nodes and input nodes of LT codes, as shown in Fig. 9.1.

9.1.5 LT Decoding with Hard Decision

The decoder uses the Decoder recovery rule [1] to repeatedly recover input sym-
bols. The Decoder recovery rule is as follows:
If there is at least one encoding symbol that has exactly one neighbor then the neighbor can
be recovered immediately since it is a copy of the encoding symbol. The value of the
recovered input symbol is exclusive-ORed into any remaining encoding symbols that also
have that input symbol as a neighbor, the recovered input symbol is removed as a neighbor
from each of these encoding symbols and the degree of each such encoding symbol is
decreased by one to reflect this removal.

Based on the above rule, the stepwise decoding process can be described as
follows:
Step 1: Find an output symbol yi connected to only one input symbol mj .
Step 2: Set mj ¼ yi .
Step 3: Exclusive-OR mj to all the output symbols connected to mj .
Step 4: Remove all the edges connected to mj .
Step 5: Repeat steps 1 to 4 until all input symbols are recovered.

Fig. 9.1 Tanner graph of LT codes: input nodes (message symbols) connected to output nodes (encoding symbols)

The output degree distribution is a critical part of LT codes. If there is no output


symbol with degree one during the iteration, the decoding process will halt, which
indicates a decoding failure. Thus, optimal output degree is required to ensure a
successful decoding. In Luby’s paper, two output degree distributions, i.e., ideal
soliton distribution and robust soliton distribution, are presented [1].
Ideal soliton distribution adds only one input node to the ripple each iteration
round, consuming the fewest output symbols to recover all of the input symbols.
However, it performs poorly in practice because the probability of ripple vanishing,
i.e., the error probability or the decoding failure rate is high. Robust soliton dis-
tribution attempts to lower the error probability by slightly increasing the proba-
bility of degree one. The following example illustrates the decoding process.
Example 9.1 Consider the Tanner graph of an LT code with three message bits and four output bits, and suppose the received bits vector is y = [1 0 1 1]. Decode the received bits vector to obtain the message bits.
to obtain the message bits.
Solution The boxes represent the message bits, while circles represent the output bits in the factor graph. There are three message bits and four output bits, which have values y_1 y_2 y_3 y_4 = [1 0 1 1]. During the first iteration, the only output bit that is connected to exactly one message bit is the first output bit (see Fig. a). This value is copied to m_1, and the output bit is deleted (see Fig. b); then the new value of m_1 gets added to y_2 and y_4. This disconnects m_1 from the graph (see Fig. c). At the start of the second iteration, y_4 is connected to the single message bit m_2. Now one sets m_2 equal to y_4 (see Fig. d) and then adds this value to y_2 and y_3. This disconnects m_2 from the graph (see Fig. e). Finally, one sees that the output bits connected to m_3 are equal, as expected, and can be used to restore m_3 (see Fig. f).

(Panels (a)–(f) illustrate the successive decoding steps described above.)

For success of the LT decoder with high probability, the decoding graph of LT codes needs to have at least k ln(k) edges. In [2], Raptor codes are introduced to relax this lower bound on the number of edges in the decoding graph.

9.1.6 Hard-Decision LT Decoding Using MATLAB

The following example illustrates decoding of LT codes using MATLAB.



Example 9.2 Write a MATLAB program to implement hard-decision LT decoding assuming the received vector y = [1 0 1 1 0 1 1] when the following generator matrix is used to encode the code word:

G = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{bmatrix}

Solution The following MATLAB program decodes the received vector y. The output of the program gives the decoded vector [0 0 1 1].

Program 9.2 MATLAB program for Hard Decision LT Decoding
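A compact sketch of such a peeling decoder, with illustrative names, is given below; it assumes the received vector satisfies y = mod(s*G, 2) with no erasures:

function m = lt_harddecode(y, G)
% Hard-decision (peeling) LT decoding sketch; -1 marks unknown bits.
k = size(G, 1);
m = -ones(1, k);
while any(m < 0)
    j = find(sum(G, 1) == 1, 1);          % output with a single neighbor
    if isempty(j), error('decoding stalled'); end
    i = find(G(:, j), 1);                 % its remaining input symbol
    m(i) = y(j);                          % Steps 1-2: copy the value
    adj = find(G(i, :));                  % outputs connected to input i
    y(adj) = mod(y(adj) + m(i), 2);       % Step 3: XOR it out
    G(i, adj) = 0;                        % Step 4: remove the edges
end
end

Calling lt_harddecode([1 0 1 1 0 1 1], G) with the G of this example returns [0 0 1 1].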



9.1.7 BER Performance of LT Decoding over BEC Using MATLAB

At the receiving end, a receiver collects n = k(1 + \epsilon) output symbols, where \epsilon is called the overhead. Note that several erasures need to be discarded until n unerased output symbols are obtained. The following MATLAB program illustrates the BER performance of LT codes over the BEC for different overheads and for different erasure probabilities.

Program 9.3 MATLAB program for the BER Performance of LT Decoding over
BEC

The overhead versus BER performance of LT decoding obtained from the above
program for three different erasure probabilities is shown in Fig. 9.2.


Fig. 9.2 BER performance of LT decoding over BEC for erasure probabilities P_e = 0.1, 0.2, and 0.3

From Fig. 9.2, it is observed that the BER decreases as the overhead increases for a fixed erasure probability; further, the BER for a low erasure probability is less than that for a high erasure probability.

9.2 Systematic LT Codes

Many researchers have endeavored to improve the performance of LT codes to protect data over the wireless Internet, where fading, noise, and packet erasures are encountered. As the error correction capability of LT codes is improved, the complexity of these schemes tends to increase. The soft decoding of LT codes uses the probabilistic decoding technique of low density parity check (LDPC) codes [3].
Hence, to improve the LT code's performance in hostile wireless channels, a systematic LT code as shown in Fig. 9.3 has been suggested in [4] by expanding the LT code's [K \times N] generator matrix with the aid of a unity matrix having a size of [K \times K].
The relation between the systematic LT generator matrix G and the parity check matrix H is as follows:
Consider a generator matrix G_{K \times N} = [I_{K \times K} / A_{K \times M}], where I is an identity matrix having a size of [K \times K] and A is a non-singular matrix having a size of [K \times M].

Fig. 9.3 The systematic LT generator matrix

Then, the parity check matrix is calculated as H = [A^T / I'], where A^T is the transpose of A and I' is an identity matrix having a size of [M \times M] [5], where N = K + M is the number of columns in G and K is the number of rows in G.

9.2.1 Systematic LT Codes Decoding

The implementation of the LT decoding process is similar to that of the classic LDPC decoding procedure. The LT decoder's soft values are set to values corresponding to the demodulator's soft output. The decoder's soft values, which denote the log likelihood ratios (LLRs), are passed from the check nodes to the variable nodes and vice versa, and are iteratively updated after each decoding iteration.
The LT decoder outputs its tentative hard decision after each iteration and checks whether the product of the corresponding code word and the transpose of the PCM H is equal to zero; if not, the LT decoding process continues iteratively until the output code word becomes legitimate or the maximum affordable number of iterations is exhausted.

9.2.2 BER Performance Analysis of Systematic LT Codes Using MATLAB

9.2.2.1 BER Performance of Systematic LT Codes over BEC

The performance of the systematic LT(1000, 3000) code on the BEC channel with erasure probabilities [0.1 0.2 0.4 0.6 0.8] is illustrated using the following MATLAB program. In this, the robust soliton degree distribution with parameters c = 0.1 and \epsilon = 0.5 is used.
Figure 9.4 shows the performance analysis of the systematic LT code over the BEC having different erasure probabilities P_e.

Program 9.4 MATLAB program for the BER Performance of systematic LT code
in BEC channels

function y = bec_channel(x, e)
%BEC_CHANNEL Simulates a binary erasure channel with erasure probability e;
% an erased position is marked by -1, otherwise 0s and 1s are bits.
y = x;
y(rand(size(abs(x))) < e) = -1;
end

function M = becebest(xHat1b, H1, iter)
% Iterative erasure decoding on the BEC: -1 marks an erased bit.
H = H1;
M = xHat1b;
[N1, N2] = size(H);
E = -ones(N1, N2);                    % check-to-bit messages (-1 = no info)
for i = 1:iter
    for j = 1:N1                      % check node update
        ci = find(H(j, :));           % bits participating in check j
        d  = find(M(ci) ~= -1);       % known bits
        d1 = find(M(ci) == -1);       % erased bits
        if (length(d) >= 2) && (length(d1) == 1)
            % exactly one erasure: its value is the XOR of the known bits
            E(j, ci(d1)) = mod(sum(M(ci(d))), 2);
        else
            E(j, ci(d1)) = -1;
        end
    end
    for j = 1:N2                      % bit node update
        ri = find(H(:, j));           % checks involving bit j
        if M(j) == -1
            for ii = 1:length(ri)
                if E(ri(ii), j) ~= -1
                    M(j) = E(ri(ii), j);   % fill the erasure
                end
            end
        end
    end
end
end

9.2.2.2 BER Performance of Systematic LT Codes over AWGN Channel

The performance of the systematic LT(1000, 2000) code on the AWGN channel is illustrated using the following MATLAB program. In this, the robust soliton degree distribution with parameters c = 0.1 and \epsilon = 0.5 is used.
Figure 9.5 shows the performance analysis of the systematic LT code over the AWGN channel for different E_b/N_0 values.

Program 9.5 MATLAB program for the BER Performance of systematic LT code
in AWGN channels using BPSK modulation


Fig. 9.4 BER versus 1-Pe performance of the systematic LT code in BEC channels


Fig. 9.5 BER versus E_b/N_0 performance of the systematic LT(1000, 2000) code in AWGN channels using BPSK modulation (1, 2, 4, and 6 decoding iterations)

9.2.2.3 BER Performance of Systematic LT Codes over AWGN-Contaminated BEC

The schematic of the encoding and decoding of LT coding over the AWGN-contaminated BEC is shown in Fig. 9.6.
The performance of the systematic LT(1000, 3000) code on the AWGN-contaminated BEC with erasure probability 0.1 is shown in Fig. 9.7. In this, the robust soliton degree distribution with parameters c = 0.1 and \epsilon = 0.5 is used.

Fig. 9.6 LT encoding and decoding over AWGN-BEC


Fig. 9.7 BER versus E_b/N_0 performance of the systematic LT code in the AWGN-contaminated BEC channel (P_e = 0.1; 1, 2, 4, and 6 decoding iterations)

9.3 Raptor Codes

Raptor codes are a concatenation of an erasure-correcting pre-code and an LT code [2]. Figure 9.8 shows a graphical representation of a Raptor code. The output symbols of the Raptor code are sampled independently from the distribution. Low density parity check (LDPC) codes and Tornado codes are examples of the pre-code. The decoding graph of an LT code with a k-symbol message block should have at least k ln(k) edges, which results in a large overhead or higher complexity of encoding and decoding. Raptor codes reduce the lower bound on the number of edges in the bipartite graph and, hence, recover message symbols at a lower overhead with almost linear complexity. The pre-code of the Raptor code recovers the message symbols that are left undecoded by the LT decoder due to the lower overhead.
Raptor codes are used in commercial systems of Digital Fountain, a Silicon Valley-based startup specializing in fast and reliable delivery of data over heterogeneous networks.
The k input symbols of a Raptor code are used to construct a code word in C consisting of n intermediate symbols, and the output symbols are the symbols generated by the LT code from the n intermediate symbols.
The Raptor code encoding algorithm is as follows: an encoding algorithm for C is used to generate a code word in C corresponding to the given k input symbols. Then, an encoding algorithm for the LT code with distribution \Omega(x) is used to generate the output symbols.
A Raptor code decoding algorithm of length m can recover the k input symbols from any set of m output symbols, with error probability at most 1/k^c for some positive constant c.

Fig. 9.8 Raptor codes: precoding (producing redundant nodes) followed by LT coding



9.4 Problems

1. Suppose the generator matrix G of the LT code is

G = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{bmatrix}

and the received encoded symbols are [1 0 1 1 0 0 0]. Decode the received vector to obtain the message symbols.
2. Show that the number of encoding symbols is n = k + O\left(\sqrt{k}\,\ln^2(k/\epsilon)\right).
3. Show that the average degree of an encoding symbol is D = O(\ln(k/\epsilon)).
4. Show that the decoder fails to recover the data with probability at most \epsilon from a set of n encoding symbols.

9.5 MATLAB Exercises

1. Write a MATLAB program for LT encoding and decoding with non-binary


message symbols using robust Soliton distribution.
2. Write a MATLAB program to generate H matrix for a systematic LT code with
k message symbols and 0.5 overhead.

References

1. Luby, M.: LT codes. In: Proceeding of the 43rd Annual IEEE Symposium on Foundations of
Computer Science, pp. 271–282 (2002)
2. Shokrollahi, A.: Raptor codes. IEEE Trans. Inf. Theor. 52(6), 2551–2567 (2006)
3. Richardson, T.J., Urbanke, R.L.: The capacity of low density parity check codes under
message-passing decoding. IEEE Trans. Inf. Theor. 47(2), 599–618 (2001)
4. Nguyen, T.D., Yang, L.-L., Hanzo, L.: Systematic Luby transform codes and their soft decoding. In: IEEE SiPS'07, pp. 67–72, Shanghai, 17–19 Oct 2007
5. Gallager, R.: Low density parity check codes. IRE Trans. Inf. Theor. 8(1), 21–28 (1962)
Chapter 10
MIMO System

10.1 What Is MIMO?

A channel with multiple antennas at the transmitter and multiple antennas at the receiver is called a multiple-input multiple-output (MIMO) channel, whereas the SISO channel has a single antenna at the transmitter and a single antenna at the receiver. A MIMO channel representation is shown in Fig. 10.1.
The key advantages of a MIMO system are increased reliability obtained through diversity and a higher data rate obtained through spatial multiplexing [1]. These two concepts are used together in MIMO systems.
In a diversity system, the same information is transmitted through multiple transmit antennas and received at multiple receive antennas simultaneously. Since the fading for each link between a pair of transmit and receive antennas is considered to be independent, the same information travels through diverse paths; if one path is weak, a copy of the information received through another path may be good, and hence, the probability of accurate detection of the information increases.
In spatial multiplexing, different information can be transmitted simultaneously over multiple antennas, similar to the idea of an OFDM signal, thereby boosting the system throughput or capacity of the channel.


Fig. 10.1 A MIMO channel

10.2 MIMO Channel Model

10.2.1 The Frequency Flat MIMO Channel

Let h_{ji} be a complex number representing the channel gain between the ith transmit antenna and the jth receive antenna. At a certain time instant, if the symbols {s_1, s_2, \ldots, s_{N_T}} are transmitted via the N_T antennas, then the received signal at antenna j can be expressed as

y_j = \sum_{i=1}^{N_T} h_{ji} s_i + g_j   (10.1)

With i = 1, 2, \ldots, N_T transmitter antennas and j = 1, 2, \ldots, N_R receiver antennas, Eq. (10.1) can be represented in matrix form as

\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{N_R} \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1N_T} \\ h_{21} & h_{22} & \cdots & h_{2N_T} \\ \vdots & \vdots & & \vdots \\ h_{N_R 1} & h_{N_R 2} & \cdots & h_{N_R N_T} \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_{N_T} \end{bmatrix} + \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_{N_R} \end{bmatrix}   (10.2)

or more compactly as

y = Hs + g   (10.3)

The fading coefficients h_{ji} are independent (with respect to both i and j) and identically distributed (i.i.d.). The additive noise terms at the receiver antennas are independent

and identically distributed. It is assumed that the signaling is subject to the average power constraint

E\{\|s\|^2\} \le P   (10.4)

The H matrix contains the channel coefficients that distort the transmitted signal amplitude and phase in the time domain. The channel matrix H is estimated at the receiver, and the transmitter transmits blindly without any knowledge of the channel. If the receiver sends back the channel information to the transmitter, then the transmitter is able to adjust the powers allocated to the antennas.
One attractive merit of MIMO systems is the increased antenna diversity, which can alleviate the detrimental effect of flat fading. In a MIMO system with N_T transmit antennas and N_R receive antennas, if the channels for any pair of transmit–receive antennas are independent and experience flat fading, the maximum or full diversity gain is N_T N_R. A common way of achieving the full diversity is through space–time (ST) coding, which is discussed in the next chapter.

10.2.2 The Frequency-Selective MIMO Channel

In MIMO systems where any transmit–receive link is subject to multipath fading independently and the channel impulse response is characterized by L resolvable paths, the full diversity gain is N_T N_R L [2, 3]. In frequency-selective MIMO channels, OFDM is usually applied to eliminate the ISI and ICI. To achieve full diversity, coding is used across OFDM subchannels, OFDM blocks, and transmit antennas.

10.2.3 MIMO–OFDM System

In broadband wireless systems, the MIMO channels are severely affected by frequency-selective fading or potential multipath fading. This fading effect complicates the design of ST codes because of ISI. To overcome this problem, MIMO can be combined with an OFDM system, which is referred to as MIMO–OFDM. The combination of MIMO and OFDM has the potential of meeting this stringent requirement, since MIMO can improve the capacity and the diversity gain, and OFDM can mitigate the detrimental effects due to multipath fading. The schematic block diagram of the MIMO–OFDM system with N_T transmit antennas, N_R receive antennas, and N-tone OFDM is illustrated in Fig. 10.2. The incoming bit

Fig. 10.2 MIMO OFDM system

stream is first mapped into a number of data symbols by using modulation techniques such as BPSK, QPSK, and QAM. Then, a block of data symbols is encoded into a code word matrix of size N_T \times T and transmitted through the N_T transmit antennas in T OFDM blocks, each block having N subchannels. After appending the cyclic prefix to each OFDM block, the blocks are transmitted through the N_T transmit antennas. After passing through the MIMO channels, the received signals are first sent to the reverse OFDM (cyclic prefix removal, DFT) and then sent to the decoder. If the channel state information (CSI) is available at the receiving side, the optimal ML detection is performed.

10.3 Channel Estimation

In training-based channel estimation, the training symbols or pilot tones used are known to both the transmitter and the receiver. The knowledge of the transmitted pilot symbols at the receiver is exploited to estimate the channel. The block-type pilot

Fig. 10.3 a Block pilot. b Comb pilot



arrangement is shown in Fig. 10.3a, in which pilot symbols are transmitted periodically for channel estimation. The comb-type pilot arrangement is shown in Fig. 10.3b, where the pilots are transmitted at all times but with an even spacing on the subcarriers for channel estimation.
The estimation can be performed by using LS [4–7]. The training symbols for N subcarriers can be represented by the following diagonal matrix, assuming that all subcarriers are orthogonal:
S = \begin{bmatrix} s(0) & 0 & \cdots & 0 \\ 0 & s(1) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & s(N-1) \end{bmatrix}   (10.5)

where s(k) denotes a pilot tone at the kth subcarrier, with E\{s(k)\} = 0, Var\{s(k)\} = \sigma_s^2, k = 0, 1, 2, \ldots, N-1. For a given channel gain H(k) corresponding to the kth subcarrier, the received training signal Y(k) can be represented as

Y \triangleq \begin{bmatrix} Y(0) \\ Y(1) \\ \vdots \\ Y(N-1) \end{bmatrix} = \begin{bmatrix} s(0) & 0 & \cdots & 0 \\ 0 & s(1) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & s(N-1) \end{bmatrix} \begin{bmatrix} H(0) \\ H(1) \\ \vdots \\ H(N-1) \end{bmatrix} + \begin{bmatrix} g(0) \\ g(1) \\ \vdots \\ g(N-1) \end{bmatrix} = SH + g   (10.6)

where g is a noise vector with E\{g(k)\} = 0, Var\{g(k)\} = \sigma_g^2, and k = 0, 1, 2, \ldots, N-1.

10.3.1 LS Channel Estimation

The LS method is well known and widely used for estimation due to its simplicity. The LS channel estimate is represented by

\hat{H}_{LS} = S^{-1} Y   (10.7)

The pilot subcarriers are interpolated to estimate the channel for data symbols.

10.3.2 DFT-Based Channel Estimation

The DFT-based channel estimation technique improves the performance of LS channel estimation by removing the effect of noise outside the maximum channel delay [8].
The IDFT of the channel estimate \{\hat{H}(k)\}_{k=0}^{N-1} is written as

IDFT\{\hat{H}(k)\} = \hat{h}(n) = h(n) + g(n), \quad n = 0, 1, \ldots, N-1   (10.8)

where \hat{H}(k) is the estimate of the channel H at the kth subcarrier, obtained by LS, and g(n) denotes the noise component. If the maximum channel delay is d_{Cd}, then

\hat{h}_{DFT}(n) = \begin{cases} h(n) + g(n) & n = 0, 1, \ldots, d_{Cd} - 1 \\ 0 & \text{otherwise} \end{cases}   (10.9)

and the estimate is transformed back to the frequency domain as follows:

\hat{H}_{DFT}(k) = DFT\{\hat{h}_{DFT}(n)\}   (10.10)

10.3.3 MIMO–OFDM Channel Estimation

Using Eq. (10.7), the LS estimate of the channel between the jth transmit and the ith receive antenna of a MIMO–OFDM system can be expressed as

$$
\hat{H}_{LS}^{(j,i)} = \left( s^{(j)} \right)^{-1} Y^{(i)} \qquad (10.11)
$$

where $s^{(j)}$ is an N × N diagonal matrix with the pilots of the jth transmit antenna as diagonal elements, and $Y^{(i)}$ is the received vector of length N at receive antenna i.

10.3.4 Channel Estimation Using MATLAB

The following MATLAB program uses the built-in MATLAB interpolation function to evaluate the MSE performance of the LS and LS-DFT channel estimation methods. The mean square errors (MSE) of LS and LS-DFT for different values of E_b/N_0 are shown in Fig. 10.4. From Fig. 10.4, it is observed that LS-DFT performs better than the LS method for channel estimation.

Fig. 10.4 E_b/N_0 versus MSE for channel estimation using LS and LS-DFT

Program 10.1 MATLAB program for Channel Estimation Using LS and LS-DFT
methods
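Since the program listing itself does not survive in this copy, the following is a minimal sketch of such a simulation. The OFDM parameters (N = 64 subcarriers, a 6-tap channel, QPSK pilots, number of trials) and all variable names are illustrative assumptions, not the book's original code.

% Minimal sketch (not the original listing): MSE of LS and LS-DFT channel
% estimation for one OFDM symbol with block-type pilots.
N = 64; Ntap = 6; Nsim = 1000;
EbN0dB = 0:5:30; mseLS = zeros(size(EbN0dB)); mseDFT = zeros(size(EbN0dB));
for k = 1:length(EbN0dB)
  sigma2 = 10^(-EbN0dB(k)/10);
  for m = 1:Nsim
    s  = exp(1j*pi/2*randi([0 3],N,1));                 % QPSK pilot tones
    h  = (randn(Ntap,1)+1j*randn(Ntap,1))/sqrt(2*Ntap); % random channel taps
    H  = fft(h,N);                                      % true channel response
    g  = sqrt(sigma2/2)*(randn(N,1)+1j*randn(N,1));
    Y  = s.*H + g;                                      % received pilots, Eq. (10.6)
    Hls  = Y./s;                                        % LS estimate, Eq. (10.7)
    hls  = ifft(Hls);                                   % Eq. (10.8)
    hls(Ntap+1:end) = 0;                                % keep taps within delay, Eq. (10.9)
    Hdft = fft(hls);                                    % Eq. (10.10)
    mseLS(k)  = mseLS(k)  + mean(abs(H-Hls).^2);
    mseDFT(k) = mseDFT(k) + mean(abs(H-Hdft).^2);
  end
end
semilogy(EbN0dB, mseLS/Nsim, '-o', EbN0dB, mseDFT/Nsim, '-s');
xlabel('Eb/N0 (dB)'); ylabel('MSE'); legend('LS-linear','LS-linear DFT');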

10.4 MIMO Channel Decomposition

A MIMO channel can be viewed as a set of independent SISO channels using the singular value decomposition (SVD). The process requires precoding at the transmitter and receive shaping at the receiver, as shown in Fig. 10.5, and hence knowledge of the channel at the transmitter. The H matrix can be written in SVD form as

$$
H = U \Sigma V^H \qquad (10.12)
$$

where U and V are unitary matrices ($U^H U = I_{N_R}$ and $V^H V = I_{N_T}$) and Σ is an $N_R \times N_T$ diagonal matrix of the singular values $(\sigma_j)$ of the H matrix. If H is a full-rank matrix, there are min(N_R, N_T) nonzero singular values and hence the same number of independent channels.
The received signal ỹ is given by

$$
\tilde{y} = U^H y \qquad (10.13)
$$

The above equation can be rewritten as

$$
\tilde{y} = U^H (Hs + g) \qquad (10.14)
$$

Now, substituting Eq. (10.12) in the above equation, we obtain

$$
\tilde{y} = U^H \left( U \Sigma V^H s + g \right) \qquad (10.15)
$$

Since $s = V\tilde{s}$, Eq. (10.15) can be rewritten as

$$
\tilde{y} = U^H \left( U \Sigma V^H V \tilde{s} + g \right) = \Sigma \tilde{s} + U^H g = \Sigma \tilde{s} + \tilde{g} \qquad (10.16)
$$

From Eq. (10.16), it can be observed that the output is the product of the precoded input signal $\tilde{s}$ and the singular value matrix Σ. The distribution of the noise does not change when the noise g is multiplied by the unitary matrix $U^H$.

Fig. 10.5 Decomposition of a MIMO channel with full CSI



Example 10.1 Find a parallel channel model for a MIMO system, the H matrix of which is given by

$$
H = \begin{bmatrix}
0.4 + j0.6 & j & 2 \\
0.8 & 0.4 + j0.2 & 1.5 - j0.6 \\
j0.6 & 0.7 & 0.1 + j1.1
\end{bmatrix}
$$

Solution The SVD using MATLAB gives

$$
U = \begin{bmatrix}
0.4390 - j0.6062 & 0.2203 + j0.3263 & 0.2389 - j0.4773 \\
0.4426 - j0.2417 & 0.8023 - j0.0633 & 0.2637 + j0.1685 \\
0.3571 - j0.2408 & 0.0817 + j0.4366 & 0.7817 - j0.0777
\end{bmatrix}
$$

$$
\Sigma = \begin{bmatrix}
3.0659 & 0 & 0 \\
0 & 1.2785 & 0 \\
0 & 0 & 0.0748
\end{bmatrix}
$$

$$
V = \begin{bmatrix}
0.1075 & 0.9289 & 0.3543 \\
0.3528 + j0.1955 & 0.0505 - j0.3056 & 0.0252 - j0.8607 \\
0.5537 - j0.7206 & 0.1978 - j0.0449 & 0.3505 + j0.1011
\end{bmatrix}
$$

The center matrix Σ contains the singular values $(\sigma_j)$ of the H matrix. The rank of the matrix is equal to the number of nonzero singular values. This process decomposes the matrix channel into three independent SISO channels, with gains of 3.0659, 1.2785, and 0.0748, respectively, as shown in Fig. 10.6. The number of significant singular values specifies the maximum degree of diversity. The larger a particular singular value, the more reliable is that channel. The most important benefit of the SVD approach is that it allows for enhanced array gain: the transmitter can send more power over the better channels and less (or no) power over the worst ones. Thus, the first channel, with a gain of 3.0659, will perform better than the other two. The number of principal components is a measure of the maximum degree of diversity that can be realized in this way.
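The decomposition above can be reproduced directly with MATLAB's built-in svd function; a brief check (an illustrative snippet, not one of the book's listings) is:

% Verify the SVD of the example channel matrix
H = [0.4+0.6j, 1j, 2; 0.8, 0.4+0.2j, 1.5-0.6j; 0.6j, 0.7, 0.1+1.1j];
[U, Sigma, V] = svd(H);       % H = U*Sigma*V'
disp(diag(Sigma).');          % singular values: 3.0659  1.2785  0.0748
norm(H - U*Sigma*V')          % reconstruction error, of order 1e-15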

Fig. 10.6 SVD decomposition of a matrix channel into three independent SISO channels

10.5 MIMO Channel Capacity

Let s and y be vectors of length N_T and N_R containing the transmitted and received symbols, respectively, for a MIMO system with N_T transmit and N_R receive antennas. Then, the received signal y can be written in matrix form as follows:

$$
y = \sqrt{\frac{E_s}{N_T}}\, H s + g \qquad (10.17)
$$

where $y = [y_1\; y_2\; \ldots\; y_{N_R}]$, $s = [s_1\; s_2\; \ldots\; s_{N_T}]$, $g = [g_1\; g_2\; \ldots\; g_{N_R}]$, and E_s is the total energy of the N_T transmitted symbols.

10.5.1 Capacity of Deterministic MIMO Channel When CSI Is Known to the Transmitter

The capacity of a deterministic channel is defined by Shannon as

$$
C = \max_{f(s)} I(s; y) \quad \text{bits/channel use} \qquad (10.18)
$$

where I(s; y) is the mutual information of s and y, and f(s) is the pdf of the transmit signal s. The capacity of the channel is the maximum information that can be transmitted from s to y by varying the pdf of the transmit signal. From information theory, the mutual information between two random variables can be written as a function of their entropies:

$$
I(s; y) = H(y) - H(y|s) \qquad (10.19)
$$

$$
H(y|s) = H(g) \qquad (10.20)
$$

Using Eq. (10.20), Eq. (10.19) can be rewritten as

$$
I(s; y) = H(y) - H(g) \qquad (10.21)
$$

The second term is constant for a deterministic channel because it is a function of the noise only. Hence, the mutual information is maximum only when the term H(y) is maximum.

Using Eq. (10.17), the autocorrelation matrix of y can be written as

$$
\begin{aligned}
R_{yy} &= E\{y y^H\} = E\left\{ \left( \sqrt{\tfrac{E_s}{N_T}} Hs + g \right) \left( \sqrt{\tfrac{E_s}{N_T}} Hs + g \right)^{H} \right\} \\
&= \frac{E_s}{N_T} E\{ H s s^H H^H \} + E\{ g g^H \} \\
&= \frac{E_s}{N_T} H E\{ s s^H \} H^H + E\{ g g^H \} \\
&= \frac{E_s}{N_T} H R_{ss} H^H + N_o I_{N_R}
\end{aligned} \qquad (10.22)
$$

where R_ss is the autocorrelation of the transmitted signal vector s and N_o is the power spectral density of the additive noise $\{g_i\}_{i=1}^{N_R}$. The entropy H(y) is maximized when both s and y are zero-mean circularly symmetric complex Gaussian (ZMCSCG) random variables. Then, H(y) and H(g) are given by

$$
H(y) = \log_2 \{ \det( \pi e R_{yy} ) \} \qquad (10.23)
$$

$$
H(g) = \log_2 \{ \det( \pi e N_o I_{N_R} ) \} \qquad (10.24)
$$

Using Eqs. (10.23) and (10.24), it is shown in [9] that the mutual information given by Eq. (10.21) can be expressed as

$$
I(s; y) = \log_2 \det\left( I_{N_R} + \frac{E_s}{N_T N_o} H R_{ss} H^H \right) \ \text{bits/s/Hz} \qquad (10.25)
$$

Since SNR = E_s/N_o, Eq. (10.25) can be rewritten as

$$
I(s; y) = \log_2 \det\left( I_{N_R} + \frac{\text{SNR}}{N_T} H R_{ss} H^H \right) \ \text{bits/s/Hz} \qquad (10.26)
$$

From the above equation, we can write the expression for capacity as

$$
C = \max_{\text{Tr}(R_{ss}) = N_T} \log_2 \det\left( I_{N_R} + \frac{\text{SNR}}{N_T} H R_{ss} H^H \right) \qquad (10.27)
$$

It should be noted here that the trace of the R_ss matrix is Tr(R_ss) = N_T when the transmission power for each transmit antenna is assumed to be 1.

10.5.2 Deterministic MIMO Channel Capacity When CSI Is Unknown at the Transmitter

When H is not known at the transmitter side, we can assume equal power distribution among the transmit antennas, so that R_ss is an identity matrix, that is, R_ss = I_{N_T}, and Eq. (10.27) becomes

$$
C = \log_2 \det\left( I_{N_R} + \frac{\text{SNR}}{N_T} H H^H \right) \qquad (10.28)
$$

This is the capacity equation for MIMO channels with equal power allocation. It should be noted that for a large number of transmit antennas and a fixed number of receive antennas, the law of large numbers yields

$$
\lim_{N_T \to \infty} \frac{1}{N_T} H H^H = I_{N_R} \qquad (10.29)
$$

Thus, the MIMO channel capacity for large N_T becomes

$$
C = N_R \log_2 (1 + \text{SNR}) \qquad (10.30)
$$

Example 10.2 Given the following 3 × 3 MIMO channel, find the capacity of this channel when CSI is known at the receiver and unknown at the transmitter, with SNR = 10 dB and bandwidth equal to 1 kHz. Compare this capacity calculation to that using the SVD.

$$
H = \begin{bmatrix}
0.4 + j0.6 & j & 2 \\
0.8 & 0.4 + j0.2 & 1.5 - j0.6 \\
j0.6 & 0.7 & 0.1 + j1.1
\end{bmatrix}
$$

Solution

$$
H H^H = \begin{bmatrix}
5.52 & 2.88 + j1.12 & 0.16 - j3.14 \\
2.88 - j1.12 & 3.45 & 1.09 - j1.25 \\
0.16 + j3.14 & 1.09 + j1.25 & 2.07
\end{bmatrix}
$$

$$
C = B \log_2 \det\left( I_{N_R} + \frac{\text{SNR}}{N_T} H H^H \right) = 7.7306 \ \text{kbps}
$$

The singular values are equal to 3.0659, 1.2785, and 0.0748.


The sum of the capacities of the three independent channels is equal to the same quantity as in the above equation (with SNR/N_T = 10/3 ≈ 3.33):

$$
C = B\left[ \log_2\!\left(1 + 3.0659^2 \times 3.33\right) + \log_2\!\left(1 + 1.2785^2 \times 3.33\right) + \log_2\!\left(1 + 0.0748^2 \times 3.33\right) \right] = 7.7306 \ \text{kbps}
$$
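Both computations are easy to check numerically; the following snippet is an illustrative verification (not one of the book's listings):

% Check of Example 10.2: capacity from the determinant formula and from
% the SVD must agree.
H = [0.4+0.6j, 1j, 2; 0.8, 0.4+0.2j, 1.5-0.6j; 0.6j, 0.7, 0.1+1.1j];
B = 1e3; SNR = 10^(10/10); NT = 3;
Cdet = B*log2(real(det(eye(3) + (SNR/NT)*(H*H'))));   % Eq. (10.28) scaled by B
sv   = svd(H);
Csvd = B*sum(log2(1 + (SNR/NT)*sv.^2));               % sum over SISO channels
fprintf('Cdet = %.1f bps, Csvd = %.1f bps\n', Cdet, Csvd);  % both ~7730.6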

10.5.3 Random MIMO Channel Capacity

10.5.3.1 Random MIMO Channel Capacity with CSI Known at the Transmitter

It is assumed in Sect. 10.5.1 that the MIMO channel is deterministic. In general, the MIMO channel varies randomly. Hence, H is a random matrix, and its channel capacity is also randomly time-varying. In practice, assuming that the random channel is an ergodic process, the MIMO channel capacity can be expressed as

$$
C_{erg} = E\left\{ \max_{\text{Tr}(R_{ss}) = N_T} \log_2 \det\left( I_{N_R} + \frac{\text{SNR}}{N_T} H R_{ss} H^H \right) \right\} \qquad (10.31)
$$

where the subscript erg stands for ergodic.
If r is the rank of the matrix H and $\lambda_i\ (i = 1, 2, \ldots, r)$ are the eigenvalues (positive real numbers) obtained by the eigendecomposition of $HH^H$, and if the transmit power for the ith transmit antenna is $p_i = E\{|s_i|^2\}$, Eq. (10.31) can be rewritten [8] as

$$
C_{erg} = E\left\{ \sum_{i=1}^{r} \log_2\!\left( 1 + \frac{\text{SNR}}{N_T}\, p_i^{opt}\, \lambda_i \right) \right\} \qquad (10.32)
$$

$$
p_i^{opt} = \left( \mu - \frac{N_T}{\text{SNR}\, \lambda_i} \right)^{+} \qquad (10.33)
$$

$$
\sum_{i=1}^{r} p_i^{opt} = N_T \qquad (10.34)
$$

Fig. 10.7 Water-filling power allocation algorithm

where μ is a constant and $(x)^{+}$ is defined by

$$
(x)^{+} = \begin{cases} x, & x \ge 0 \\ 0, & x < 0 \end{cases} \qquad (10.35)
$$

Equation (10.33), subject to the constraint in Eq. (10.34), is the well-known water-filling power allocation algorithm, which is illustrated in Fig. 10.7. Figure 10.7 shows that more power must be allocated to channels with higher SNR. Furthermore, if the SNR of a channel is below the threshold determined by μ, no power is allocated to that channel. As can be seen from Fig. 10.7, no power is allocated to channels 3 and 6.

10.5.3.2 Random MIMO Channel Capacity with CSI Unknown at the Transmitter

When CSI is unknown at the transmitter, from Eq. (10.31), the ergodic capacity of the random MIMO channel is given by

$$
C_{erg} = E\left\{ \log_2 \det\left( I_{N_R} + \frac{\text{SNR}}{N_T} H H^H \right) \right\} \qquad (10.36)
$$

Equation (10.36) can be written in terms of the positive eigenvalues as [8]

$$
C_{erg} = E\left\{ \sum_{i=1}^{r} \log_2\!\left( 1 + \frac{\text{SNR}}{N_T} \lambda_i \right) \right\} \qquad (10.37)
$$

which is frequently known as the ergodic channel capacity.



The following MATLAB programs illustrate the ergodic capacity of an i.i.d. random MIMO channel with CSI unknown at the transmitter.

Program 10.2 MATLAB program for ergodic capacity

Program 10.3 MATLAB Function program for ergcap
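The listings themselves are not reproduced in this copy; the following is a minimal sketch of such a function in the spirit of Program 10.3. The function signature, parameter names, and trial count are assumptions, not the book's original code (save as ergcap.m):

% Minimal sketch of an ergodic-capacity function (illustrative assumptions)
function C = ergcap(NT, NR, SNRdB, Niter)
% ERGCAP  Monte Carlo estimate of the ergodic capacity, Eq. (10.36)
snr = 10^(SNRdB/10); C = 0;
for n = 1:Niter
    Hw = (randn(NR,NT) + 1j*randn(NR,NT))/sqrt(2);   % i.i.d. Rayleigh channel
    C  = C + log2(real(det(eye(NR) + (snr/NT)*(Hw*Hw'))));
end
C = C/Niter;                                         % average over realizations
end

Sweeping SNRdB over 0:35 for (N_T, N_R) = (1, 1), (2, 2), and (4, 4) reproduces the trends of Fig. 10.8.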

The ergodic capacity of an i.i.d. random MIMO channel with CSI unknown at the transmitter, for different numbers of transmit and receive antennas, is shown in Fig. 10.8. From Fig. 10.8, it is observed that as the number of transmit and receive antennas increases, the ergodic capacity increases.

10.5.3.3 Capacity of Correlated Random MIMO Channel

In general, the elements of H are correlated by an amount that depends on the


propagation environment as well as the polarization of the antenna elements and
spacing between them [10]. One possible model for H that takes the fading

Fig. 10.8 Ergodic capacity of i.i.d. random MIMO channel with CSI unknown at the transmitter

correlation into account splits the fading correlation into two independent com-
ponents called receive correlation and transmit correlation, respectively [11, 12].
This amounts to modeling H as follows

$$
H = R_r^{1/2} H_w R_t^{1/2} \qquad (10.38)
$$

where Rt and Rr are the transmit correlation and the receive correlation matrices,
respectively, Hw is a matrix with independent Gaussian elements with unity vari-
ance, and the superscript ½ stands for the Hermitian square root of a matrix. The
matrix Rr determines the correlation between the rows of H, and the matrix Rt
determines the correlation between the columns of H. The diagonal entries of Rt
and Rr are constrained to be unity. The correlation matrices Rt and Rr can be
measured or computed by assuming the scattering distribution around the transmit
and receive antennas. For uniform linear array at the transmitter and the receiver,
the correlation matrices Rt and Rr can be calculated according to two different
methods given in [13, 14]. From [14], we have the following Toeplitz structure
correlation matrices:

$$
R_t = \begin{bmatrix}
1 & r_t & \cdots & r_t^{(N_T-1)^2} \\
r_t & 1 & & \vdots \\
\vdots & & \ddots & \\
r_t^{(N_T-1)^2} & \cdots & & 1
\end{bmatrix}, \quad
R_r = \begin{bmatrix}
1 & r_r & \cdots & r_r^{(N_R-1)^2} \\
r_r & 1 & & \vdots \\
\vdots & & \ddots & \\
r_r^{(N_R-1)^2} & \cdots & & 1
\end{bmatrix} \qquad (10.39)
$$

where $r_t = r(d_t)$ and $r_r = r(d_r)$, and r(d) is the approximation for the fading correlation between two adjacent antenna elements averaged over all possible orientations of the two antennas in a given wave field, which can be expressed as [15]

$$
r(d) \approx \exp\!\left( -23\, \Lambda^2 d^2 \right) \qquad (10.40)
$$

where d is the distance in wavelengths between the two antennas and Λ is the angular spread.
From Eq. (10.28), the MIMO channel capacity is then given by

$$
C = \log_2 \det\left( I_{N_R} + \frac{\text{SNR}}{N_T} R_r^{1/2} H_w R_t H_w^H R_r^{H/2} \right) \qquad (10.41)
$$

The following examples demonstrate the performance of correlated random MIMO channels with and without CSI known at the transmitter using MATLAB.

Example 10.3 Compare the capacity of spatially correlated random 4 × 4 channels with unknown CSI at the transmitter for a non-uniform antenna array structure with the following correlation matrices [14]

$$
R_t = \begin{bmatrix}
1 & 0.3169 & 0.3863 & 0.0838 \\
0.3169 & 1 & 0.7128 & 0.5626 \\
0.3863 & 0.7128 & 1 & 0.5354 \\
0.0838 & 0.5626 & 0.5354 & 1
\end{bmatrix}, \quad
R_r = \begin{bmatrix}
1 & 0.1317 & 0.1992 & 0.2315 \\
0.1317 & 1 & 0.1493 & 0.1907 \\
0.1992 & 0.1493 & 1 & 0.1996 \\
0.2315 & 0.1907 & 0.1996 & 1
\end{bmatrix}
$$

and a uniform antenna array structure with $r_t = r_r = 0.2$.



Program 10.4 MATLAB program for random MIMO channel capacity
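The listing is not reproduced in this copy; a minimal sketch in the same spirit follows. The number of trials, SNR range, and variable names are illustrative assumptions:

% Sketch: capacity of a spatially correlated MIMO channel without CSI
NT = 4; NR = 4; Niter = 2000; SNRdB = 0:25;
rt = 0.2;
Rt = toeplitz(rt.^((0:NT-1).^2));                  % uniform model, Eq. (10.39)
Rr = toeplitz(rt.^((0:NR-1).^2));
C = zeros(size(SNRdB));
for k = 1:length(SNRdB)
    snr = 10^(SNRdB(k)/10);
    for n = 1:Niter
        Hw = (randn(NR,NT)+1j*randn(NR,NT))/sqrt(2);
        H  = sqrtm(Rr)*Hw*sqrtm(Rt);               % correlated channel, Eq. (10.38)
        C(k) = C(k) + log2(real(det(eye(NR)+(snr/NT)*(H*H'))));
    end
end
plot(SNRdB, C/Niter); xlabel('SNR (dB)'); ylabel('bps/Hz');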

The capacity of the random MIMO channels without CSI at the transmitter, with uniform and non-uniform correlation matrices, is shown in Fig. 10.9. From Fig. 10.9, it is observed that the random MIMO channel without CSI performs better with uniform correlation matrices than with non-uniform correlation matrices.

Fig. 10.9 Random MIMO channel capacity without CSI with uniform and non-uniform correlation matrices

Example 10.4 Compare the capacity of spatially correlated random 4 × 4 channels with known and unknown CSI at the transmitter for a uniform antenna array structure with $r_t = r_r = 0.2$.

Program 10.5 MATLAB program for spatially correlated 4 × 4 channel capacity

Program 10.6 MATLAB function program for water filling
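The water-filling listing is not reproduced in this copy; the following is a minimal sketch of such a routine. The bisection approach, function name, and signature are assumptions, not the book's original code:

% Sketch of a water-filling routine in the spirit of Program 10.6
function p = waterfill(lambda, NT, snr)
% WATERFILL  Solve Eqs. (10.33)-(10.34) for the optimal power allocation;
% lambda is the vector of positive eigenvalues of H*H'.
lo = 0; hi = NT + NT/snr*max(1./lambda);       % bracket the water level mu
for it = 1:50                                  % bisection on mu
    mu = (lo + hi)/2;
    p  = max(mu - NT./(snr*lambda), 0);        % Eq. (10.33)
    if sum(p) > NT, hi = mu; else lo = mu; end % enforce Eq. (10.34)
end
end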

The comparison of the capacity of spatially correlated 4 × 4 channels with known and unknown CSI at the transmitter is shown in Fig. 10.10. From Fig. 10.10, it is observed that spatially correlated 4 × 4 channels with known CSI at the transmitter perform better than those with unknown CSI at the transmitter.
When N_T = N_R and the SNR is high, Eq. (10.41) can be approximated as
 
$$
C \approx \log_2 \det\left( I_{N_R} + \frac{\text{SNR}}{N_T} H_w H_w^H \right) + \log_2 \det(R_t) + \log_2 \det(R_r) \qquad (10.42)
$$

From Eq. (10.42), it can be observed that the MIMO channel capacity is reduced by the correlation between the transmit and receive antennas, and the reduction is

$$
\log_2 \det(R_t) + \log_2 \det(R_r) \qquad (10.43)
$$

It is shown in [8] that the value in Eq. (10.43) is always negative, owing to the fact that $\log_2 \det(R) \le 0$ for any correlation matrix R.

Fig. 10.10 Comparison of the capacity of spatially correlated 4 × 4 channels with known and unknown CSI at the transmitter

The following example illustrates, using MATLAB, how correlation reduces the channel capacity.

Example 10.5 Compare the capacity of i.i.d. and correlated random 2 × 2 channels with CSI unknown at the transmitter, assuming

$$
R_t = \begin{bmatrix} 1 & 0.76\, e^{\,0.17 j \pi} \\ 0.76\, e^{\,-0.17 j \pi} & 1 \end{bmatrix}
$$

and R_r a 2 × 2 identity matrix, i.e., no correlation exists between the receive antennas.

Program 10.7 MATLAB program for i.i.d and correlated MIMO channels capacity

A comparison of the capacity of i.i.d. and correlated 2 × 2 channels is shown in Fig. 10.11. From Fig. 10.11, it is observed that the i.i.d. channel with unknown CSI at the transmitter performs better than the correlated random 2 × 2 channel with unknown CSI at the transmitter.

10.6 MIMO Channel Equalization

Consider a wireless communication system in which the transmitter contains N_T antennas and the receiver possesses N_R antennas. Let us consider a 2 × 2 MIMO channel (N_T = 2, N_R = 2) as shown in Fig. 10.12. The received signal for the 2 × 2 MIMO channel can be expressed as

$$
y_j = \sum_{i=1}^{2} h_{j,i}\, s_i + g_j \qquad (10.44)
$$

where $h_{j,i}\ (i = 1, 2;\ j = 1, 2)$ are independent and identically distributed (i.i.d.) complex random variables representing the channel coefficients from the ith transmit antenna to the jth receive antenna, $s_i$ are the transmitted symbols, and $g_j$ are i.i.d. complex AWGN noise samples.

Fig. 10.11 Comparison of the capacity of i.i.d. and correlated 2 × 2 channels

Fig. 10.12 A 2 × 2 MIMO channel

Equation (10.44) can be represented in matrix form as

$$
\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} =
\begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \end{bmatrix}
\begin{bmatrix} s_1 \\ s_2 \end{bmatrix} +
\begin{bmatrix} g_1 \\ g_2 \end{bmatrix} \qquad (10.45)
$$

or, equivalently, y = Hs + g.

10.6.1 Zero Forcing (ZF) Equalization

To solve for s, we need to find a matrix W that satisfies WH = I. The zero forcing (ZF) linear detector meeting this constraint is given by

$$
W = \left( H^H H \right)^{-1} H^H \qquad (10.46)
$$

This matrix is also known as the pseudo-inverse of a general m × n matrix, where

$$
H^H H = \begin{bmatrix} h_{1,1}^* & h_{2,1}^* \\ h_{1,2}^* & h_{2,2}^* \end{bmatrix}
\begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \end{bmatrix}
= \begin{bmatrix}
|h_{1,1}|^2 + |h_{2,1}|^2 & h_{1,1}^* h_{1,2} + h_{2,1}^* h_{2,2} \\
h_{1,2}^* h_{1,1} + h_{2,2}^* h_{2,1} & |h_{1,2}|^2 + |h_{2,2}|^2
\end{bmatrix}
$$

10.6.2 Minimum Mean Square Error (MMSE) Equalization

The minimum mean square error (MMSE) detector finds the matrix W that minimizes the criterion $E\{ (Wy - s)(Wy - s)^H \}$, which gives $W = \left( H^H H + N_0 I \right)^{-1} H^H$. When the noise term is zero, the MMSE equalizer reduces to the zero forcing equalizer.

10.6.3 Maximum Likelihood Equalization

ML detection shows the best performance among all the MIMO detection algorithms. It finds the estimate $\hat{s}$ that minimizes

$$
J = | y - H\hat{s} |^2 \qquad (10.47)
$$

$$
J = \left\| \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} -
\begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \end{bmatrix}
\begin{bmatrix} \hat{s}_1 \\ \hat{s}_2 \end{bmatrix} \right\|^2 \qquad (10.48)
$$

If the modulation is BPSK, the possible values of s_1 are +1 and −1, and similarly for s_2. So, to find the maximum likelihood estimate, we need to find the minimum of J over all four combinations of s_1 and s_2:

$$
J_{+1,+1} = \left\| \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} - \begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \end{bmatrix} \begin{bmatrix} +1 \\ +1 \end{bmatrix} \right\|^2 \qquad (10.49)
$$

$$
J_{+1,-1} = \left\| \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} - \begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \end{bmatrix} \begin{bmatrix} +1 \\ -1 \end{bmatrix} \right\|^2 \qquad (10.50)
$$

$$
J_{-1,+1} = \left\| \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} - \begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \end{bmatrix} \begin{bmatrix} -1 \\ +1 \end{bmatrix} \right\|^2 \qquad (10.51)
$$

$$
J_{-1,-1} = \left\| \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} - \begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \end{bmatrix} \begin{bmatrix} -1 \\ -1 \end{bmatrix} \right\|^2 \qquad (10.52)
$$

In the case of 4 × 4 MIMO, Eq. (10.44) can be written as

$$
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} =
\begin{bmatrix}
h_{1,1} & h_{1,2} & h_{1,3} & h_{1,4} \\
h_{2,1} & h_{2,2} & h_{2,3} & h_{2,4} \\
h_{3,1} & h_{3,2} & h_{3,3} & h_{3,4} \\
h_{4,1} & h_{4,2} & h_{4,3} & h_{4,4}
\end{bmatrix}
\begin{bmatrix} s_1 \\ s_2 \\ s_3 \\ s_4 \end{bmatrix} +
\begin{bmatrix} g_1 \\ g_2 \\ g_3 \\ g_4 \end{bmatrix} \qquad (10.53)
$$

ML detection minimizes

$$
J = \left\| \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} -
\begin{bmatrix}
h_{1,1} & h_{1,2} & h_{1,3} & h_{1,4} \\
h_{2,1} & h_{2,2} & h_{2,3} & h_{2,4} \\
h_{3,1} & h_{3,2} & h_{3,3} & h_{3,4} \\
h_{4,1} & h_{4,2} & h_{4,3} & h_{4,4}
\end{bmatrix}
\begin{bmatrix} \hat{s}_1 \\ \hat{s}_2 \\ \hat{s}_3 \\ \hat{s}_4 \end{bmatrix} \right\|^2 \qquad (10.54)
$$

With BPSK modulation, the maximum likelihood estimate requires computing the minimum of J over all sixteen combinations of s_1, s_2, s_3, and s_4.
The performance comparison of MIMO channel equalization using ZF, MMSE, and ML is shown in Fig. 10.13. From Fig. 10.13, it is observed that the performance of MMSE is better than that of ZF, and the performance of ML is better than both MMSE and ZF.
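A compact simulation of the three detectors for a 2 × 2 BPSK system is sketched below. This is an illustrative reconstruction (the frame counts, SNR range, and variable names are assumptions, and implicit expansion requires MATLAB R2016b or later), not the book's listing:

% Sketch: BER of ZF, MMSE, and ML detection for 2x2 BPSK over Rayleigh fading
Nsym = 1e5; EbN0dB = 0:2:15; ber = zeros(3, numel(EbN0dB));
for k = 1:numel(EbN0dB)
    N0 = 10^(-EbN0dB(k)/10); err = zeros(3,1);
    for n = 1:Nsym
        s = 2*randi([0 1],2,1) - 1;                        % BPSK symbols
        H = (randn(2)+1j*randn(2))/sqrt(2);
        y = H*s + sqrt(N0/2)*(randn(2,1)+1j*randn(2,1));
        szf   = sign(real((H'*H)\(H'*y)));                 % ZF,   Eq. (10.46)
        smmse = sign(real((H'*H + N0*eye(2))\(H'*y)));     % MMSE
        cand  = [1 1; 1 -1; -1 1; -1 -1].';                % ML, Eqs. (10.49)-(10.52)
        [~,i] = min(sum(abs(y - H*cand).^2, 1));
        sml   = cand(:,i);
        err = err + [sum(szf~=s); sum(smmse~=s); sum(sml~=s)];
    end
    ber(:,k) = err/(2*Nsym);
end
semilogy(EbN0dB, ber); legend('ZF','MMSE','ML'); xlabel('Eb/No (dB)'); ylabel('BER');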

10.7 Problems

1. Find a parallel channel model for a MIMO system, the H matrix of which is given by

$$
H = \begin{bmatrix}
0.8 & 0.5 - j0.2 & 0.3 + j0.6 \\
0.4 - j0.6 & 1.0 - j0.1 & 0.2 - j0.9 \\
0.5 + j0.3 & 0.5 + j1.5 & 0.6 + j1.2
\end{bmatrix}
$$

Fig. 10.13 Performance comparison of MIMO channel equalization using ZF, MMSE, and ML

2. Given the following 3 × 3 MIMO channel, find the capacity of this channel with known CSI at the receiver, unknown CSI at the transmitter, SNR = 20 dB, and bandwidth equal to 2 kHz. Compare this capacity calculation to that using the SVD.

$$
H = \begin{bmatrix}
0.8 & 0.5 & 0.3 \\
0.4 & 1.0 & 0.2 \\
0.5 & 0.5 & 0.6
\end{bmatrix}
$$

3. Consider a MIMO channel with two transmit antennas and one receive antenna. Assume zero-mean unit-variance AWGN and an average power constraint of one per antenna. The path gains from the first and second transmit antennas to the receive antenna are h_1 = 0.5 and h_2 = 0.5 + j1.5, respectively.
(a) What is the channel capacity?
(b) What is the channel capacity if CSI is known at the transmitter and the average power constraint is 2 over the sum of the transmission powers from both antennas?
4. Assuming the total power is 1 W, the noise power is 0.1 W, and the signal bandwidth is 50 kHz, find the channel capacity and optimal power allocation for the MIMO channel, the H matrix of which is given by

$$
H = \begin{bmatrix}
0.8 & 0.5 - j0.2 & 0.3 + j0.6 \\
0.4 - j0.6 & 1.0 - j0.1 & 0.2 - j0.9 \\
0.5 + j0.3 & 0.5 + j1.5 & 0.6 + j1.2
\end{bmatrix}
$$

10.8 MATLAB Exercises

1. Write a MATLAB program to estimate, by simulation, the achievable information capacity for BPSK input and QPSK input over a MIMO system, the H matrix of which is given by

$$
H = \begin{bmatrix}
0.8 & 0.5 - j0.2 & 0.3 + j0.6 \\
0.4 - j0.6 & 1.0 - j0.1 & 0.2 - j0.9 \\
0.5 + j0.3 & 0.5 + j1.5 & 0.6 + j1.2
\end{bmatrix}
$$

2. Write a MATLAB program to plot the ergodic channel capacity of a 2 × 2 MIMO system over an ergodic Rayleigh fading channel with transmit correlation matrix

$$
R_t = \begin{bmatrix} 1 & r \\ r & 1 \end{bmatrix}
$$

and receive correlation matrix R_r = R_t, for r = 0, 0.5, 0.6, 0.8.

References

1. Murch, R.D., Letaief, K.B.: Antenna systems for broadband wireless access. IEEE Commun.
Mag. 40, 76–83 (2002)
2. Bölcskei, H., Paulraj, A.: Space-frequency coded broadband OFDM systems. In: Proceedings
of IEEE WCNC, Chicago, pp. 1–6, 23–28 Sept 2000
3. Lu, B., Wang, X.: Space-time code design in OFDM systems. In: Proceedings of IEEE Global
Communications Conference, pp. 1000–1004 (2000)
4. Tufvesson, F., Maseng, T.: Pilot assisted channel estimation for OFDM in mobile cellular
systems. In: IEEE VTC‘97, vol. 3, pp. 1639–1643 (1997)
5. Heiskala, J., Terry, J.: OFDM Wireless LANs: A Theoretical and Practical Guide. SAMS,
Carmel (2002)
6. van Nee, R., Prasad, R.: OFDM for Wireless Multimedia Communications. Artech House
Publishers, London (2000)
7. Lau, H.K., Cheung, S.W.: A pilot symbol aided technique used for digital signals in multipath
environments. In: IEEE ICC‘94, vol. 2, pp. 1126–1130 (1994)
8. Cho, Y.S., Kim, J., Yang, W.Y., Kang, C.G.: MIMO–OFDM Wireless Communications with
MATLAB. Wiley, Hoboken (2010)
9. Telatar, I.: Capacity of multi antenna Gaussian channels. Eur. Trans. Tel. 10(6), 585–595
(1999)

10. Ertel, R.B., Cardieli, P., Sowerby, K.W., Rapport, T.S., Reed, J.H.: Overview of spatial
channel models for antenna array communication systems. IEEE Pers. Commun. 5(1), 10–22
(1998)
11. Kermoal, J.P., Schumacher, L., Pederson, K.I., Modensen, P.E., Fredriksen, F.: A stochastic
MIMO radio channel model with experimental validation. IEEE J. Sel. Areas Commun. 20(1),
1211–1226 (2002)
12. Gesbert, D., Bolcskei, H., Gore, D., Paulraj, A.: Outdoor MIMO channels: models and
performance prediction. IEEE Trans. Commun. 50(12), 1926–1934 (2002)
13. Loyka, S., Tsoulos, G.: Estimating MIMO system performance using the correlation matrix
approach. IEEE Commun. Lett. 6(1), 19–21 (2002)
14. van Zelst, A., Hammerschmidt, J.S.: A single coefficient spatial correlation model for multiple-
input multiple-output (MIMO) radio channels. In: Proceedings of URSI 27th General Assembly
Maastricht, Netherlands, pp. 657–660 (2002)
15. Durgin, G.D., Rappaport, T.S.: Effects of multipath angular spread on the spatial cross-
correlation of received voltage envelopes. In: 49th IEEE Vehicular Technology Conference
(VTC), vol. 2, pp. 996–1000 (1999)
Chapter 11
Space–Time Coding

In MIMO systems, diversity can be achieved by repetition coding, in which different antennas at the transmitter transmit the same information in different time slots. Space–time (ST) coding is a more bandwidth-efficient coding scheme, which transmits an information symbol block in a different order from each antenna. The diverse copies of the transmitted data are received with multiple receive antennas. All the copies of the received signal are combined in an optimal way to extract information from each of them. This chapter describes different space–time coding schemes and analyzes their performance in Rayleigh fading.

11.1 Space–Time-Coded MIMO System

A ST-coded MIMO system with N_T transmit antennas and N_R receive antennas is shown in Fig. 11.1. In this MIMO system, the bit stream is mapped into a symbol stream $S_i,\ i = 1, \ldots, N$. The N symbols are ST encoded into $s_{ij},\ i = 1, 2, \ldots, N_T;\ j = 1, 2, \ldots, T$, where i represents the antenna index and j the symbol time index. Thus, $s_{ij}$ forms a ST code word with the number of symbols N = N_T · T.
Space–time codes are categorized as space–time block codes (STBC) and space–time trellis codes (STTC). Sections 11.2 through 11.5 discuss these codes. The performance of STTCs is better than that of STBCs. However, STTCs are more complex owing to the maximum likelihood (ML) decoder in the receiver.

Electronic supplementary material The online version of this chapter (doi:10.1007/978-81-322-2292-7_11) contains supplementary material, which is available to authorized users.

Fig. 11.1 Space–time-coded MIMO system

11.2 Space–Time Block Code (STBC)

An STBC is represented by the following code matrix S, in which each row represents one antenna's transmissions over time and each column represents a time slot:

$$
S = \begin{bmatrix}
s_{11} & s_{12} & \cdots & s_{1T} \\
s_{21} & s_{22} & \cdots & s_{2T} \\
\vdots & \vdots & & \vdots \\
s_{N_T 1} & s_{N_T 2} & \cdots & s_{N_T T}
\end{bmatrix} \qquad (11.1)
$$

where $s_{ij}$ is the modulated symbol to be transmitted from antenna i in time slot j, and T and N_T represent the numbers of time slots and transmit antennas, respectively. The code rate of an STBC is defined as how many symbols per time slot it transmits on average. If k symbols are transmitted over T time slots, the code rate of the STBC is

$$
r = \frac{k}{T} \qquad (11.2)
$$

The matrix of an STBC is to be designed so that it achieves the highest possible diversity of N_T N_R and the highest possible code rate with minimum decoder complexity.

11.2.1 Rate Limit

It is proved in [1] that a code with N_T transmit antennas yields the highest rate given by

$$
r_{max} = \frac{a_0 + 1}{2 a_0} \qquad (11.3)
$$

where $N_T = 2a_0$ or $N_T = 2a_0 - 1$.

11.2.2 Orthogonality

STBC is to be designed such that any pair of columns taken from the code matrix is
orthogonal in order to make the decoding process at the receiver to be simple,
linear, and optimal. However, a code that satisfies this criterion must sacrifice a part
of its rate.

11.2.3 Diversity Criterion

Orthogonal STBCs can be shown to achieve the maximum diversity allowed by the diversity criterion derived in [2]. Consider a code word

$$
c = c_{11} c_{21} \cdots c_{N_T 1}\;\; c_{12} c_{22} \cdots c_{N_T 2}\;\; \cdots\;\; c_{1T} c_{2T} \cdots c_{N_T T} \qquad (11.4)
$$

and let the corresponding erroneously decoded code word be

$$
\tilde{c} = \tilde{c}_{11} \tilde{c}_{21} \cdots \tilde{c}_{N_T 1}\;\; \tilde{c}_{12} \tilde{c}_{22} \cdots \tilde{c}_{N_T 2}\;\; \cdots\;\; \tilde{c}_{1T} \tilde{c}_{2T} \cdots \tilde{c}_{N_T T} \qquad (11.5)
$$

Then, the $N_T \times T$ difference matrix $E(c, \tilde{c})$ can be defined as

$$
E(c, \tilde{c}) = \begin{bmatrix}
\tilde{c}_{11} - c_{11} & \tilde{c}_{12} - c_{12} & \cdots & \tilde{c}_{1T} - c_{1T} \\
\tilde{c}_{21} - c_{21} & \tilde{c}_{22} - c_{22} & \cdots & \tilde{c}_{2T} - c_{2T} \\
\vdots & \vdots & & \vdots \\
\tilde{c}_{N_T 1} - c_{N_T 1} & \tilde{c}_{N_T 2} - c_{N_T 2} & \cdots & \tilde{c}_{N_T T} - c_{N_T T}
\end{bmatrix} \qquad (11.6)
$$

Rank and determinant criteria
Let $q\ (q \le N_T)$ be the rank of the difference matrix $E(c, \tilde{c})$. For any pair of distinct code words c and $\tilde{c}$, $E(c, \tilde{c})$ should be a full-rank matrix to yield the maximum possible diversity order of $N_T N_R$. Instead, if $E(c, \tilde{c})$ has minimum rank q over the set of distinct

code word pairs, then the diversity order is $qN_R$ [2]. Consider the following distance matrix:

$$
A(c, \tilde{c}) = E(c, \tilde{c})\, E^{*}(c, \tilde{c}) \qquad (11.7)
$$

where $E^{*}(c, \tilde{c})$ denotes the transpose conjugate of $E(c, \tilde{c})$.
The determinant criterion states that the minimum determinant of $A(c^i, c^j) = E(c^i, c^j)^H E(c^i, c^j)$ among all $i \ne j$ should be large to achieve high coding gain.
Trace criterion
A good design criterion is to maximize the minimum distance $\| E(c, \tilde{c}) \|_F$ for all $i \ne j$. This is called the trace criterion because $\| E(c, \tilde{c}) \|_F^2 = \text{Tr}[A(c, \tilde{c})]$. The metric $\| E(c, \tilde{c}) \|_F$ provides all the good properties of a distance measure.

11.2.4 Performance Criteria

The rank of A is q, the kernel of A has minimum dimension $N_T - q$, and exactly $N_T - q$ eigenvalues of A are zero. The nonzero eigenvalues of A can be denoted by $\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_q$. Assuming perfect channel state information (CSI), the probability of transmitting c and deciding in favor of $\tilde{c}$ at the decoder is bounded by [3, 4]

$$
P\left( c \to \tilde{c} \mid h_{ij},\ i = 1, 2, \ldots, N_T,\ j = 1, 2, \ldots, N_R \right) \le \exp\left( -d^2(c, \tilde{c}) \frac{E_s}{4 N_0} \right) \qquad (11.8)
$$

where $N_0/2$ is the noise variance per dimension and

$$
d^2(c, \tilde{c}) = \sum_{j=1}^{N_R} \sum_{t=1}^{T} \left| \sum_{i=1}^{N_T} h_{ij} \left( c_{it} - \tilde{c}_{it} \right) \right|^2 \qquad (11.9)
$$

is the Euclidean distance.
It follows from [3] that the pairwise error bound is given by

$$
P(c \to \tilde{c}) \le \left( \prod_{i=1}^{q} \lambda_i \right)^{-N_R} \left( \frac{E_s}{4 N_0} \right)^{-q N_R} \qquad (11.10)
$$

To achieve the best performance for a given system, the rank and determinant criteria should be satisfied [4].

11.2.5 Decoding STBCs

One particularly attractive feature of orthogonal STBCs is that ML decoding can be achieved at the receiver with only linear processing. In order to describe a decoding method, a model of the wireless communication system is needed.
At time t, the signal $y_t^j$ received at antenna j is

$$
y_t^j = \sum_{i=1}^{N_T} h_{ij}\, s_t^i + g_t^j \qquad (11.11)
$$

where $h_{ij}$ is the path gain from transmit antenna i to receive antenna j, $s_t^i$ is the signal transmitted by transmit antenna i, and $g_t^j$ is the additive white Gaussian noise (AWGN).
The decision variables are formed by the maximum likelihood detection rule [7]

$$
Y_i = \sum_{t=1}^{T} \sum_{j=1}^{N_R} y_t^j\, h_{\kappa(i) j}^{*}\, \delta_t(i) \qquad (11.12)
$$

where $\delta_k(i)$ is the sign of $s_i$ in the kth row of the coding matrix, $\kappa(p) = q$ denotes that $s_p$ is (up to a sign difference) the (k, q) element of the coding matrix, for $i = 1, 2, \ldots, N_T$. The decoder then decides on the constellation symbol $s_i$ that satisfies

$$
s_i = \arg\min_{s \in A} \left( |Y_i - s|^2 + \left( -1 + \sum_{k,l} |h_{kl}|^2 \right) |s|^2 \right) \qquad (11.13)
$$

with A the constellation alphabet. Despite its appearance, this is a simple, linear decoding scheme that provides maximal diversity.

11.3 Alamouti Code

The first and best-known STBC is the Alamouti code [6]. In the Alamouti encoder, two consecutive symbols s_1 and s_2 are encoded with the following ST code word matrix:

$$
S = \begin{bmatrix} s_1 & -s_2^* \\ s_2 & s_1^* \end{bmatrix} \qquad (11.14)
$$

This indicates that during the first time slot, signals s_1 and s_2 are transmitted from antenna 1 and antenna 2, respectively. During the next time slot, antenna 1 and antenna 2 transmit $-s_2^*$ and $s_1^*$, respectively. The Alamouti code is the only STBC in which maximum diversity can be achieved without sacrificing the data rate; it has rate 1, since it takes two time slots to transmit two symbols.

11.3.1 2-Transmit, 1-Receive Alamouti STBC Coding

For the Alamouti scheme with two transmit antennas and one receive antenna shown in Fig. 11.2, if y_1 and y_2 denote the signals received in the first and second time slots, respectively, we have

$$
[y_1 \;\; y_2] = [h_1 \;\; h_2] \begin{bmatrix} s_1 & -s_2^* \\ s_2 & s_1^* \end{bmatrix} + [g_1 \;\; g_2]
= [\,h_1 s_1 + h_2 s_2 + g_1 \quad -h_1 s_2^* + h_2 s_1^* + g_2\,] \qquad (11.15)
$$

where s_1, s_2 are the transmitted symbols, h_1 is the channel from the first transmit antenna to the receive antenna, h_2 is the channel from the second transmit antenna to the receive antenna, and g_1, g_2 are the noise samples in time slots 1 and 2.
The combiner generates [2]

$$
\tilde{s}_1 = h_1^* y_1 + h_2 y_2^* \qquad (11.16)
$$

and

$$
\tilde{s}_2 = h_2^* y_1 - h_1 y_2^* \qquad (11.17)
$$

To decode, the ML decoder minimizes the following decision metrics for s_1 and s_2, respectively [5]:

$$
|\tilde{s}_1 - s_1|^2 + \eta |s_1|^2 \qquad (11.18)
$$

$$
|\tilde{s}_2 - s_2|^2 + \eta |s_2|^2 \qquad (11.19)
$$

Fig. 11.2 Alamouti scheme with two transmit antennas and one receive antenna

where

$$
\eta = -1 + \sum_{i=1}^{N_T} |h_i|^2 \qquad (11.20)
$$

BER with Alamouti (2 × 1) STBC
From Eq. (2.44), the BER for 2-branch MRC (i.e., with one transmit and two receive antennas) with BPSK modulation can be expressed as

$$
\text{BER}_{MRC(1\times2)} = p_{MRC}^2 \left[ 1 + 2(1 - p_{MRC}) \right] \qquad (11.21)
$$

where

$$
p_{MRC} = \frac{1}{2} - \frac{1}{2} \left( 1 + \frac{1}{E_b/N_0} \right)^{-1/2} \qquad (11.22)
$$

Then, the BER for the Alamouti 2-transmit, 1-receive antenna STBC with BPSK modulation can be written as

$$
\text{BER}_{Alamouti(2\times1)} = p_{Alamouti}^2 \left[ 1 + 2(1 - p_{Alamouti}) \right] \qquad (11.23)
$$

where

$$
p_{Alamouti} = \frac{1}{2} - \frac{1}{2} \left( 1 + \frac{2}{E_b/N_0} \right)^{-1/2} \qquad (11.24)
$$

It can easily be shown [6] that the performance of the Alamouti scheme with two transmit antennas and a single receive antenna is identical to that of two-branch MRC, provided that each transmit antenna in the Alamouti scheme radiates the same energy as the single transmit antenna for MRC.

11.3.2 2-Transmit, 2-Receive Alamouti STBC Coding

For the Alamouti scheme with two transmit and two receive antennas shown in Fig. 11.3, let y_11, y_12, y_21, and y_22 denote the signals received by antenna 1 in the first time slot, by antenna 1 in the second time slot, by antenna 2 in the first time slot, and by antenna 2 in the second time slot, respectively. Then

$$
\begin{bmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{bmatrix} =
\begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}
\begin{bmatrix} s_1 & -s_2^* \\ s_2 & s_1^* \end{bmatrix} +
\begin{bmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{bmatrix} \qquad (11.25)
$$

Fig. 11.3 Alamouti scheme with two transmit and two receive antennas

 
$$
= \begin{bmatrix}
h_{11} s_1 + h_{12} s_2 + g_{11} & -h_{11} s_2^* + h_{12} s_1^* + g_{12} \\
h_{21} s_1 + h_{22} s_2 + g_{21} & -h_{21} s_2^* + h_{22} s_1^* + g_{22}
\end{bmatrix} \qquad (11.26)
$$

where $h_{ij}$ is the channel from the ith transmit antenna to the jth receive antenna, s_1, s_2 are the transmitted symbols, $g_{11}, g_{21}$ are the noise samples in time slot 1 at receive antennas 1 and 2, respectively, and $g_{12}, g_{22}$ are the noise samples in time slot 2 at receive antennas 1 and 2, respectively.
The combiner generates [2]

$$
\tilde{s}_1 = h_{11}^* y_{11} + h_{12}\, y_{12}^* + h_{21}^* y_{21} + h_{22}\, y_{22}^* \qquad (11.27)
$$

and

$$
\tilde{s}_2 = h_{12}^* y_{11} - h_{11}\, y_{12}^* + h_{22}^* y_{21} - h_{21}\, y_{22}^* \qquad (11.28)
$$

To decode, the ML decoder minimizes the following decision metrics for s_1 and s_2, respectively [5]:

$$
|\tilde{s}_1 - s_1|^2 + \eta |s_1|^2 \qquad (11.29)
$$

$$
|\tilde{s}_2 - s_2|^2 + \eta |s_2|^2 \qquad (11.30)
$$

where

$$
\eta = -1 + \sum_{i=1}^{N_R} \sum_{j=1}^{N_T} |h_{i,j}|^2 \qquad (11.31)
$$

BER with Alamouti (2 × 2) STBC
From Eq. (2.44), the BER for 4-branch MRC (i.e., with one transmit and four receive antennas) with BPSK modulation can be expressed as

$$
\text{BER}_{MRC(1\times4)} = p_{MRC}^4 \left[ 1 + 4(1 - p_{MRC}) + 10(1 - p_{MRC})^2 + 20(1 - p_{MRC})^3 \right] \qquad (11.32)
$$

The BER for the Alamouti (2 × 2) STBC with BPSK modulation can be written as

$$
\text{BER}_{Alamouti(2\times2)} = p_{Alamouti}^4 \left[ 1 + 4(1 - p_{Alamouti}) + 10(1 - p_{Alamouti})^2 + 20(1 - p_{Alamouti})^3 \right] \qquad (11.33)
$$

11.3.3 Theoretical BER Performance of BPSK Alamouti Codes Using MATLAB

The following MATLAB program illustrates the BER performance of uncoded coherent BPSK with MRC and Alamouti STBC in a Rayleigh fading channel.

Program 11.1 BER performance of MRC and Alamouti coding
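The listing itself is not reproduced in this copy; the theoretical curves of Fig. 11.4 can be generated with a sketch such as the following (an illustrative reconstruction, not the original code):

% Sketch: theoretical BER of MRC and Alamouti STBC for coherent BPSK
EbN0dB = 0:2:20; g = 10.^(EbN0dB/10);
pM = 0.5 - 0.5*(1 + 1./g).^(-1/2);           % per-branch prob., Eq. (11.22)
pA = 0.5 - 0.5*(1 + 2./g).^(-1/2);           % Alamouti, Eq. (11.24)
berSISO = pM;                                 % single antenna (no diversity)
berMRC2 = pM.^2.*(1 + 2*(1-pM));              % Eq. (11.21)
berAla21= pA.^2.*(1 + 2*(1-pA));              % Eq. (11.23)
berMRC4 = pM.^4.*(1 + 4*(1-pM) + 10*(1-pM).^2 + 20*(1-pM).^3);  % Eq. (11.32)
berAla22= pA.^4.*(1 + 4*(1-pA) + 10*(1-pA).^2 + 20*(1-pA).^3);  % Eq. (11.33)
semilogy(EbN0dB, berSISO, EbN0dB, berAla21, EbN0dB, berMRC2, ...
         EbN0dB, berAla22, EbN0dB, berMRC4);
legend('SISO (no diversity)','Alamouti(2x1)','MRC(1x2)','Alamouti(2x2)','MRC(1x4)');
xlabel('Eb/No (dB)'); ylabel('BER');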

BER performance obtained by using Program 11.1 is shown in Fig. 11.4.



Fig. 11.4 BER performance comparison of coherent BPSK with MRC and Alamouti STBC

From Fig. 11.4, the performance of the Alamouti 2 × 1 STBC and the Alamouti 2 × 2 STBC is 3 dB worse than that of 1 × 2 MRC and 1 × 4 MRC, respectively. The 3-dB penalty is due to the assumption that each transmit antenna in the Alamouti STBC scheme radiates half the energy in order to ensure the same total radiated power as with the one transmit antenna of MRC. If each transmit antenna in the Alamouti scheme radiates the same energy as the single transmit antenna for MRC, the performance of the Alamouti scheme and MRC is identical.

11.4 Higher-Order STBCs

The Alamouti scheme discussed in Sect. 11.3 is part of a general class of STBCs known as orthogonal space–time block codes (OSTBCs) [2]. It is proved in [5, 7] that no code for more than 2 transmit antennas can achieve full rate. This section briefly discusses the full-diversity complex orthogonal codes for N_T > 2.
3 transmit antennas
The full-diversity, rate-1/2 code for N_T = 3 is given by [5, 7]. This code transmits 4 symbols every 8 time intervals and therefore has rate 1/2:

$$
G_3 = \begin{bmatrix}
s_1 & -s_2 & -s_3 & -s_4 & s_1^* & -s_2^* & -s_3^* & -s_4^* \\
s_2 & s_1 & s_4 & -s_3 & s_2^* & s_1^* & s_4^* & -s_3^* \\
s_3 & -s_4 & s_1 & s_2 & s_3^* & -s_4^* & s_1^* & s_2^*
\end{bmatrix} \qquad (11.34)
$$

4 transmit antennas
In the case of 4 transmit antennas, the rate-1/2 code block is given by [5, 7]; similar to Eq. (11.34), it has rate 1/2, as 4 symbols are transmitted in 8 time intervals:

$$
G_4 = \begin{bmatrix}
s_1 & -s_2 & -s_3 & -s_4 & s_1^* & -s_2^* & -s_3^* & -s_4^* \\
s_2 & s_1 & s_4 & -s_3 & s_2^* & s_1^* & s_4^* & -s_3^* \\
s_3 & -s_4 & s_1 & s_2 & s_3^* & -s_4^* & s_1^* & s_2^* \\
s_4 & s_3 & -s_2 & s_1 & s_4^* & s_3^* & -s_2^* & s_1^*
\end{bmatrix} \qquad (11.35)
$$

11.4.1 3-Transmit, 4-Receive STBC Coding

A STBC scheme with three transmit and four receive antennas is shown in Fig. 11.5. Let $y_{11}, y_{12}, \ldots, y_{18}$; $y_{21}, y_{22}, \ldots, y_{28}$; $y_{31}, y_{32}, \ldots, y_{38}$; and $y_{41}, y_{42}, \ldots, y_{48}$ denote the signals received by antenna 1, antenna 2, antenna 3, and antenna 4 in

Fig. 11.5 STBC scheme with three transmit and four receive antennas

time slots 1, 2, …, 8, respectively, and let $h_{ij}\ (i = 1, 2, 3;\ j = 1, 2, 3, 4)$ be the path gains from transmit antenna i to receive antenna j. Then

$$
\begin{bmatrix}
y_{11} & y_{12} & \cdots & y_{18} \\
y_{21} & y_{22} & \cdots & y_{28} \\
y_{31} & y_{32} & \cdots & y_{38} \\
y_{41} & y_{42} & \cdots & y_{48}
\end{bmatrix}
=
\begin{bmatrix}
h_{11} & h_{21} & h_{31} \\
h_{12} & h_{22} & h_{32} \\
h_{13} & h_{23} & h_{33} \\
h_{14} & h_{24} & h_{34}
\end{bmatrix}
G_3
+
\begin{bmatrix}
g_{11} & g_{12} & \cdots & g_{18} \\
g_{21} & g_{22} & \cdots & g_{28} \\
g_{31} & g_{32} & \cdots & g_{38} \\
g_{41} & g_{42} & \cdots & g_{48}
\end{bmatrix} \qquad (11.36)
$$

where $G_3$ is the 3 × 8 code matrix of Eq. (11.34).

The Decoding Algorithm
ML decoding of any space–time block code can be achieved using only linear processing at the receiver, and we illustrate this by example. The space–time block code $G_3$ involves $s_1, s_2, s_3, s_4$ and their conjugates, transmitted simultaneously from antennas one, two, and three. ML detection amounts to minimizing the decision metric

$$
\begin{aligned}
\sum_{j=1}^{m} \Big[ & \left| y_{1j} - h_{1j} s_1 - h_{2j} s_2 - h_{3j} s_3 \right|^2 + \left| y_{2j} + h_{1j} s_2 - h_{2j} s_1 + h_{3j} s_4 \right|^2 \\
& + \left| y_{3j} + h_{1j} s_3 - h_{2j} s_4 - h_{3j} s_1 \right|^2 + \left| y_{4j} + h_{1j} s_4 + h_{2j} s_3 - h_{3j} s_2 \right|^2 \\
& + \left| y_{5j} - h_{1j} s_1^* - h_{2j} s_2^* - h_{3j} s_3^* \right|^2 + \left| y_{6j} + h_{1j} s_2^* - h_{2j} s_1^* + h_{3j} s_4^* \right|^2 \\
& + \left| y_{7j} + h_{1j} s_3^* - h_{2j} s_4^* - h_{3j} s_1^* \right|^2 + \left| y_{8j} + h_{1j} s_4^* + h_{2j} s_3^* - h_{3j} s_2^* \right|^2 \Big]
\end{aligned} \qquad (11.37)
$$

over all possible values of $s_1, s_2, s_3, s_4$. Note that, due to the quasi-static nature of the channel, the path gains are constant over the transmission. The minimizing values are the receiver estimates of $s_1, s_2, s_3, s_4$, respectively. Expanding the above metric and deleting the terms that are independent of the code word, we observe that the above minimization is equivalent to minimizing

$$
\begin{aligned}
-\sum_{j=1}^{m} \Big[ & y_{1j} h_{1j}^* s_1^* + y_{1j} h_{2j}^* s_2^* + y_{1j} h_{3j}^* s_3^* - y_{2j} h_{1j}^* s_2^* + y_{2j} h_{2j}^* s_1^* - y_{2j} h_{3j}^* s_4^* \\
& - y_{3j} h_{1j}^* s_3^* + y_{3j} h_{2j}^* s_4^* + y_{3j} h_{3j}^* s_1^* - y_{4j} h_{1j}^* s_4^* - y_{4j} h_{2j}^* s_3^* + y_{4j} h_{3j}^* s_2^* \\
& + y_{5j}^* h_{1j} s_1 + y_{5j}^* h_{2j} s_2 + y_{5j}^* h_{3j} s_3 - y_{6j}^* h_{1j} s_2 + y_{6j}^* h_{2j} s_1 - y_{6j}^* h_{3j} s_4 \\
& - y_{7j}^* h_{1j} s_3 + y_{7j}^* h_{2j} s_4 + y_{7j}^* h_{3j} s_1 - y_{8j}^* h_{1j} s_4 - y_{8j}^* h_{2j} s_3 + y_{8j}^* h_{3j} s_2 \Big] \\
& + \left( |s_1|^2 + |s_2|^2 + |s_3|^2 + |s_4|^2 \right) \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2
\end{aligned} \qquad (11.38)
$$

The above metric decomposes into four parts; the part that is a function of s_1 is

$$
-\sum_{j=1}^{m} \left[ y_{1j} h_{1j}^* s_1^* + y_{2j} h_{2j}^* s_1^* + y_{3j} h_{3j}^* s_1^* + y_{5j}^* h_{1j} s_1 + y_{6j}^* h_{2j} s_1 + y_{7j}^* h_{3j} s_1 \right] + |s_1|^2 \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2 \qquad (11.39)
$$

The part that is a function of s_2 is

$$
-\sum_{j=1}^{m} \left[ y_{1j} h_{2j}^* s_2^* - y_{2j} h_{1j}^* s_2^* + y_{4j} h_{3j}^* s_2^* + y_{5j}^* h_{2j} s_2 - y_{6j}^* h_{1j} s_2 + y_{8j}^* h_{3j} s_2 \right] + |s_2|^2 \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2 \qquad (11.40)
$$

The part that is a function of s_3 is

$$
-\sum_{j=1}^{m} \left[ y_{1j} h_{3j}^* s_3^* - y_{3j} h_{1j}^* s_3^* - y_{4j} h_{2j}^* s_3^* + y_{5j}^* h_{3j} s_3 - y_{7j}^* h_{1j} s_3 - y_{8j}^* h_{2j} s_3 \right] + |s_3|^2 \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2 \qquad (11.41)
$$

The part that is a function of s_4 is

$$
-\sum_{j=1}^{m} \left[ -y_{2j} h_{3j}^* s_4^* + y_{3j} h_{2j}^* s_4^* - y_{4j} h_{1j}^* s_4^* - y_{6j}^* h_{3j} s_4 + y_{7j}^* h_{2j} s_4 - y_{8j}^* h_{1j} s_4 \right] + |s_4|^2 \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2 \qquad (11.42)
$$

Thus, the minimization of Eq. (11.38) is equivalent to minimizing these four parts separately. This, in turn, is equivalent to minimizing the following decision metrics. The decision metric for detecting s_1 is

$$
\left| \sum_{j=1}^{m} \left( y_{1j} h_{1j}^* + y_{2j} h_{2j}^* + y_{3j} h_{3j}^* + y_{5j}^* h_{1j} + y_{6j}^* h_{2j} + y_{7j}^* h_{3j} \right) - s_1 \right|^2 + \left( -1 + 2 \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2 \right) |s_1|^2 \qquad (11.43)
$$

The decision metric for detecting s_2 is

$$
\left| \sum_{j=1}^{m} \left( y_{1j} h_{2j}^* - y_{2j} h_{1j}^* + y_{4j} h_{3j}^* + y_{5j}^* h_{2j} - y_{6j}^* h_{1j} + y_{8j}^* h_{3j} \right) - s_2 \right|^2 + \left( -1 + 2 \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2 \right) |s_2|^2 \qquad (11.44)
$$

The decision metric for detecting s_3 is

$$
\left| \sum_{j=1}^{m} \left( y_{1j} h_{3j}^* - y_{3j} h_{1j}^* - y_{4j} h_{2j}^* + y_{5j}^* h_{3j} - y_{7j}^* h_{1j} - y_{8j}^* h_{2j} \right) - s_3 \right|^2 + \left( -1 + 2 \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2 \right) |s_3|^2 \qquad (11.45)
$$

The decision metric for detecting s_4 is

$$
\left| \sum_{j=1}^{m} \left( -y_{2j} h_{3j}^* + y_{3j} h_{2j}^* - y_{4j} h_{1j}^* - y_{6j}^* h_{3j} + y_{7j}^* h_{2j} - y_{8j}^* h_{1j} \right) - s_4 \right|^2 + \left( -1 + 2 \sum_{j=1}^{m} \sum_{i=1}^{3} |h_{ij}|^2 \right) |s_4|^2 \qquad (11.46)
$$

11.4.2 Simulation of BER Performance of STBCs Using MATLAB

The following MATLAB Program 11.2 and the MATLAB function Programs 11.3, 11.4, 11.5, and 11.6 simulate the BER performance of QPSK and 16-QAM for STBC (3 × 4), Alamouti (2 × 2), and Alamouti (2 × 1).

Program 11.2 "STBC simulation.m" for BER performance comparison of STBC (3 × 4), Alamouti (2 × 2), and Alamouti (2 × 1) in a Rayleigh fading channel
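The listings are not reproduced in this copy; as an indication of their structure, a compact Alamouti (2 × 1) QPSK simulation in the spirit of Programs 11.2/11.3 is sketched below. The frame count and variable names are illustrative assumptions, and each antenna radiates half the total power:

% Sketch: Alamouti (2x1) QPSK over Rayleigh fading (illustrative, not the
% book's listing)
Nfr = 1e5; EbN0dB = 0:2:20; ber = zeros(size(EbN0dB));
for k = 1:numel(EbN0dB)
    N0 = 10^(-EbN0dB(k)/10); nerr = 0;
    for n = 1:Nfr
        b = randi([0 1],2,2);                         % 2 bits per QPSK symbol
        s = ((1-2*b(1,:)) + 1j*(1-2*b(2,:)))/sqrt(2); % s1, s2
        h = (randn(1,2)+1j*randn(1,2))/sqrt(2);       % h1, h2 (quasi-static)
        g = sqrt(N0/2)*(randn(1,2)+1j*randn(1,2));
        y1 =  (h(1)*s(1) + h(2)*s(2))/sqrt(2) + g(1);              % Eq. (11.15)
        y2 = (-h(1)*conj(s(2)) + h(2)*conj(s(1)))/sqrt(2) + g(2);
        z = [conj(h(1))*y1 + h(2)*conj(y2), ...                    % Eq. (11.16)
             conj(h(2))*y1 - h(1)*conj(y2)];                       % Eq. (11.17)
        bh = [real(z) < 0; imag(z) < 0];              % QPSK slicing
        nerr = nerr + sum(bh(:) ~= b(:));
    end
    ber(k) = nerr/(4*Nfr);
end
semilogy(EbN0dB, ber); xlabel('Eb/No (dB)'); ylabel('BER');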

Program 11.3 MATLAB function for stbc2by1

Program 11.4 MATLAB function for stbc2by2



Program 11.5 MATLAB function for stbc3by4



Program 11.6 MATLAB function for BPSK, QPSK and 16-QAM mapping

The BER performance obtained by using the above programs for QPSK and 16-QAM with STBC (3 × 4), Alamouti (2 × 2), and Alamouti (2 × 1) is shown in Figs. 11.6 and 11.7, respectively. From Figs. 11.6 and 11.7, it can be seen that the BER performance of STBC (3 × 4) is better than that of Alamouti (2 × 2) and Alamouti (2 × 1).

11.5 Space–Time Trellis Coding

In contrast to STBCs, STTCs provide both coding gain and diversity gain and have a better bit error rate performance. However, STTCs are more complex than STBCs to encode and decode.
In [2], Tarokh et al. derived the design criteria for STTCs over slow, frequency-non-selective fading channels. The design criteria were shown to be determined by the distance matrices constructed from pairs of distinct code words. The minimum rank of the distance matrices was used to determine the diversity gain, and the minimum distance of the distance matrices was used to determine the coding gain [3]. The system model for STTC modulation is shown in Fig. 11.8.

Fig. 11.6 BER performance for QPSK with STBC (3 × 4), Alamouti (2 × 2), and Alamouti (2 × 1)

11.5.1 Space–Time Trellis Encoder


Let

$$
I_t = \begin{bmatrix} I_t^1 \\ I_t^2 \\ \vdots \\ I_t^m \end{bmatrix}
$$

denote the input data symbol of $m = \log_2 M$ bits, which is input to the encoder at time $t = 0, 1, 2, \ldots$; then, a sequence of input data symbols is represented as

$$
I = \begin{bmatrix}
I_0^1 & I_1^1 & \cdots & I_t^1 & \cdots \\
I_0^2 & I_1^2 & \cdots & I_t^2 & \cdots \\
\vdots & \vdots & & \vdots & \\
I_0^m & I_1^m & \cdots & I_t^m & \cdots
\end{bmatrix} \qquad (11.47)
$$

Fig. 11.7 BER performance for 16-QAM with STBC (3 × 4), Alamouti (2 × 2), and Alamouti (2 × 1)

Fig. 11.8 Space–time trellis code system model



The STTC encoder can be considered as a convolutional encoder with a memory of $v_k$ delay units in the kth branch. Let $\{v_k\}_{k=1}^{m}$ denote the memory sizes of the branches, calculated as

$$
v_k = \left\lfloor \frac{v + k - 1}{\log_2 M} \right\rfloor \qquad (11.48)
$$

where $\lfloor x \rfloor$ denotes the largest integer smaller than or equal to x, and v is the size of the total required memory for the ST trellis code, that is,

$$
v = \sum_{k=1}^{m} v_k \qquad (11.49)
$$

Then, the output of the STTC encoder is specified by the following generator polynomials:

$$
\begin{aligned}
a^1 &= \left[ \left( a_{0,1}^1, a_{0,2}^1, \ldots, a_{0,N_T}^1 \right), \left( a_{1,1}^1, a_{1,2}^1, \ldots, a_{1,N_T}^1 \right), \ldots, \left( a_{v_1,1}^1, a_{v_1,2}^1, \ldots, a_{v_1,N_T}^1 \right) \right] \\
a^2 &= \left[ \left( a_{0,1}^2, a_{0,2}^2, \ldots, a_{0,N_T}^2 \right), \left( a_{1,1}^2, a_{1,2}^2, \ldots, a_{1,N_T}^2 \right), \ldots, \left( a_{v_2,1}^2, a_{v_2,2}^2, \ldots, a_{v_2,N_T}^2 \right) \right] \\
&\;\;\vdots \\
a^m &= \left[ \left( a_{0,1}^m, a_{0,2}^m, \ldots, a_{0,N_T}^m \right), \left( a_{1,1}^m, a_{1,2}^m, \ldots, a_{1,N_T}^m \right), \ldots, \left( a_{v_m,1}^m, a_{v_m,2}^m, \ldots, a_{v_m,N_T}^m \right) \right]
\end{aligned} \qquad (11.50)
$$

where $a_{j,i}^k$ denotes M-PSK symbol coefficients, $k = 1, 2, \ldots, m;\ j = 0, 1, \ldots, v_k;\ i = 1, 2, \ldots, N_T$.
Let $x_t^i$ denote the output of the STTC encoder for the ith transmit antenna at time t, $i = 1, 2, \ldots, N_T$, given by

$$
x_t^i = \sum_{k=1}^{m} \sum_{j=0}^{v_k} a_{j,i}^k\, I_{t-j}^k \mod M \qquad (11.51)
$$

Space–time trellis-encoded M-PSK symbols are now expressed as

$$
X = [X_0\; X_1 \ldots X_t \ldots] = \begin{bmatrix}
x_0^1 & x_1^1 & \cdots & x_t^1 & \cdots \\
x_0^2 & x_1^2 & \cdots & x_t^2 & \cdots \\
\vdots & \vdots & & \vdots & \\
x_0^{N_T} & x_1^{N_T} & \cdots & x_t^{N_T} & \cdots
\end{bmatrix} \qquad (11.52)
$$

where $X_t = [x_t^1\; x_t^2 \ldots x_t^{N_T}]^T$ is the output of the encoder, composed of $N_T$ M-PSK symbols, $t = 0, 1, 2, \ldots$. Figure 11.9 shows an example of the STTC encoder for $N_T = 2$, m = 3, and v = 3.
Some of the coefficients for 4-PSK and 8-PSK STTC codes [8] are summarized in Tables 11.1 and 11.2, respectively.

The Viterbi algorithm can be used for decoding space–time trellis-coded systems. In the Viterbi algorithm, the branch metric is given by the squared Euclidean distance of Eq. (11.53) below.

Fig. 11.9 8-state, 8-PSK encoder structure

Table 11.1 Coefficient pairs for 4-PSK, 4-, 8-, and 16-state STTC

v | (a¹₀,₁, a¹₀,₂) | (a¹₁,₁, a¹₁,₂) | (a¹₂,₁, a¹₂,₂) | (a²₀,₁, a²₀,₂) | (a²₁,₁, a²₁,₂) | (a²₂,₁, a²₂,₂) | det(v) | tr(v)
2 | (0, 2) | (2, 0) | – | (0, 1) | (1, 0) | – | 4 | 4
3 | (0, 2) | (2, 0) | – | (0, 1) | (1, 0) | (2, 2) | 12 | 8
4 | (0, 2) | (2, 0) | (0, 2) | (0, 1) | (1, 2) | (2, 0) | 12 | 8

Table 11.2 Coefficient pairs for 8-PSK, 8-state STTC

v | (a¹₀,₁, a¹₀,₂) | (a¹₁,₁, a¹₁,₂) | (a²₀,₁, a²₀,₂) | (a²₁,₁, a²₁,₂) | (a³₀,₁, a³₀,₂) | (a³₁,₁, a³₁,₂) | det(v) | tr(v)
3 | (0, 4) | (4, 0) | (0, 2) | (2, 0) | (0, 1) | (5, 0) | 2 | 4

$$
\sum_{t=1}^{T} \sum_{j=1}^{N_R} \left| y_t^j - \sum_{i=1}^{N_T} h_{j,i}\, x_t^i \right|^2 \qquad (11.53)
$$

where $y_t^j$ is the received signal at the jth receive antenna during the tth symbol period and $h_{j,i}$ is the channel gain between the ith transmit antenna and the jth receive antenna. Using the branch metric in Eq. (11.53), the path with the minimum accumulated Euclidean distance is selected as the detected sequence of transmitted symbols.

11.5.1.1 4-State QPSK Space–Time Trellis Encoder

STTCs can be represented and analyzed in their trellis form or by their generator matrix G. For example, consider the 4-PSK signal constellation shown in Fig. 11.10a, where the signal points are labeled 0, 1, 2, and 3. The 4-state trellis structure is shown in Fig. 11.10b for a rate of 2 b/s/Hz.

Fig. 11.10 a 4-PSK signal constellation, b 4-state, 4-PSK trellis diagram

The input signal can take any value from the signal constellation (in this case 0, 1, 2, or 3); the inputs are shown on the trellis diagram on the transition branches. In general, for each state, the first transition branch to state 0 results from input 0, the

second transition branch to state 1 results from input 1, and so on. The output
depends on the input and on the current state. The states are labeled on the right.
The labels on the left of the trellis represent the possible outputs from that state. The
leftmost output is assumed to be the output for the first trellis branch for that
particular state, and the second leftmost label is assumed to be the output for the
second trellis branch for the same state, and so on. These assumptions were verified
to be correct and can be manually traced through the encoder structure.
It was proved in [2] that the above code provides a diversity gain of 2 (assuming
one receive antenna), and has a minimum determinant of 2 [4].
The encoder structure for the 4-state (v = 2) trellis, QPSK scheme with two
transmit antennas is shown in Fig. 11.11.
At time t, two binary inputs $I_t^1$ and $I_t^2$ are fed into the branches of the encoder, with $I_t^1$ being the MSB. The memory orders of the upper and lower branches are $V_1$ and $V_2$, respectively, where $V = V_1 + V_2$, and hence the number of states is $2^V$. $V_i$ is calculated as

$$
V_i = \left\lfloor \frac{V + i - 1}{2} \right\rfloor, \quad i = 1, 2 \qquad (11.54)
$$

where $\lfloor X \rfloor$ denotes the largest integer smaller than or equal to X. For each branch, the output is the sum of the current input scaled by a coefficient and the previous input scaled by another coefficient. The coefficient pairs (0, 2), (2, 0) and (0, 1), (1, 0) are applied to $I_t^1$ and $I_t^2$, respectively.

Fig. 11.11 4-state, 4-PSK encoder structure

By using Eq. (11.51), we can obtain the output values as follows:

$$
x_t^1 = \left( 2 I_{t-1}^1 + I_{t-1}^2 \right) \bmod 4 \qquad (11.55)
$$

$$
x_t^2 = \left( 2 I_t^1 + I_t^2 \right) \bmod 4 \qquad (11.56)
$$

$x_t^1$ and $x_t^2$ are transmitted simultaneously on the first and second antennas, respectively. From Eqs. (11.55) and (11.56), it can be seen that $x_t^1 = x_{t-1}^2$; that is, the signal transmitted from the first antenna is a delayed version of the signal transmitted from the second antenna. Note that the output $x_t^2$ at time t becomes the encoder state at time (t + 1) in this particular example.
Example 11.1 Consider the STTC encoder shown in Fig. 11.11 and determine the trellis-encoded symbol stream if the two input bit sequences are

$$
\begin{bmatrix} I_t^1 \\ I_t^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \end{bmatrix}
$$

Figure 11.11 shows the structure of the STTC encoder for this example. The encoder state at time t is $(I_{t-1}^1 I_{t-1}^2)$, or $2 I_{t-1}^1 + I_{t-1}^2$. The output for the ith transmit antenna at time t is calculated as

$$
x_t^1 = \left( 2 I_{t-1}^1 + I_{t-1}^2 \right) \bmod 4
$$

and

$$
x_t^2 = \left( 2 I_t^1 + I_t^2 \right) \bmod 4
$$

$$
Y = \begin{bmatrix} x_t^1 \\ x_t^2 \end{bmatrix} = \begin{bmatrix} 0 & 2 & 0 & 3 & 1 \\ 2 & 0 & 3 & 1 & 0 \end{bmatrix}
$$

Figure 11.12 shows the corresponding trellis diagram, in which the branch labels indicate the two output symbols $x_t^1$ and $x_t^2$.
At time t = 1, we have $x_t^1 = 0$ and $x_t^2 = 2$. Therefore, 1 and −1 are transmitted from the first and second antennas, respectively.
At time t = 2, we have $x_t^1 = 2$ and $x_t^2 = 0$. Therefore, −1 and 1 are transmitted from the first and second antennas, respectively.
At time t = 3, we have $x_t^1 = 0$ and $x_t^2 = 3$. Therefore, 1 and −j are transmitted from the first and second antennas, respectively.

Fig. 11.12 4-state, 4-PSK encoder's output for Example 11.1

At time t = 4, we have $x_t^1 = 3$ and $x_t^2 = 1$. Therefore, −j and j are transmitted from the first and second antennas, respectively.
At time t = 5, we have $x_t^1 = 1$ and $x_t^2 = 0$. Therefore, j and 1 are transmitted from the first and second antennas, respectively.
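The hand computation above is easy to check with a few lines of MATLAB (an illustrative trace, not one of the book's listings):

% Trace the 4-state QPSK STTC encoder of Example 11.1
I  = [1 0 1 0 0; 0 0 1 1 0];            % input bit sequences It1 (MSB), It2
st = 0;                                  % encoder state 2*I(1,t-1)+I(2,t-1)
X  = zeros(2, size(I,2));
for t = 1:size(I,2)
    X(1,t) = st;                         % Eq. (11.55): delayed symbol
    X(2,t) = mod(2*I(1,t) + I(2,t), 4);  % Eq. (11.56)
    st = X(2,t);                         % next state
end
disp(X)                                  % [0 2 0 3 1; 2 0 3 1 0]
tx = exp(1j*pi/2*X);                     % 4-PSK mapping 0->1, 1->j, 2->-1, 3->-j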

11.5.1.2 8-State 8-PSK Space–Time Trellis Encoder

The 8-PSK signal constellation and the 8-state trellis diagram are shown in Figs. 11.13 and 11.14 for a rate of 3 b/s/Hz.

Fig. 11.13 8-PSK signal constellation

Fig. 11.14 8-state 8-PSK trellis diagram

11.5.2 Simulation of BER Performance of 4-State QPSK STTC Using MATLAB

The following MATLAB Program 11.7 and the MATLAB function Programs 11.8 through 11.15 simulate the BER performance of the 4-state QPSK STTC.

Program 11.7 MATLAB program for space–time trellis code (STTC) for 4-state QPSK

Program 11.8 MATLAB function for qpsksttc



Program 11.9 MATLAB function for sttcenc



Program 11.10 MATLAB function for symbolmap

Program 11.11 MATLAB function for MaxLike



Program 11.12 MATLAB function for viterbi



Program 11.13 MATLAB function for gen2trellis



Program 11.14 MATLAB function for getbits

Program 11.15 MATLAB function for bit2num

The BER performance obtained by using the above programs for the 4-state QPSK STTC is shown in Fig. 11.15. From Fig. 11.15, it is observed that the STTC with four receive antennas outperforms the STTC with one and two receive antennas.

11.6 MIMO-OFDM Implementation

A MIMO-OFDM system is shown in Fig. 11.16, where OFDM utilizes N_T transmit antennas, N_R receive antennas, and N_c subcarriers per antenna. MIMO-OFDM can be implemented as space–time-coded OFDM (ST-OFDM), space–frequency-coded OFDM (SF-OFDM), or space–time–frequency-coded OFDM (STF-OFDM). Let $x_n^l(i)$ be the data symbol transmitted on the ith subcarrier (frequency bin) from the lth transmit antenna during the nth OFDM symbol interval. The difference among these coded systems lies in how the $x_n^l(i)$ are generated from the information symbols $S_n$ [9].

Fig. 11.15 BER performance of 4-state QPSK STTC

Fig. 11.16 A MIMO-OFDM system



11.6.1 Space–Time-Coded OFDM

The ST coding for a MIMO-OFDM system with two transmit antennas is illustrated in Fig. 11.17. Two information symbols, $s_1$ and $-s_2^*$, are sent through subchannel k of antenna 1 in OFDM blocks n and n + 1, respectively. Meanwhile, $s_2$ and $s_1^*$ are sent through subchannel k of antenna 2 in OFDM blocks n and n + 1, respectively.

Fig. 11.17 ST coding

11.6.2 Space–Frequency-Coded OFDM

In space–time-coded OFDM, the frequency diversity and the correlation among different subcarriers are ignored. The strategy of coding across antennas and different subcarriers of OFDM is called SF-coded OFDM [10]. The schematic diagram of SF-coded OFDM is shown in Fig. 11.18. The STBC encoder generates $N_c \times N_T$ symbols for each OFDM block (time slot). One data burst therefore consists of $N_c$ vectors of size $N_T \times 1$, or equivalently one spatial OFDM symbol. The channel is assumed to be constant over at least one OFDM symbol. The interleaver transmits the (l, n) symbol on the lth subcarrier of the nth antenna [11].
The SF coding for two transmit antennas can be realized in a straightforward way by spreading the Alamouti code directly over two subchannels in one OFDM block. An example of SF coding for two transmit antennas is shown in Fig. 11.19. The two symbols $s_1$ and $s_2$ are sent from subchannels k and l of the same OFDM block n at antenna 1, respectively, where k and l denote the indices of two separated subchannels. Meanwhile, $-s_2^*$ and $s_1^*$ are sent from subchannels k and l of the same OFDM block n at antenna 2, respectively [12].

11.6.3 Space–Time–Frequency-Coded OFDM

In STF coding, each $x_n^l(i)$ is a point in 3D, as shown in Fig. 11.20; an STF code word can be defined [9] as the collection of transmitted symbols within the

Fig. 11.18 Block diagram of SFBC-coded OFDM

Fig. 11.19 SF coding

parallelepiped spanned by $N_T$ transmit antennas, $N_x$ OFDM symbol intervals, and $N_c$ subcarriers. Thus, one STF code word contains $N_T N_x N_c$ transmitted symbols

$$
x_n^l(i), \quad l = 1, \ldots, N_T;\ n = 0, \ldots, N_x - 1;\ i = 0, \ldots, N_c - 1
$$

Fig. 11.20 STF-coded OFDM

11.7 Problems

1. Consider Alamouti STBC with 2 transmit antennas. If the input bit stream is
11011110001001, determine the transmitted symbols from each antenna for
each symbol interval with (i) QPSK modulation and (ii) 16-QAM modulation.
2. A code matrix for STBC is given by

$$\begin{bmatrix} S_1 & S_2 & S_3 & S_4 \\ -S_2 & S_1 & -S_4 & S_3 \\ -S_3 & S_4 & S_1 & -S_2 \\ -S_4 & -S_3 & S_2 & S_1 \end{bmatrix}$$

(i) Check the orthogonality of the code.
(ii) Find the diversity order achieved by this code.
3. Consider a MIMO system with AWGN employing the Alamouti STBC with two transmit antennas and one receive antenna. Determine the outage probabilities for the system:
(i) When the channel is known at the receiver
(ii) When the channel is known at the transmitter

4. Consider a 4-state QPSK STTC system. Determine the trellis-encoded symbol stream if the two input bit sequences are

$$\begin{bmatrix} I_t^1 \\ I_t^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 & \ldots \\ 0 & 1 & 1 & 0 & 1 & \ldots \end{bmatrix}$$

5. Consider a 4-state QPSK STTC system. Determine the trellis-encoded symbol stream if the two input bit sequences are

$$\begin{bmatrix} I_t^1 \\ I_t^2 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 0 & 1 & \ldots \\ 0 & 0 & 1 & 1 & 0 & \ldots \end{bmatrix}$$

6. Consider an STTC 4-PSK system where the transmitted code word is $C = 220313$ and a possible erroneous code word is $c = 330122$. Determine the diversity gain of the system.
7. For the same data as in Problem 11.5, determine the coding gain.

11.8 MATLAB Exercises

1. Write a MATLAB program to simulate the BER performance of the 2-transmit, 1-receive antenna Alamouti scheme using MMSE detection.
2. Write a MATLAB program to simulate the BER performance of the 2-transmit, 2-receive antenna Alamouti scheme using MMSE detection.
3. Write a MATLAB program to simulate the BER performance of the MRC diversity technique with four receive antennas and compare with the result of Exercise 2.
4. Write a MATLAB program to simulate the performance of an 8-state QPSK STTC.
5. Write a MATLAB program to simulate the BER performance of STBC-coded OFDM.

References

1. Liang, X.-B.: Orthogonal designs with maximum rates. IEEE Trans. Inf. Theor. 49(10), 2468–
2503 (2003)
2. Tarokh, V., Seshadri, N., Calderbank, A.R.: Space-time codes for high data rate wireless
communications: performance criterion and code construction. IEEE Trans. Inf. Theor. 44(2),
744–765 (1998)
3. Tarokh, V., Naguib, A., Seshadri, N., Calderbank, A.R.: Space-time codes for high data rate
wireless communication: performance criteria in the presence of channel estimation errors,
mobility, and multiple paths. IEEE Trans. Commun. 47(2), 199–207 (1999)
4. Tarokh, V., Seshadri, N., Calderbank, A.R.: Space-time codes for wireless communication:
code construction. In: IEEE 47th Vehicular Technology Conference, vol. 2, pp. 637–641.
Phoenix, Arizona, 4–7 May 1997

5. Tarokh, V., Jafarkhani, H., Calderbank, A.: Space-time block coding for wireless
communications: performance results. IEEE J. Sel. Areas Commun. 17(3), 451–460 (1999)
6. Alamouti, S.M.: A simple transmit diversity technique for wireless communications. IEEE
J. Sel. Areas Commun. 16(8), 1451–1458 (1998)
7. Tarokh, V., Jafarkhani, H., Calderbank, A.: Space-time block codes from orthogonal designs.
IEEE Trans. Inf. Theor. 45(5), 1456–1467 (1999)
8. Chen, Z., Yuan, J., Vucetic, B.: An improved space-time trellis coded modulation scheme
on slow Rayleigh fading channels. In: IEEE ICC, pp. 1110–1116 (2001)
9. Liu, Z., Xin, Y., Giannakis, G.B.: Space-time-frequency coded OFDM over frequency-
selective fading channels. IEEE Trans. Signal Process. 50(10), 2465–2476 (2002)
10. Lee, K.F., Williams, D.B.: A space-frequency transmitter diversity technique for OFDM
systems. In: IEEE Global Communications Conference, vol. 3, pp. 1473–1477, Nov. 27–Dec.
1, 2000
11. Jafarkhani, H.: Space-Time Coding Theory and Practice. Cambridge University Press,
Cambridge (2005)
12. Zhang, W., Xia, X.-G., Ben Letaief, K.: Space-time/frequency coding for MIMO-OFDM in the next generation broadband wireless systems. IEEE Wirel. Commun. 14(3), 32–43 (2007)
