The process in which the characteristics of a signal (amplitude, shape, phase, frequency, etc.) undergo a change is known as signal processing.
Note − Any unwanted signal that interferes with the main signal is termed noise. So, noise is also a signal, but an unwanted one.
According to their representation and processing, signals can be classified into various categories, the details of which are discussed below.
Continuous Time Signal
This type of signal is continuous in both amplitude and time; it has a value at every instant of time. Sine and cosine functions are the best examples of continuous time signals.
The signal shown above is an example of a continuous time signal because we can obtain the value of the signal at each instant of time.
Discrete Time Signal
The signals that are defined only at discrete instants of time are known as discrete time signals; they are represented as a sequence of numbers. Although speech and video signals can be represented in both continuous and discrete time formats, under certain circumstances they are identical. The amplitude can also take only discrete values. A perfect example of this is a digital signal, whose amplitude and time are both discrete.
The figure above depicts the discrete amplitude characteristic of a discrete signal over a period of time. Mathematically, these types of signals can be expressed as
x = {x[n]},  −∞ < n < ∞
where n is an integer.
A signal that satisfies the condition δ(t) = lim_{ε→0} x(t), where x(t) is a pulse of unit area and width ε, is known as the unit impulse signal. This signal tends to infinity when t = 0 and tends to zero when t ≠ 0, such that the area under its curve is always equal to one. The delta function has zero amplitude everywhere except at t = 0.
A = ∫_{−∞}^{∞} δ(t) dt = ∫_{−∞}^{∞} lim_{ε→0} x(t) dt = lim_{ε→0} ∫_{−∞}^{∞} x(t) dt = 1
The weight or strength of the signal can be written as
y(t) = A δ(t)
The area of the weighted impulse signal is
∫_{−∞}^{∞} y(t) dt = ∫_{−∞}^{∞} A δ(t) dt = A ∫_{−∞}^{∞} δ(t) dt = A
which is called the weight of the impulse; it equals 1 for the unit (unweighted) impulse.
U(t) = 1 when t ≥ 0, and
U(t) = 0 when t < 0.
It has the property of showing a discontinuity at t = 0. At the point of discontinuity, the signal value is given by the average of the signal values just before and just after the discontinuity (in accordance with the Gibbs phenomenon).
If we add a step signal to another step signal that is time scaled, then the result will be unity. It is a power type signal and the value of its power is 0.5. Its RMS (root mean square) value is 0.707 and its average value is also 0.5.
Ramp Signal
Integration of the step signal results in a ramp signal. It is represented by r(t). The ramp signal also satisfies the condition r(t) = ∫_{−∞}^{t} U(τ) dτ = t U(t).
Parabolic Signal
Integration of the ramp signal leads to the parabolic signal. It is represented by p(t). The parabolic signal also satisfies the condition p(t) = ∫_{−∞}^{t} r(τ) dτ = (t²/2) U(t).
Signum Function
This function is represented as
sgn(t) = 1 for t > 0, and −1 for t < 0
It is a power type signal. Its power value and RMS (Root mean square) values, both are 1.
Average value of signum function is zero.
Sinc Function
It is a function of sine and is written as
sinc(t) = sin(πt)/(πt) = Sa(πt)
sinc(∞) = lim_{t→∞} sin(πt)/(πt) = 0
(sin(πt) only varies between −1 and +1, while the denominator grows without bound, so the ratio tends to zero.)
If sinc(t) = 0, then sin(πt) = 0
⇒ πt = nπ
⇒ t = n (n ≠ 0)
Sinusoidal Signal
A sinusoidal signal is continuous in nature. The general form of a sinusoidal signal is
x(t) = A sin(ωt + φ)
Here,
A = amplitude of the signal, ω = angular frequency, and φ = phase angle.
The tendency of this signal is to repeat itself after a certain period of time, and thus it is called a periodic signal. The time period of the signal is given as
T = 2π/ω
The diagrammatic view of sinusoidal signal is shown below.
Rectangular Function
A signal is said to be rectangular function type if it satisfies the following condition −
π(t/τ) = 1 for |t| ≤ τ/2, and 0 otherwise
Triangular Pulse Signal
A signal is said to be of triangular pulse type if it satisfies the following condition −
Δ(t/τ) = 1 − 2|t|/τ for |t| < τ/2, and 0 for |t| > τ/2
This signal is symmetrical about Y-axis. Hence, it is also termed as even signal.
δ(n) = 1 for n = 0, and 0 otherwise
U(n) = 1 for n ≥ 0, and 0 for n < 0
The figure above shows the graphical representation of a discrete step function.
r(n) = n for n ≥ 0, and 0 for n < 0
The figure given above shows the graphical representation of a discrete ramp signal.
Parabolic Function
Discrete unit parabolic function is denoted as p(n) and can be defined as;
p(n) = n²/2 for n ≥ 0, and 0 for n < 0
In terms of unit step function it can be written as;
p(n) = (n²/2) U(n)
The figure given above shows the graphical representation of a parabolic sequence.
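As an aside, these elementary discrete sequences are easy to generate numerically. The following is a small illustrative sketch (not part of the original tutorial), assuming NumPy and an index range of −5 … 5.

```python
import numpy as np

n = np.arange(-5, 6)                 # discrete time index

delta = (n == 0).astype(float)       # unit impulse: 1 at n = 0, else 0
u     = (n >= 0).astype(float)       # unit step:    1 for n >= 0
r     = n * u                        # unit ramp:    n for n >= 0
p     = (n**2 / 2) * u               # unit parabola: n^2/2 for n >= 0

for name, sig in [("delta", delta), ("u", u), ("r", r), ("p", p)]:
    print(name, sig)
```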
Sinusoidal Signal
All continuous-time sinusoidal signals are periodic, but discrete-time sinusoidal sequences may or may not be periodic, depending on the value of ω. For a discrete time sinusoid to be periodic, the angular frequency ω must be a rational multiple of 2π.
x(n) = A sin(ωn + φ)
Here A, ω and φ have their usual meanings and n is an integer. The time period of the discrete sinusoidal signal is given by
N = 2πm/ω
Where, N and m are integers.
A signal is said to be an even signal if it satisfies the condition x(−t) = x(t).
Time reversal of the signal does not imply any change on amplitude here. For example,
consider the triangular wave shown below.
The triangular signal is an even signal, since it is symmetrical about the Y-axis: it is its own mirror image about the Y-axis.
Odd Signal
A signal is said to be an odd signal if it satisfies the condition x(−t) = −x(t).
Here, both time reversal and amplitude reversal take place simultaneously.
In the figure above, we can see a step signal x(t). To test whether it is an odd signal or not,
first we do the time reversal i.e. x(-t) and the result is as shown in the figure. Then we reverse
the amplitude of the resultant signal i.e. –x(-t) and we get the result as shown in figure.
If we compare the first and the third waveform, we can see that they are same, i.e. x(t)= -x(-
t), which satisfies our criteria. Therefore, the above signal is an Odd signal.
Some important results related to even and odd signals are given below.
Some signals cannot be directly classified into even or odd type. These are represented as a
combination of both even and odd signal.
x(t) = xe(t) + xo(t)
where xe(t) represents the even part and xo(t) represents the odd part:
xe(t) = [x(t) + x(−t)]/2
and
xo(t) = [x(t) − x(−t)]/2
Example
Find the even and odd parts of the signal x(t) = t + t² + t³. Time reversal gives
x(−t) = −t + t² − t³
Now, according to the formula, the even part is
xe(t) = [x(t) + x(−t)]/2
= [(t + t² + t³) + (−t + t² − t³)]/2
= t²
Similarly, according to the formula, the odd part is
xo(t) = [x(t) − x(−t)]/2
= [(t + t² + t³) − (−t + t² − t³)]/2
= t + t³
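The decomposition can also be checked numerically. The sketch below is illustrative only: it samples x(t) = t + t² + t³ on a symmetric grid (so that x(−t) is simply the reversed array) and verifies that the even and odd parts come out as t² and t + t³.

```python
import numpy as np

t = np.linspace(-2, 2, 9)            # symmetric time grid so x(-t) is available
x = t + t**2 + t**3                  # x(t) = t + t^2 + t^3
x_rev = x[::-1]                      # on a symmetric grid, reversing gives x(-t)

xe = (x + x_rev) / 2                 # even part -> should equal t^2
xo = (x - x_rev) / 2                 # odd part  -> should equal t + t^3

print(np.allclose(xe, t**2))         # True
print(np.allclose(xo, t + t**3))     # True
```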
Periodic signal repeats itself after certain interval of time. We can show this in equation form
as −
x(t) = x(t ± nT)
where n is an integer (1, 2, 3, …) and T is the period.
Fundamental time period (FTP) is the smallest positive and fixed value of time for which
signal is periodic.
A triangular signal is shown in the figure above of amplitude A. Here, the signal is repeating
after every 1 sec. Therefore, we can say that the signal is periodic and its FTP is 1 sec.
Non-Periodic Signal
Simply, we can say that the signals which are not periodic are non-periodic in nature. Obviously, these signals will not repeat themselves after any interval of time.
A lossless capacitor is also a perfect example of Energy type signal because when it is
connected to a source it charges up to its optimum level and when the source is removed, it
dissipates that equal amount of energy through a load and makes its average power to zero.
For any finite signal x(t) the energy can be symbolized as E and is written as;
E = ∫_{−∞}^{+∞} x²(t) dt
Spectral density of energy type signals gives the amount of energy distributed at various
frequency levels.
Power type Signals
A signal is said to be a power type signal if and only if its normalized average power is finite and non-zero, i.e. 0 < P < ∞. Almost all periodic signals are power signals, and their average power is finite and non-zero:
P = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} x²(t) dt
The following summarizes the difference between Energy and Power signals.
Power signal − the normalized average power P = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} x²(t) dt is finite and non-zero.
Energy signal − the total energy E = ∫_{−∞}^{+∞} x²(t) dt is finite and non-zero.
Solved Examples
Example 1 − Find the power of the signal z(t) = 2cos(3πt + 30°) + 4sin(3πt + 30°).
Solution − The two component signals are orthogonal to each other because they have the same frequency and the same phase, one being a cosine and the other a sine. So, the total power is the sum of the individual powers.
Let z(t) = x(t) + y(t)
where x(t) = 2cos(3πt + 30°) and y(t) = 4sin(3πt + 30°).
Power of x(t) = 2²/2 = 2
Power of y(t) = 4²/2 = 8
Therefore, P(z) = P(x) + P(y) = 2 + 8 = 10 … Ans.
Example 2 − Find whether the signal x(t) = t² + j sin t is conjugate symmetric or not.
Solution − Here, the real part, t², is even, and the imaginary part, sin t, is odd, since sin(−t) = −sin t. As the real part is even and the imaginary part is odd, the signal is conjugate symmetric.
A discrete signal is said to be even if it satisfies the condition x(−n) = x(n).
Here, we can see that x(-1) = x(1), x(-2) = x(2) and x(-n) = x(n). Thus, it is an even signal.
Odd Signal
x(−n)=−x(n)
From the figure, we can see that x(1) = −x(−1), x(2) = −x(−2) and, in general, x(n) = −x(−n). Hence, it is an odd as well as an anti-symmetric signal.
A discrete time signal is periodic if it satisfies the condition x(n + N) = x(n).
Here, the signal x(n) repeats itself after every N samples. This can be best understood by considering a cosine signal −
x(n)=Acos(2πf0n+θ)
x(n + N) = A cos(2πf₀(n + N) + θ) = A cos(2πf₀n + 2πf₀N + θ)
For the signal to become periodic, following condition should be satisfied;
x(n+N)=x(n)
⇒Acos(2πf0n+2πf0N+θ)=Acos(2πf0n+θ)
i.e. 2πf₀N must be an integral multiple of 2π:
2πf₀N = 2πK
⇒ N = K/f₀
The energy of a discrete time signal is E = ∑_{n=−∞}^{+∞} |x(n)|², i.e. the magnitudes of the individual samples are squared and summed. Here, x(n) is an energy signal if its energy is finite, i.e. 0 < E < ∞.
Power Signal
Average power of a discrete signal is represented as P. Mathematically, this can be written as;
P = lim_{N→∞} [1/(2N+1)] ∑_{n=−N}^{+N} |x(n)|²
Here, power is finite i.e. 0<P<∞. However, there are some signals, which belong to neither
energy nor power type signal.
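As an illustration of these two definitions (not part of the original text), the sketch below evaluates the energy and an average-power estimate for a truncated decaying sequence x(n) = (0.9)ⁿ u(n) using NumPy.

```python
import numpy as np

n = np.arange(0, 200)
x = 0.9 ** n                               # a decaying (energy-type) sequence

E = np.sum(np.abs(x) ** 2)                 # E = sum |x(n)|^2, finite here
N = len(n) // 2
P = np.sum(np.abs(x) ** 2) / (2 * N + 1)   # P = 1/(2N+1) * sum |x(n)|^2 -> 0

print("Energy E ≈", E)                     # close to 1/(1 - 0.81) ≈ 5.263
print("Average power P ≈", P)              # tends to 0 as N grows
```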
Conjugate Signals
Signals which satisfy the condition x(t) = x*(−t) are called conjugate signals.
Let x(t)=a(t)+jb(t)
...eqn. 1
So, x(−t)=a(−t)+jb(−t)
And x∗(−t)=a(−t)−jb(−t)
...eqn. 2
By Condition, x(t)=x∗(−t)
If we compare both the derived equations 1 and 2, we can see that the real part is even,
whereas the imaginary part is odd. This is the condition for a signal to be a conjugate type.
Conjugate Anti-Symmetric Signals
Signals which satisfy the condition x(t) = −x*(−t) are called conjugate anti-symmetric signals.
Let x(t) = a(t) + jb(t)
...eqn. 1
So x(−t)=a(−t)+jb(−t)
And x∗(−t)=a(−t)−jb(−t)
−x∗(−t)=−a(−t)+jb(−t)
...eqn. 2
By Condition x(t)=−x∗(−t)
Now, again compare, both the equations just as we did for conjugate signals. Here, we will
find that the real part is odd and the imaginary part is even. This is the condition for a signal
to become conjugate anti-symmetric type.
Example
Any function can be divided into two parts. One part being Conjugate symmetry and other
part being conjugate anti-symmetric. So any signal x(t) can be written as
x(t)=xcs(t)+xcas(t)
where xcs(t) is the conjugate symmetric part and xcas(t) is the conjugate anti-symmetric part, given by
xcs(t) = [x(t) + x*(−t)]/2
and
xcas(t) = [x(t) − x*(−t)]/2
Half Wave Symmetric Signals
When a signal satisfies the condition x(t) = −x(t − T/2), it is called a half wave symmetric signal. Here, amplitude reversal and a time shift of the signal by half the period take place. For a half wave symmetric signal, the average value will be zero, but this is not the case when the situation is reversed.
Consider a signal x(t) as shown in figure A above. The first step is to time shift the signal to obtain x[t − (T/2)]; the new signal is shown in figure B. Next, we reverse the amplitude of this signal, i.e. form −x[t − (T/2)], as shown in figure C. Since this signal repeats itself after a half-period shift and amplitude reversal, it is a half wave symmetric signal.
Orthogonal Signal
Two signals x(t) and y(t) are said to be orthogonal if they satisfy the following two
conditions.
Condition 1 − ∫_{−∞}^{∞} x(t) y(t) dt = 0 (for non-periodic signals)
Condition 2 − (1/T) ∫_T x(t) y(t) dt = 0 (for periodic signals, taken over one period T)
The signals, which contain odd harmonics (3rd, 5th, 7th ...etc.) and have different frequencies,
are mutually orthogonal to each other.
In trigonometric type signals, sine functions and cosine functions are also orthogonal to each
other; provided, they have same frequency and are in same phase. In the same manner DC
(Direct current signals) and sinusoidal signals are also orthogonal to each other. If x(t) and
y(t) are two orthogonal signals and z(t) = x(t) + y(t), then the power and energy of z(t) are simply the sums of the individual powers and energies:
P(z) = P(x) + P(y)
E(z) = E(x) + E(y)
Example
Here, the signal comprises a DC component (3) and a sine function. So, by the above property, this signal is an orthogonal signal, and the two sub-signals in it are mutually orthogonal to each other.
Time Shifting
Time shifting of a signal means shifting it along the time axis. Mathematically, it can be represented as −
x(t) → y(t) = x(t + K)
This K value may be positive or negative. According to the sign of K, we have two types of shifting, named right shifting and left shifting.
Case 1 (K > 0)
When K is greater than zero, the shifting of the signal takes place towards "left" in the time
domain. Therefore, this type of shifting is known as Left Shifting of the signal.
Example
Case 2 (K < 0)
When K is less than zero the shifting of signal takes place towards right in the time domain.
Therefore, this type of shifting is known as Right shifting.
Example
Amplitude Shifting
Amplitude shifting means shifting of signal in the amplitude domain (around X-axis).
Mathematically, it can be represented as −
x(t)→x(t)+K
This K value may be positive or negative. Accordingly, we have two types of amplitude
shifting which are subsequently discussed below.
Case 1 (K > 0)
When K is greater than zero, the shifting of signal takes place towards up in the x-axis.
Therefore, this type of shifting is known as upward shifting.
Example
x(t) = 0 for t < 0, 1 for 0 ≤ t ≤ 2, and 0 for t > 2
Let us take K = +1, so the new signal can be written as
y(t) = x(t) + 1
So, y(t) can finally be written as
y(t) = 1 for t < 0, 2 for 0 ≤ t ≤ 2, and 1 for t > 2
Case 2 (K < 0)
When K is less than zero shifting of signal takes place towards downward in the X- axis.
Therefore, it is called downward shifting of the signal.
Example
x(t) = 0 for t < 0, 1 for 0 ≤ t ≤ 2, and 0 for t > 2
Let us take K = −1, so the new signal can be written as
y(t) = x(t) − 1
So, y(t) can finally be written as
y(t) = −1 for t < 0, 0 for 0 ≤ t ≤ 2, and −1 for t > 2
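A small numerical sketch of both shifting operations (illustrative only, assuming NumPy and the rectangular x(t) used in the examples above):

```python
import numpy as np

t = np.linspace(-2, 4, 601)
x = np.where((t >= 0) & (t <= 2), 1.0, 0.0)   # x(t): 1 on [0, 2], 0 elsewhere

# Amplitude shifting: y(t) = x(t) + K moves the curve up (K > 0) or down (K < 0)
y_up   = x + 1            # upward shift   (K = +1)
y_down = x - 1            # downward shift (K = -1)

# Time shifting: y(t) = x(t + K); K > 0 shifts the waveform to the left
K = 1.0
y_left = np.where((t + K >= 0) & (t + K <= 2), 1.0, 0.0)   # x(t + 1)

print(y_up[300], y_down[300], y_left[0])   # sample values at t = 1 and t = -2
```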
DSP - Operations on Signals Scaling
Scaling of a signal means, a constant is multiplied with the time or amplitude of the signal.
Time Scaling
If a constant is multiplied to the time axis then it is known as Time scaling. This can be
mathematically represented as;
x(t) → y(t) = x(αt) or x(t/α), where α ≠ 0.
So, with the y-axis remaining the same, the extent along the x-axis decreases or increases according to the magnitude of the constant. Accordingly, scaling can be divided into two categories, as discussed below.
Time Compression
Whenever α is greater than one, the signal's extent along the time axis gets divided by α, whereas the amplitude (Y-axis value) remains the same. This is known as Time Compression.
Example
Let us consider a signal x(t), which is shown as in figure below. Let us take the value of alpha
as 2. So, y(t) will be x(2t), which is illustrated in the given figure.
Clearly, we can see from the above figures that the amplitude on the y-axis remains the same, but the time extent on the x-axis reduces from 4 to 2. Therefore, it is a case of time compression.
Time Expansion
When the time variable is divided by the constant α, the extent of the signal along the X-axis gets multiplied by α, keeping the amplitude (Y-axis value) as it is. Therefore, this is called time expansion.
Example
Let us consider a square signal x(t) of magnitude 1. When we time scale it by a constant 3, such that x(t) → y(t) = x(t/3), the signal's duration gets stretched by 3 times, as shown in the figure below.
Amplitude Scaling
Multiplication of a constant with the amplitude of the signal causes amplitude scaling.
Depending upon the magnitude of the constant, it is either amplification (constant greater than 1) or attenuation (constant less than 1).
Let us consider a square wave signal x(t) = Π(t/4).
Suppose we define another function y(t) = 2Π(t/4). In this case, the value on the y-axis is doubled, keeping the time axis value as it is. This is illustrated in the figure given below.
Consider another square wave function defined as z(t) where z(t) = 0.5 Π(t/4). Here,
amplitude of the function z(t) will be half of that of x(t) i.e. time axis remaining same,
amplitude axis will be halved. This is illustrated by the figure given below.
DSP - Operations on Signals Reversal
Whenever the time in a signal gets multiplied by -1, the signal gets reversed. It produces its
mirror image about Y or X-axis. This is known as Reversal of the signal.
Reversal can be classified into two types based on the condition whether the time or the
amplitude of the signal is multiplied by -1.
Time Reversal
Whenever signal’s time is multiplied by -1, it is known as time reversal of the signal. In this
case, the signal produces its mirror image about Y-axis. Mathematically, this can be written
as;
x(t) → y(t) = x(−t)
This can be best understood by the following example.
In the above example, we can clearly see that the signal has been reversed about its Y-axis.
So, it is one kind of time scaling also, but here the scaling quantity is (-1) always.
Amplitude Reversal
Whenever the amplitude of a signal is multiplied by -1, then it is known as amplitude
reversal. In this case, the signal produces its mirror image about X-axis. Mathematically, this
can be written as;
x(t) → y(t) = −x(t)
Consider the following example. Amplitude reversal can be seen clearly.
DSP - Operations on Signals Differentiation
Two very important operations performed on the signals are Differentiation and Integration.
Differentiation
Differentiation of any signal x(t) means slope representation of that signal with respect to
time. Mathematically, it is represented as;
x(t) → y(t) = dx(t)/dt
In the case of OPAMP differentiation, this methodology is very helpful. We can easily
differentiate a signal graphically rather than using the formula. However, the condition is that
the signal must be either rectangular or triangular type, which happens in most cases.
The above table illustrates the condition of the signal after being differentiated. For example,
a ramp signal converts into a step signal after differentiation. Similarly, a unit step signal
becomes an impulse signal.
Example
Let us take the signal x(t) = 4[r(t) − r(t − 2)]. When this signal is plotted, it will look like the one on the left side of the figure given below. Now, our aim is to differentiate the given signal.
To start with, we will start differentiating the given equation. We know that the ramp signal
after differentiation gives unit step signal.
So our resulting signal y(t) can be written as;
y(t) = dx(t)/dt
= d{4[r(t) − r(t − 2)]}/dt
= 4[u(t) − u(t − 2)]
Now this signal is plotted finally, which is shown in the right hand side of the above figure.
Integration
Integration of a signal x(t) means accumulating its values up to time t. Mathematically, it is represented as
x(t) → y(t) = ∫_{−∞}^{t} x(τ) dτ
Here also, in most of the cases we can do mathematical integration and find the resulted
signal but direct integration in quick succession is possible for signals which are depicted in
rectangular format graphically. Like differentiation, here also, we will refer a table to get the
result quickly.
Example
Let us take the signal x(t) = u(t) − u(t − 3). It is shown in Fig-1 below. Clearly, we can see that it is a rectangular pulse built from step signals. Now we will integrate
it. Referring to the table, we know that integration of step signal yields ramp signal.
However, we will calculate it mathematically,
y(t) = ∫_{−∞}^{t} x(τ) dτ
= ∫_{−∞}^{t} [u(τ) − u(τ − 3)] dτ
= ∫_{−∞}^{t} u(τ) dτ − ∫_{−∞}^{t} u(τ − 3) dτ
= r(t) − r(t − 3)
The same is plotted as shown in Fig-2 below.
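Both worked examples can be reproduced numerically by approximating the derivative and the running integral of the sampled signals. This is only an illustrative sketch with NumPy; the signal definitions follow the two examples above.

```python
import numpy as np

dt = 0.001
t = np.arange(-1, 5, dt)

r = lambda tt: np.maximum(tt, 0.0)            # ramp r(t)
u = lambda tt: (tt >= 0).astype(float)        # step u(t)

x1 = 4 * (r(t) - r(t - 2))                    # signal of the differentiation example
y1 = np.gradient(x1, dt)                      # dx/dt -> 4[u(t) - u(t-2)]

x2 = u(t) - u(t - 3)                          # signal of the integration example
y2 = np.cumsum(x2) * dt                       # running integral -> r(t) - r(t-3)

print(round(y1[np.searchsorted(t, 1.0)], 2))  # ≈ 4 inside (0, 2)
print(round(y2[np.searchsorted(t, 2.0)], 2))  # ≈ 2, i.e. r(2) - r(-1) = 2
```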
Convolution
The convolution of two signals x1(t) and x2(t) in the time domain is defined as
y(t) = x1(t) ∗ x2(t)
= ∫_{−∞}^{∞} x1(p) · x2(t − p) dp
Example
Let us do the convolution of a step signal u(t) with its own kind.
y(t) = u(t) ∗ u(t)
= ∫_{−∞}^{∞} u(p) · u(−(p − t)) dp
Now this t can be greater than or less than zero, which are shown in below figures
So, with the above case, the result arises with following possibilities
y(t) = 0 if t < 0, and ∫₀ᵗ 1 dτ = t for t > 0
i.e. y(t) = r(t)
Properties of Convolution
Commutative
It states that order of convolution does not matter, which can be shown mathematically as
x1(t)∗x2(t)=x2(t)∗x1(t)
Associative
It states that order of convolution involving three signals, can be anything. Mathematically, it
can be shown as;
x1(t)∗[x2(t)∗x3(t)]=[x1(t)∗x2(t)]∗x3(t)
Distributive
Two signals can be added first, and then their convolution can be made to the third signal.
This is equivalent to convolution of two signals individually with the third signal and added
finally. Mathematically, this can be written as;
x1(t)∗[x2(t)+x3(t)]=[x1(t)∗x2(t)+x1(t)∗x3(t)]
Area
If a signal is the result of the convolution of two signals, then the area of that signal is the product of the areas of those individual signals. Mathematically, this can be written as
If y(t) = x1(t) ∗ x2(t), then ∫_{−∞}^{∞} y(t) dt = [∫_{−∞}^{∞} x1(t) dt] × [∫_{−∞}^{∞} x2(t) dt]
Scaling
If two signals are scaled to some unknown constant “a” and convolution is done then
resultant signal will also be convoluted to same constant “a” and will be divided by that
quantity as shown below.
If, x1(t)∗x2(t)=y(t)
Then, x1(at)∗x2(at)=y(at)a,a≠0
Delay
Suppose a signal y(t) is a result from the convolution of two signals x1(t) and x2(t). If the two
signals are delayed by time t1 and t2 respectively, then the resultant signal y(t) will be
delayed by (t1+t2). Mathematically, it can be written as −
If, x1(t)∗x2(t)=y(t)
Then, x1(t−t1)∗x2(t−t2)=y[t−(t1+t2)]
Solved Examples
Example 1 − Find the convolution of the signals u(t-1) and u(t-2).
Solution − Given signals are u(t-1) and u(t-2). Their convolution can be done as shown
below −
y(t) = u(t − 1) ∗ u(t − 2)
= ∫_{−∞}^{+∞} u(τ − 1) · u(t − τ − 2) dτ
Using the result u(t) ∗ u(t) = r(t) together with the delay property of convolution,
= r(t − (1 + 2))
= r(t − 3)
Example 2 − Find the convolution of two signals given by
x1(n)={3,−2,2}
x2(n) = 2 for 0 ≤ n ≤ 4, and 0 elsewhere
Solution −
x2(n) can be written out as x2(n) = {2, 2, 2, 2, 2}, with the origin at the first sample.
The Z-transform of the first signal is X1(Z) = 3 − 2Z⁻¹ + 2Z⁻². Similarly, X2(Z) = 2 + 2Z⁻¹ + 2Z⁻² + 2Z⁻³ + 2Z⁻⁴.
Resultant signal,
X(Z)=X1(Z)X2(z)
= {3 − 2Z⁻¹ + 2Z⁻²} × {2 + 2Z⁻¹ + 2Z⁻² + 2Z⁻³ + 2Z⁻⁴}
= 6 + 2Z⁻¹ + 6Z⁻² + 6Z⁻³ + 6Z⁻⁴ + 0·Z⁻⁵ + 4Z⁻⁶
Taking inverse Z-transformation of the above, we will get the resultant signal as
x(n) = {6, 2, 6, 6, 6, 0, 4}, with the origin at the first sample.
Example 3 − Find the convolution of the two signals given by
x(n) = {2, 1, 0, 1}
h(n) = {1, 2, 3, 1}
Solution −
X(Z) = 2 + Z⁻¹ + Z⁻³
and H(Z) = 1 + 2Z⁻¹ + 3Z⁻² + Z⁻³
Thus, Y(Z) = X(Z) × H(Z)
={2+2Z−1+2Z−3}×{1+2Z−1+3Z−2+Z−3}
={2+5Z−1+8Z−2+6Z−3+3Z−4+3Z−5+Z−6}
Taking the inverse Z-transformation, the resultant signal can be written as;
y(n) = {2, 5, 8, 6, 3, 3, 1}, with the origin at the first sample.
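The polynomial multiplication of X(Z) and H(Z) is exactly a discrete linear convolution, so the result above can be cross-checked with NumPy (illustrative sketch):

```python
import numpy as np

x = [2, 1, 0, 1]          # x(n), origin at the first sample
h = [1, 2, 3, 1]          # h(n), origin at the first sample

y = np.convolve(x, h)     # same as multiplying the Z-domain polynomials
print(y)                  # [2 5 8 6 3 3 1]
```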
Digital Signal Processing - Static Systems
Some systems have feedback and some do not. In systems without feedback, the output depends only upon the present value of the input; neither past values nor future values of the input are involved. These types of systems are known as static systems.
Since these systems do not have any past record, so they do not have any memory also.
Therefore, we say all static systems are memory-less systems. Let us take an example to
understand this concept much better.
Example
Let us verify whether the following systems are static systems or not.
y(t)=x(t)+x(t−1)
y(t)=x(2t)
y(t) = sin[x(t)]
a) y(t)=x(t)+x(t−1)
Here, x(t) is the present value. It has no relation with the past values of the time. So, it is a
static system. However, in case of x(t-1), if we put t = 0, it will reduce to x(-1) which is a past
value dependent. So, it is not static. Therefore here y(t) is not a static system.
b) y(t)=x(2t)
If we substitute t = 2, the result will be y(t) = x(4). Again, it is future value dependent. So, it
is also not a static system.
c) y(t) = sin[x(t)]
In this expression, we are dealing with sine function. The range of sine function lies within -1
to +1. So, whatever the values we substitute for x(t), we will get in between -1 to +1.
Therefore, we can say it is not dependent upon any past or future values. Hence, it is a static
system.
Examples
Find out whether the following systems are dynamic.
a) y(t)=x(t+1)
In this case if we put t = 1 in the equation, it will be converted to x(2), which is a future
dependent value. Because here we are giving input as 1 but it is showing value for x(2). As it
is a future dependent signal, so clearly it is a dynamic system.
b) y(t)=Real[x(t)]
=[x(t)+x(t)∗]2
In this case, whatever the value we will put it will show that time real value signal. It has no
dependency on future or past values. Therefore, it is not a dynamic system rather it is a static
system.
c) y(t)=Even[x(t)]
=[x(t)+x(−t)]2
Here, if we will substitute t = 1, one signal shows x(1) and another will show x(-1) which is a
past value. Similarly, if we will put t = -1 then one signal will show x(-1) and another will
show x(1) which is a future value. Therefore, clearly it is a case of Dynamic system.
d) y(t)=cos[x(t)]
In this case, as the system is cosine function it has a certain domain of values which lies
between -1 to +1. Therefore, whatever values we will put we will get the result within
specified limit. Therefore, it is a static system
Causal systems are practically or physically realizable system. Let us consider some
examples to understand this much better.
Examples
Let us consider the following signals.
a) y(t)=x(t)
Here, the signal is only dependent on the present values of x. For example if we substitute t =
3, the result will show for that instant of time only. Therefore, as it has no dependence on
future value, we can call it a Causal system.
b) y(t)=x(t−1)
Here, the system depends on past values. For instance if we substitute t = 3, the expression
will reduce to x(2), which is a past value against our input. At no instance, it depends upon
future values. Therefore, this system is also a causal system.
c) y(t)=x(t)+x(t+1)
In this case, the system has two parts. The part x(t), as we have discussed earlier, depends
only upon the present values. So, there is no issue with it. However, if we take the case of
x(t+1), it clearly depends on the future values because if we put t = 1, the expression will
reduce to x(2) which is future value. Therefore, it is not causal.
Examples
Let us take some examples and try to understand this in a better way.
a) y(t)=x(t+1)
We have already discussed this system in causal system too. For any input, it will reduce the
system to its future value. For instance, if we put t = 2, it will reduce to x(3), which is a future
value. Therefore, the system is Non-Causal.
b) y(t)=x(t)+x(t+2)
In this case, x(t) is purely a present value dependent function. We have already discussed that
x(t+2) function is future dependent because for t = 3 it will give values for x(5). Therefore, it
is Non-causal.
c) y(t)=x(t−1)+x(t)
In this system, it depends upon the present and past values of the given input. Whatever
values we substitute, it will never show any future dependency. Clearly, it is not a non-causal
system; rather it is a Causal system.
Examples
Find out whether the following systems are anti-causal.
a) y(t) = x(t) + x(t+1)
The system has two sub-functions. One sub function x(t+1) depends on the future value of
the input but another sub-function x(t) depends only on the present. As the system is
dependent on the present value also in addition to future value, this system is not anti-causal.
b) y(t)=x(t+3)
If we analyze the above system, we can see that the system depends only on the future values
of the system i.e. if we put t = 0, it will reduce to x(3), which is a future value. This system is
a perfect example of anti-causal system.
Law of additivity
Law of homogeneity
Both, the law of homogeneity and the law of additivity are shown in the above figures.
However, there are two other conditions that can be used to check whether the system is linear or not: the output must be zero when the input applied is zero, and no non-linear operator should be applied to the input or the output. Examples of non-linear operators include −
(a) Trigonometric operators − sin, cos, tan, cot, sec, cosec, etc.
(b) Exponential, logarithmic, modulus, square, cube, etc.
Examples
Let us find out whether the following systems are linear.
a) y(t)=x(t)+3
This system is not a linear system because it violates the first condition. If we put input as
zero, making x(t) = 0, then the output is not zero.
b) y(t) = sin(t) · x(t)
In this system, if we give input as zero, the output will become zero. Hence, the first
condition is clearly satisfied. Again, there is no non-linear operator that has been applied on
x(t). Hence, second condition is also satisfied. Therefore, the system is a linear system.
c) y(t)=sin(x(t))
In the above system, first condition is satisfied because if we put x(t) = 0, the output will also
be sin(0) = 0. However, the second condition is not satisfied, as there is a non-linear operator
which operates x(t). Hence, the system is not linear.
Conditions for a system to be non-linear −
The output is not zero when the input applied is zero.
A non-linear operator is applied either to the input or to the output.
Examples
a) y(t)=ex(t)
In the above system, the first condition is satisfied because if we make the input zero, the
output is 1. In addition, exponential non-linear operator is applied to the input. Clearly, it is a
case of Non-Linear system.
b) y(t)=x(t+1)+x(t−1)
The above type of system deals with both past and future values of the input. However, if we make the input zero, its time-shifted versions are also zero, so the output is zero and the first condition for non-linearity is violated. Again, there is no non-linear operator present, so the second condition is also violated. Clearly, this system is not a non-linear system; rather, it is a linear system.
a) y(T) = x(2T)
If the above expression is first passed through the system and then through the time delay (as shown in the upper part of the figure), the output becomes x(2T − 2t). Now, if the same expression is passed through the time delay first and then through the system (as shown in the lower part of the figure), the output becomes x(2T − t). Since the two outputs are different, the system is time variant.
b) y(T)=sin[x(T)]
If the signal is first passed through the system and then through the time delay, the output will be sin[x(T − t)]. Similarly, if the signal is passed through the time delay first and then through the system, the output will again be sin[x(T − t)]. We can see clearly that both outputs are the same. Hence, the system is time invariant.
a) y(T) = x(cos T)
If the above signal is first passed through the system and then through the time delay, the output will be x(cos(T − t)). If it is passed through the time delay first and then through the system, it will be x(cos T − t). Since the two outputs differ, the system is time variant.
b) y(T)=cosT.x(T)
If the above expression is first passed through the system and then through the time delay,
then the output will be cos(T−t)x(T−t)
. However, if the expression is passed through the time delay first and then through the
system, the output will be cosT.x(T−t)
. As the outputs are not same, clearly the system is time variant.
Some examples of bounded inputs are functions of sine, cosine, DC, signum and unit step.
Examples
a) y(t)=x(t)+10
Here, for a definite bounded input, we can get definite bounded output i.e. if we put
x(t)=2,y(t)=12
which is bounded in nature. Therefore, the system is stable.
b) y(t)=sin[x(t)]
In the given expression, we know that sine functions have a definite boundary of values,
which lies between -1 to +1. So, whatever values we will substitute at x(t), we will get the
values within our boundary. Therefore, the system is stable.
Examples
a) y(t)=tx(t)
Here, for a finite input, we cannot expect a finite output. For example, if we will put
x(t)=2⇒y(t)=2t
. This is not a finite value because we do not know the value of t. So, it can be ranged from
anywhere. Therefore, this system is not stable. It is an unstable system.
b) y(t)=x(t)sint
We have discussed earlier, that the sine function has a definite range from -1 to +1; but here,
it is present in the denominator. So, in worst case scenario, if we put t = 0 and sine function
becomes zero, then the whole system will tend to infinity. Therefore, this type of system is
not at all stable. Obviously, this is an unstable system.
Example − Check whether the system y(t) = x*(t), i.e. the conjugate of the input, is linear or non-linear.
Solution − The function represents the conjugate of input. It can be verified by either first
law of homogeneity and law of additivity or by the two rules. However, verifying through
rules is lot easier, so we will go by that.
If the input to the system is zero, the output is also zero, so our first condition is satisfied. There is no non-linear operator used either at the input or at the output. Therefore, the system is linear.
Solution − Clearly, we can see that when time becomes less than or equal to zero the input
becomes zero. So, we can say that at zero input the output is also zero and our first condition
is satisfied.
Again, there is no non-linear operator used at the input nor at the output. Therefore, the
system is Linear.
is stable or not.
Solution − Suppose, we have taken the value of x(t) as 3. Here, sine function has been
multiplied with it and maximum and minimum value of sine function varies between -1 to
+1.
Therefore, the maximum and minimum values of the whole function will also vary between −3 and +3. Thus, the system is stable, because here we are getting a bounded output for a bounded input.
The substitution Z = e^{jω} is used for Z-transform to DTFT conversion, and only for absolutely summable signals.
So, the Z-transform of the discrete time signal x(n) can be written as a power series −
X(Z) = ∑_{n=−∞}^{∞} x(n) Z⁻ⁿ
The above equation represents a two-sided Z-transform equation.
X(Z)=Z[x(n)]
Or x(n)⟷X(Z)
If it is a continuous time signal, then Z-transforms are not needed because Laplace
transformations are used. However, Discrete time signals can be analyzed through Z-
transforms only.
Region of Convergence
The Region of Convergence (ROC) is the range of the complex variable Z in the Z-plane for which the Z-transform of the signal is finite, i.e. convergent. So, the ROC represents the set of values of Z for which X(Z) has a finite value.
Properties of ROC
Example
Let us find the Z-transform and the ROC of the signal x(n) = {7, 3, 4, 9, 5}, where the origin is at the second sample, i.e. x(−1) = 7 and x(0) = 3.
X(Z) = ∑_{n=−∞}^{∞} x(n) Z⁻ⁿ
= ∑_{n=−1}^{3} x(n) Z⁻ⁿ
= x(−1)Z + x(0) + x(1)Z⁻¹ + x(2)Z⁻² + x(3)Z⁻³
= 7Z + 3 + 4Z⁻¹ + 9Z⁻² + 5Z⁻³
The ROC is the entire Z-plane excluding Z = 0 and Z = ∞.
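Since X(Z) of a finite sequence is just a polynomial in Z and Z⁻¹, it can be evaluated at any non-zero Z. The sketch below is illustrative only and assumes, as above, that the origin is at the second sample (x(0) = 3).

```python
import numpy as np

x = [7, 3, 4, 9, 5]        # x(-1) = 7, x(0) = 3, x(1) = 4, x(2) = 9, x(3) = 5
n = np.arange(-1, 4)       # corresponding time indices

def X(z):
    """Evaluate X(z) = sum_n x(n) z^(-n) for a finite sequence."""
    return np.sum(np.array(x) * np.asarray(z, dtype=complex) ** (-n))

print(X(2))                # 7*2 + 3 + 4/2 + 9/4 + 5/8 = 21.875
print(X(1j))               # any non-zero complex Z lies in the ROC
```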
Linearity
It states that when two or more individual discrete signals are scaled by constants and added, their respective Z-transforms are scaled by the same constants and added.
Mathematically,
a₁x₁(n) + a₂x₂(n) ⟷ a₁X₁(Z) + a₂X₂(Z)
Proof − We know that,
X(Z)=∑n=−∞∞x(n)Z−n
=∑∞n=−∞(a1x1(n)+a2x2(n))Z−n
=a1∑∞n=−∞x1(n)Z−n+a2∑∞n=−∞x2(n)Z−n
=a1X1(z)+a2X2(z)
(Hence Proved)
Time Shifting
Time shifting property depicts how the change in the time domain in the discrete signal will
affect the Z-domain, which can be written as;
x(n − n₀) ⟷ Z^{−n₀} X(Z)
or x(n − 1) ⟷ Z⁻¹ X(Z)
Proof −
Let y(P)=X(P−K)
Y(z)=∑∞p=−∞y(p)Z−p
=∑∞p=−∞(x(p−k))Z−p
Let s = p-k
=∑∞s=−∞x(s)Z−(s+k)
=∑∞s=−∞x(s)Z−sZ−k
= Z⁻ᵏ [∑_{s=−∞}^{∞} x(s) Z⁻ˢ]
=Z−kX(Z)
(Hence Proved)
Example
δ(n) and δ(n − 1) can be plotted as follows. The Z-transform of δ(n) is
∑_{n=−∞}^{∞} δ(n) Z⁻ⁿ = 1
The Z-transform of δ(n − 1) can be written as
∑_{n=−∞}^{∞} δ(n − 1) Z⁻ⁿ = Z⁻¹
which is Z⁻¹ times the previous result, illustrating x(n − n₀) ⟷ Z^{−n₀} X(Z)
(Hence Proved)
Time Scaling
Time Scaling property tells us, what will be the Z-domain of the signal when the time is
scaled in its discrete form, which can be written as;
anx(n)⟷X(a−1Z)
Proof −
Let y(p)=apx(p)
Y(P)=∑∞p=−∞y(p)Z−p
=∑∞p=−∞apx(p)Z−p
=∑∞p=−∞x(p)[a−1Z]−p
=X(a−1Z)
(Hence proved)
Example − Let us determine the Z-transform of aⁿ cos(ωn) using the time scaling property.
Solution − The Z-transform of the signal cos(ωn) is given by
∑_{n=−∞}^{∞} (cos ωn) Z⁻ⁿ = (Z² − Z cos ω)/(Z² − 2Z cos ω + 1)
Applying the property,
∑_{n=−∞}^{∞} (aⁿ cos ωn) Z⁻ⁿ = X(a⁻¹Z)
= [(a⁻¹Z)² − (a⁻¹Z) cos ω]/[(a⁻¹Z)² − 2(a⁻¹Z) cos ω + 1]
= Z(Z − a cos ω)/(Z² − 2aZ cos ω + a²)
Successive Differentiation
Successive Differentiation property shows that Z-transform will take place when we
differentiate the discrete signal in time domain, with respect to time. This is shown as below.
dx(n)/dn ⟷ (1 − Z⁻¹) X(Z)
Proof −
dx(n)/dn = [x(n) − x(n − 1)]/[n − (n − 1)]
= x(n) − x(n − 1)
⟷ X(Z) − Z⁻¹X(Z)
= (1 − Z⁻¹) X(Z)
(Hence Proved)
Example − Let us find the Z-transform of n²U(n) by applying the differentiation-in-Z-domain property twice. First,
Z[nU(n)] = −Z d(Z[U(n)])/dZ
= −Z d[Z/(Z − 1)]/dZ
= Z/(Z − 1)²
= y (say)
Now, Z[n·y], i.e. Z[n²U(n)], can be found by applying the property again:
Z[n·y] = −Z dY(Z)/dZ
= −Z d[Z/(Z − 1)²]/dZ
= Z(Z + 1)/(Z − 1)³
Convolution
This depicts the change in Z-domain of the system when a convolution takes place in the
discrete signal form, which can be written as −
x1(n)∗x2(n)⟷X1(Z).X2(Z)
Proof −
X(Z)=∑∞n=−∞x(n)Z−n
=∑∞n=−∞[∑∞k=−∞x1(k)x2(n−k)]Z−n
= ∑_{k=−∞}^{∞} x1(k) [∑_{n=−∞}^{∞} x2(n − k) Z⁻ⁿ]
= ∑_{k=−∞}^{∞} x1(k) [∑_{n=−∞}^{∞} x2(n − k) Z^{−(n−k)} Z⁻ᵏ]
Let n − k = l; then the above equation can be written as
X(Z) = ∑_{k=−∞}^{∞} x1(k) [Z⁻ᵏ ∑_{l=−∞}^{∞} x2(l) Z⁻ˡ]
= ∑_{k=−∞}^{∞} x1(k) X2(Z) Z⁻ᵏ
= X2(Z) ∑_{k=−∞}^{∞} x1(k) Z⁻ᵏ
= X1(Z) · X2(Z)
(Hence Proved)
ROC: ROC₁ ∩ ROC₂
Example
x1(n)={3,−2,2}
...(eq. 1)
x2(n) = 2 for 0 ≤ n ≤ 4, and 0 elsewhere
...(eq. 2)
The Z-transform of the first signal is
∑_{n=−∞}^{∞} x1(n) Z⁻ⁿ = 3 − 2Z⁻¹ + 2Z⁻²
Z-transformation of the second signal can be written as;
∑∞n=−∞x2(n)Z−n
=2+2Z−1+2Z−2+2Z−3+2Z−4
So, the convolution of the above two signals is given by −
X(Z) = X1(Z) · X2(Z)
= [3 − 2Z⁻¹ + 2Z⁻²] × [2 + 2Z⁻¹ + 2Z⁻² + 2Z⁻³ + 2Z⁻⁴]
= 6 + 2Z⁻¹ + 6Z⁻² + 6Z⁻³ + 6Z⁻⁴ + 0·Z⁻⁵ + 4Z⁻⁶
Taking the inverse Z-transformation we get,
x(n)={6,2,6,6,6,0,4}
Initial Value Theorem − For a causal signal x(n), the initial value is
x(0) = lim_{Z→∞} X(Z)
Proof − We know that
X(Z) = ∑_{n=0}^{∞} x(n) Z⁻ⁿ
Expanding the above series, we get
X(Z) = x(0) + x(1)Z⁻¹ + x(2)Z⁻² + ……
As Z → ∞, every term containing Z⁻ⁿ with n > 0 vanishes, so
lim_{Z→∞} X(Z) = x(0)
(Hence Proved)
Final Value Theorem − For a causal signal x(n),
x(∞) = lim_{n→∞} x(n) = lim_{Z→1} [(1 − Z⁻¹) X(Z)]
Conditions − x(n) must be causal and the poles of (1 − Z⁻¹)X(Z) must lie inside the unit circle.
Proof −
Z⁺[x(n + 1) − x(n)] = lim_{k→∞} ∑_{n=0}^{k} Z⁻ⁿ [x(n + 1) − x(n)]
⇒ Z⁺[x(n + 1)] − Z⁺[x(n)] = lim_{k→∞} ∑_{n=0}^{k} Z⁻ⁿ [x(n + 1) − x(n)]
Here, we can apply the advance (time-shift) property of the one-sided Z-transform, Z⁺[x(n + 1)] = Z[X⁺(Z) − x(0)]. So the above equation can be rewritten as
Z[X⁺(Z) − x(0)] − X⁺(Z) = lim_{k→∞} ∑_{n=0}^{k} Z⁻ⁿ [x(n + 1) − x(n)]
Now, putting Z = 1 in the above equation, the right-hand side telescopes:
lim_{k→∞} [x(1) − x(0) + x(2) − x(1) + x(3) − x(2) + ……… + x(k + 1) − x(k)] = lim_{k→∞} x(k + 1) − x(0)
This can be formulated as
x(∞) = lim_{n→∞} x(n) = lim_{Z→1} [(1 − Z⁻¹) X(Z)]
(Hence Proved)
Example
Let us find the Initial and Final value of x(n) whose signal is given by
X(Z)=2+3Z−1+4Z−2
Solution − Let us first find the initial value of the signal by applying the theorem:
x(0) = lim_{Z→∞} X(Z)
= lim_{Z→∞} [2 + 3Z⁻¹ + 4Z⁻²]
= 2 + 3/∞ + 4/∞ = 2
Now let us find the final value of the signal by applying the theorem:
x(∞) = lim_{Z→1} [(1 − Z⁻¹) X(Z)]
= lim_{Z→1} [(1 − Z⁻¹)(2 + 3Z⁻¹ + 4Z⁻²)]
= lim_{Z→1} [2 + Z⁻¹ + Z⁻² − 4Z⁻³]
= 2 + 1 + 1 − 4 = 0
Some other properties of Z-transform are listed below −
Differentiation in Frequency
It gives the change in Z-domain of the signal, when its discrete signal is differentiated with
respect to time.
nx(n)⟷−ZdX(z)dz
Its ROC can be written as;
r2<Mod(Z)<r1
Example
Let us find the value of x(n) through Differentiation in frequency, whose discrete signal in Z-
domain is given by x(n)⟷X(Z)=log(1+aZ−1)
nx(n)⟷−Zdx(Z)dz
=−Z[−aZ−21+aZ−1]
=(aZ−1)/(1+aZ−1)
=1−1/(1+aZ−1)
nx(n)=δ(n)−(−a)nu(n)
⇒x(n)=1/n[δ(n)−(−a)nu(n)]
Multiplication in Time
It gives the change in Z-domain of the signal when multiplication takes place at discrete
signal level.
x1(n).x2(n)⟷(12Πj)[X1(Z)∗X2(Z)]
Conjugation in Time
This depicts the representation of conjugated discrete signal in Z-domain.
X∗(n)⟷X∗(Z∗)
|X(Z)| < ∞
⇒ |∑ x(n) Z⁻ⁿ| < ∞
≤ ∑ |x(n) Z⁻ⁿ| < ∞
Substituting Z = r e^{jω},
∑ |x(n) (r e^{jω})⁻ⁿ| < ∞
= ∑ |x(n) r⁻ⁿ| · |e^{−jωn}| < ∞
= ∑_{n=−∞}^{∞} |x(n) r⁻ⁿ| < ∞
The above equation gives the condition for the existence of the Z-transform. For the DTFT to exist (Z on the unit circle, i.e. r = 1), the condition is
∑_{n=−∞}^{∞} |x(n)| < ∞
Example 1
Let us try to find out the Z-transform of the signal, which is given as
x(n) = −(−0.5)⁻ⁿ u(−n) + 3ⁿ u(n)
= −(−2)ⁿ u(−n) + 3ⁿ u(n)
For the left-sided part −(−2)ⁿu(−n), the ROC is |Z| < 2, while for 3ⁿu(n) the ROC is |Z| > 3.
Hence, the Z-transform of the signal will not exist, because there is no common region.
Example 2
Let us try to find out the Z-transform of the signal given by
x(n)=−2nu(−n−1)+(0.5)nu(n)
Solution − Here, for −2ⁿu(−n − 1) the ROC is |Z| < 2, and for (0.5)ⁿu(n) the ROC is |Z| > 0.5. The two regions overlap, so the Z-transform exists, with ROC 0.5 < |Z| < 2, and is given by
X(Z) = {1/(1 − 2Z⁻¹)} + {1/(1 − 0.5Z⁻¹)}
Example 3
Let us try to find the Z-transform of the signal given as x(n) = 2^{r(n)}.
Solution − r(n) is the ramp signal, r(n) = nU(n). So the signal can be written as
x(n) = 2^{nU(n)} = 1 for n < 0 (where U(n) = 0), and 2ⁿ for n ≥ 0 (where U(n) = 1)
i.e. x(n) = U(−n − 1) + 2ⁿ U(n)
H(Z)=∑n=0∞h(n)Z−n
Expanding the above equation,
H(Z)=h(0)+h(1)Z−1+h(2)Z−2+.........
=N(Z)/D(Z)
For causal systems, expansion of Transfer Function does not include positive powers of Z.
For causal system, order of numerator cannot exceed order of denominator. This can be
written as-
limz→∞H(Z)=h(0)=0orFinite
For stability of causal system, poles of Transfer function should be inside the unit circle in Z-
plane.
For the stability of an anti-causal system, the poles of the transfer function should lie outside the unit circle in the Z-plane. For an anti-causal system, the ROC is the interior of a circle in the Z-plane.
x(n) = Z⁻¹[X(Z)]
where x(n) is the signal in the time domain and X(Z) is the signal in the Z (frequency) domain.
If we want to represent the above relation in integral form, we can write it as
x(n) = (1/2πj) ∮_C X(Z) Z^{n−1} dZ
Here, the integral is over a closed path C. This path lies within the ROC of X(Z) and encloses the origin.
x(z)=N(Z)/D(Z)
Now, if we go on dividing the numerator by denominator, then we will get a series as shown
below
X(z)=x(0)+x(1)Z−1+x(2)Z−2+.........
The above sequence represents the series of inverse Z-transform of the given signal (for n≥0)
and the above system is causal.
x(z)=x(−1)Z1+x(−2)Z2+x(−3)Z3+.........
X(Z) = (b₀ + b₁Z⁻¹ + b₂Z⁻² + ……… + b_MZ⁻ᴹ)/(a₀ + a₁Z⁻¹ + a₂Z⁻² + ……… + a_NZ⁻ᴺ)
The above rational form is proper when M < N and a_N ≠ 0.
If the ratio is not proper (i.e. improper), then we have to convert it to the proper form before solving it.
x(n) = ∑_{all poles of X(Z)} residues of [X(Z) Z^{n−1}]
and the residue at a pole of order m located at Z = β is
Residue = [1/(m − 1)!] lim_{Z→β} { d^{m−1}/dZ^{m−1} [ (Z − β)^m X(Z) Z^{n−1} ] }
Solution − Taking Z-transform on both the sides of the above equation, we get
S(z)Z2−3S(z)Z1+2S(z)=1
⇒S(z){Z2−3Z+2}=1
⇒ S(Z) = 1/(Z² − 3Z + 2) = 1/[(Z − 2)(Z − 1)] = α₁/(Z − 2) + α₂/(Z − 1)
⇒ S(Z) = 1/(Z − 2) − 1/(Z − 1)
Taking the inverse Z-transform of the above equation, we get
s(n) = Z⁻¹[1/(Z − 2)] − Z⁻¹[1/(Z − 1)]
= 2^{n−1} − 1^{n−1} = 2^{n−1} − 1
Example 2
Find the system function H(z) and unit sample response h(n) of the system whose difference
equation is described as under
y(n) = (1/2) y(n − 1) + 2x(n)
where y(n) and x(n) are the output and input of the system, respectively.
Solution − Taking the Z-transform of both sides,
Y(Z) = (1/2) Z⁻¹ Y(Z) + 2X(Z)
⇒ Y(Z)[1 − (1/2)Z⁻¹] = 2X(Z)
⇒ H(Z) = Y(Z)/X(Z) = 2/[1 − (1/2)Z⁻¹]
Taking the inverse Z-transform, the unit sample response is h(n) = 2(1/2)ⁿ u(n).
Example 3
Determine Y(z),n≥0 in the following case −
y(n) + (1/2)y(n − 1) − (1/4)y(n − 2) = 0, given y(−1) = y(−2) = 1
Solution − Applying the one-sided Z-transform to the above equation, we get
Y(Z) + (1/2)[Z⁻¹Y(Z) + y(−1)] − (1/4)[Z⁻²Y(Z) + Z⁻¹y(−1) + y(−2)] = 0
⇒ Y(Z) + (1/2)Z⁻¹Y(Z) + 1/2 − (1/4)Z⁻²Y(Z) − (1/4)Z⁻¹ − 1/4 = 0
⇒ Y(Z)[1 + (1/2)Z⁻¹ − (1/4)Z⁻²] = (1/4)Z⁻¹ − 1/4
⇒ Y(Z)[(4Z² + 2Z − 1)/(4Z²)] = (1 − Z)/(4Z)
⇒ Y(Z) = Z(1 − Z)/(4Z² + 2Z − 1)
X(jω), in the continuous Fourier transform, is a continuous function of frequency. DFT, however, deals with representing x(n) with samples of its spectrum X(ω). Hence, this mathematical tool carries much importance computationally, since it gives a conveniently sampled representation. Both periodic and non-periodic sequences can be processed through this tool. Periodic sequences can be fitted to this tool by extending the period N to infinity.
The spectrum is sampled at N equally spaced frequencies ω = 2πk/N radian, so that
X(2πk/N) = ∑_{n=−∞}^{∞} x(n) e^{−j2πnk/N}    …eq(2)
where k = 0, 1, …… N − 1
X(2πk/N) = ∑_{n=0}^{N−1} [∑_{l=−∞}^{∞} x(n − Nl)] e^{−j2πnk/N}    …eq(3)
where ∑_{l=−∞}^{∞} x(n − Nl) = xp(n) is a periodic function of period N, and its Fourier series is xp(n) = ∑_{k=0}^{N−1} C_k e^{j2πnk/N}, with
C_k = (1/N) ∑_{n=0}^{N−1} xp(n) e^{−j2πnk/N},  k = 0, 1, …, N − 1    …eq(4)
N C_k = X(2πk/N),  k = 0, 1, …, N − 1    …eq(5)
N C_k = X(2πk/N) = X(e^{jω})|_{ω=2πk/N} = ∑_{n=−∞}^{∞} xp(n) e^{−j2πnk/N}    …eq(6)
where k = 0, 1, …, N − 1
x(n) can be extracted from xp(n) only if there is no aliasing in the time domain, i.e. N ≥ L, where
N = period of xp(n)
L = duration (length) of x(n)
x(n) = xp(n) for 0 ≤ n ≤ N − 1, and 0 otherwise
The mapping is achieved in this manner.
Properties of DFT
Linearity
It states that the DFT of a combination of signals is equal to the sum of DFT of individual
signals. Let us take two signals x1(n) and x2(n), whose DFT s are X1(ω) and X2(ω)
respectively. So, if
x1(n)→X1(ω)
andx2(n)→X2(ω)
Then ax1(n)+bx2(n)→aX1(ω)+bX2(ω)
Symmetry
The symmetry properties of DFT can be derived in a similar way as we derived DTFT
symmetry properties. We know that DFT of sequence x(n) is denoted by X(K). Now, if x(n)
and X(K) are complex valued sequence, then it can be represented as under
x(n)=xR(n)+jx1(n),0≤n≤N−1
And X(K)=XR(K)+jX1(K),0≤K≤N−1
Duality Property
Let us consider a signal x(n), whose DFT is given as X(K). Let the finite duration sequence be X(n). Then according to the duality theorem,
If x(n) ⟷ X(K)
Then X(n) ⟷ N·x[((−k))_N]
So, by using this theorem if we know DFT, we can easily find the finite duration sequence.
Suppose, there is a signal x(n), whose DFT is also known to us as X(K). Now, if the complex
conjugate of the signal is given as x*(n), then we can easily find the DFT without doing much
calculation by using the theorem shown below.
If, x(n)⟷X(K)
Then, x∗(n)⟷X∗((K))N=X∗(N−K)
The multiplication of the sequence x(n) with the complex exponential sequence e^{j2πLn/N} is equivalent to a circular shift of the DFT by L units in frequency. This is the dual of the circular time shifting property.
If x(n) ⟷ X(K)
Then x(n) e^{j2πLn/N} ⟷ X((K − L))_N
If there are two signal x1(n) and x2(n) and their respective DFTs are X1(k) and X2(K), then
multiplication of signals in time sequence corresponds to circular convolution of their DFTs.
If, x1(n)⟷X1(K)&x2(n)⟷X2(K)
Then, x1(n)×x2(n)⟷X1(K)©X2(K)
Parseval’s Theorem
If, x(n)⟷X(K)&y(n)⟷Y(K)
Then, ∑N−1n=0x(n)y∗(n)=1N∑N−1k=0X(K)Y∗(K)
DSP - DFT Time Frequency Transform
We know that when ω = 2πk/N,
N C_k = X(2πk/N) = X(e^{jω}) = ∑_{n=−∞}^{∞} x(n) e^{−j2πnk/N} = ∑_{n=−∞}^{∞} x(n) e^{−jωn}
where X(e^{jω}) = ∑_{n=−∞}^{∞} x(n) e^{−jωn} is the Discrete Time Fourier Transform (DTFT) of x(n).
Now,
xp(n) = ∑_{k=0}^{N−1} C_k e^{j2πnk/N}    … from the Fourier series
= (1/2π) ∑_{k=0}^{N−1} N C_k e^{j2πnk/N} × (2π/N)
Letting N → ∞, the sum becomes an integral:
x(n) = (1/2π) ∫₀^{2π} X(e^{jω}) e^{jωn} dω
…eq(2)
Symbolically,
x(n) ⟺ X(e^{jω})
(The Fourier Transform pair)
Necessary and sufficient condition for existence of Discrete Time Fourier Transform for a
non-periodic sequence x(n) is absolute summable.
i.e.∑∞n=−∞|x(n)|<∞
Properties of DTFT
Linearity : a1x1(n)+a2x2(n)⇔a1X1(ejω)+a2X2(ejω)
Convolution − x1(n)∗x2(n)⇔X1(ejω)×X2(ejω)
Multiplication − x1(n)×x2(n)⇔X1(ejω)∗X2(ejω)
Correlation − r_{x1,x2}(l) ⇔ X1(e^{jω}) × X2(e^{−jω})
Symmetry −
x*(n) ⇔ X*(e^{−jω})
x*(−n) ⇔ X*(e^{jω})
Real[x(n)] ⇔ X_even(e^{jω})
Imag[x(n)] ⇔ X_odd(e^{jω})
x_even(n) ⇔ Real[X(e^{jω})]
x_odd(n) ⇔ Imag[X(e^{jω})]
Parseval’s theorem − ∑∞−∞|x1(n)|2=12π∫π−π|X1(ejω)|2dω
Earlier, we studied sampling in frequency domain. With that basic knowledge, we sample
X(ejω)
in frequency domain, so that a convenient digital analysis can be done from that sampled
data. Hence, DFT is sampled in both time and frequency domain. With the assumption
x(n)=xp(n)
Hence, DFT is given by −
X(k) = DFT[x(n)] = X(2πk/N) = ∑_{n=0}^{N−1} x(n) e^{−j2πnk/N},  k = 0, 1, …, N − 1    …eq(3)
x(n) = IDFT[X(k)] = (1/N) ∑_{k=0}^{N−1} X(k) e^{j2πnk/N},  n = 0, 1, …, N − 1    …eq(4)
∴x(n)⇔X(k)
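Eq(3) and eq(4) can be verified against a library FFT. The sketch below (not part of the tutorial) implements the defining sum directly and compares it with numpy.fft for a short random sequence.

```python
import numpy as np

def dft(x):
    """Direct evaluation of X(k) = sum_n x(n) e^{-j 2 pi n k / N}."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.sum(x * np.exp(-2j * np.pi * n * k / N), axis=1)

x = np.random.randn(8)
X = dft(x)

print(np.allclose(X, np.fft.fft(x)))          # True: same transform
print(np.allclose(x, np.fft.ifft(X).real))    # IDFT recovers x(n)
```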
Twiddle Factor
It is denoted as W_N and defined as W_N = e^{−j2π/N}. Its magnitude is unity and its phase is −2π/N. It is a periodic function of r with period N, i.e.
W_N^r = W_N^{r±N} = W_N^{r±2N} = …
Consider N = 8, r = 0, 1, 2, 3, …, 14, 15, 16, …
⟹ W₈⁰ = W₈⁸ = W₈¹⁶ = … = 1 = 1∠0
W₈¹ = W₈⁹ = W₈¹⁷ = … = 1/√2 − j/√2 = 1∠−π/4
Linear Transformation
Let us understand Linear Transformation −
We know that,
X(k) = DFT[x(n)] = X(2πk/N) = ∑_{n=0}^{N−1} x(n) · W_N^{nk};  k = 0, 1, …, N − 1
x(n) = IDFT[X(k)] = (1/N) ∑_{k=0}^{N−1} X(k) · W_N^{−nk};  n = 0, 1, …, N − 1
Note − Computation of DFT can be performed with N2 complex multiplication and N(N-1)
complex addition.
x_N = [x(0), x(1), …, x(N − 1)]ᵀ — the N-point vector of the signal x(n)
X_N = [X(0), X(1), …, X(N − 1)]ᵀ — the N-point vector of its DFT
W_N = the N × N matrix whose (k, n) entry is W_N^{kn}, i.e. whose k-th row is [1, W_N^k, W_N^{2k}, …, W_N^{(N−1)k}], k = 0, 1, …, N − 1.
W_N ⟼ matrix of the linear transformation, so that X_N = W_N x_N.
Now, x_N = W_N⁻¹ X_N
The IDFT in matrix form is given by
x_N = (1/N) W_N* X_N
and W_N × W_N* = N[I]_{N×N}
Therefore, W_N⁻¹ = (1/N) W_N*.
Now, consider the periodic extension xp(n) of a finite sequence x(n). If we shift this periodic sequence by k units to the right, another periodic sequence is obtained. This is known as a circular shift, and it is given by
x′p(n)=xp(n−k)=∑l=−∞∞x(n−k−Nl)
The new finite sequence can be represented as
x′(n) = x′p(n) for 0 ≤ n ≤ N − 1, and 0 otherwise
Example − Let x(n)= {1,2,4,3}, N = 4,
x′p(n) = x(n − k, modulo N) ≡ x((n − k))_N ; for example, if k = 2, i.e. a 2-unit right shift, and N = 4,
Assumed clockwise direction as positive direction.
We get x′(n) = x((n − 2))₄:
x′(0) = x((−2))₄ = x(2) = 4
x′(1) = x((−1))₄ = x(3) = 3
x′(2) = x((0))₄ = x(0) = 1
x′(3) = x((1))₄ = x(1) = 2
Conclusion − Circular shift of N-point sequence is equivalent to a linear shift of its periodic
extension and vice versa.
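A circular shift is simply a modulo-N re-indexing, so the worked example above can be reproduced with numpy.roll (illustrative sketch, same x(n) = {1, 2, 4, 3} and k = 2):

```python
import numpy as np

x = np.array([1, 2, 4, 3])      # x(0)..x(3)
k = 2                           # 2-unit right (circular) shift

x_shift = np.roll(x, k)         # x'(n) = x((n - k) mod N)
print(x_shift)                  # [4 3 1 2]
```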
Even sequence − xp(n) = xp(−n) = xp(N − n)
Odd sequence − xp(n) = −xp(−n) = −xp(N − n)
Conjugate even − xp(n) = xp*(N − n)
Conjugate odd − xp(n) = −xp*(N − n)
Now, xp(n) = xpe(n) + xpo(n)
, where,
xpe(n)=12[xp(n)+x∗p(N−n)]
xpo(n)=12[xp(n)−x∗p(N−n)]
XR(k)=XR(N−k)
Xl(k)=−Xl(N−k)
∠X(k)=−∠X(N−K)
Time reversal − reversing sample about the 0th sample. This is given as;
x((−n))N=x(N−n),0≤n≤N−1
Time reversal is plotting samples of sequence, in clockwise direction i.e. assumed negative
direction.
x*(n) ⟷ X*((−k))_N = X*(N − k), and
x*((−n))_N = x*(N − n) ⟷ X*(k)
Multiplication of two sequences −
If x1(n) ⟷ X1(k) and x2(n) ⟷ X2(k),
then x1(n) · x2(n) ⟷ (1/N) X1(k) Ⓝ X2(k)
Circular convolution −
x1(n) Ⓝ x2(n) = ∑_{n=0}^{N−1} x1(n) · x2((m − n))_N,  m = 0, 1, 2, …, N − 1
x1(n) Ⓝ x2(n) ⟷ X1(k) · X2(k)
Circular correlation − If x(n) ⟷ X(k) and y(n) ⟷ Y(k), then there exists a cross-correlation sequence denoted as r̄_xy such that
r̄_xy(l) = ∑_{n=0}^{N−1} x(n) y*((n − l))_N ⟷ R̄_xy(k) = X(k) · Y*(k)
Parseval's theorem − If x(n) ⟷ X(k) and y(n) ⟷ Y(k), then
∑_{n=0}^{N−1} x(n) y*(n) = (1/N) ∑_{k=0}^{N−1} X(k) · Y*(k)
X1(K) = ∑_{n=0}^{N−1} x1(n) e^{−j2πkn/N},  k = 0, 1, 2 … N − 1
X2(K) = ∑_{n=0}^{N−1} x2(n) e^{−j2πkn/N},  k = 0, 1, 2 … N − 1
Now, we will try to find the DFT of another sequence x3(n), which is given as X3(K):
X3(K) = X1(K) × X2(K)
By taking the IDFT of the above, we get
x3(n) = (1/N) ∑_{k=0}^{N−1} X3(K) e^{j2πkn/N}
After solving the above equation, finally we get
x3(n) = ∑_{m=0}^{N−1} x1(m) x2[((n − m))_N],  n = 0, 1, 2 … N − 1
Let x1(n) and x2(n) be two given sequences. The steps followed for the circular convolution of x1(n) and x2(n) are −
Take N samples of x1(n) and plot them on the circumference of the outer circle (maintaining equal distance between successive points) in the anti-clockwise direction.
For plotting x2(n), plot its N samples in the clockwise direction on the inner circle, with the starting sample placed at the same point as the 0th sample of x1(n).
Multiply the corresponding samples on the two circles and add them to get one output sample.
Rotate the inner circle anti-clockwise by one sample at a time and repeat to get the remaining output samples.
Matrix multiplication method − Represent the given sequences x1(n) and x2(n) in matrix form:
One of the given sequences is repeated via a circular shift of one sample at a time to form an N × N matrix.
The other sequence is represented as a column matrix.
The multiplication of the two matrices gives the result of the circular convolution.
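Both methods agree with the DFT route X1(k)·X2(k). The sketch below is illustrative only: it builds the N × N circulant matrix of the matrix-multiplication method with NumPy and checks the result against the IDFT of the product of the DFTs.

```python
import numpy as np

def circular_convolution(x1, x2):
    """Matrix method: build an N x N circulant matrix from x1 and multiply by x2."""
    N = len(x1)
    C = np.array([np.roll(x1, s) for s in range(N)]).T   # column s is x1 shifted by s
    return C @ np.asarray(x2)

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 1.0, 0.0])

y_matrix = circular_convolution(x1, x2)
y_dft = np.real(np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)))

print(y_matrix)                      # [4. 6. 4. 6.]
print(np.allclose(y_matrix, y_dft))  # True
```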
Thus, Y(ω) = X(ω) · H(ω) ⟷ y(n).
However, X(ω), H(ω) and Y(ω) are continuous functions of ω, which is not fruitful for digital computation on computers. DFT provides sampled versions of these waveforms to solve the purpose.
The advantage is that, having knowledge of faster DFT techniques likes of FFT, a
computationally higher efficient algorithm can be developed for digital computer
computation in comparison with time domain approach.
Let x(n) be an input sequence of length L applied to a filter with impulse response h(n) of length M. The output is
y(n) = ∑_{k=0}^{M−1} h(k) · x(n − k)
From the convolution analysis, it is clear that the duration of y(n) is L + M − 1.
In frequency domain,
Y(ω)=X(ω).H(ω)
Now, Y(ω) is sampled at N points, with the DFT size N ≥ L + M − 1.
With ω = 2πk/N,
Y(k) = X(k) · H(k), where k = 0, 1, …, N − 1
Where, X(k) and H(k) are N-point DFTs of x(n) and h(n) respectively. x(n)&h(n)
are padded with zeros up to the length N. It will not distort the continuous spectra X(ω) and
H(ω). Since N≥L+M−1
, N-point DFT of output sequence y(n) is sufficient to represent y(n) in frequency domain and
these facts infer that the multiplication of N-point DFTs of X(k) and H(k), followed by the
computation of N-point IDFT must yield y(n).
This implies that the N-point circular convolution of x(n) and h(n) with zero padding equals the linear convolution of x(n) and h(n).
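A minimal sketch of this statement, assuming NumPy: zero-pad both sequences to N ≥ L + M − 1, multiply their N-point DFTs, and the IDFT equals the ordinary linear convolution.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])          # length L = 3
h = np.array([1.0, 1.0, 1.0, 1.0])     # length M = 4
N = len(x) + len(h) - 1                # N >= L + M - 1  ->  6

X = np.fft.fft(x, N)                   # zero-padded N-point DFTs
H = np.fft.fft(h, N)
y = np.real(np.fft.ifft(X * H))        # circular convolution of padded sequences

print(np.allclose(y, np.convolve(x, h)))   # True: equals the linear convolution
```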
The successive blocks are then processed one at a time and the results are combined to
produce the net result.
As the convolution is performed by dividing the long input sequence into different fixed size
sections, it is called sectioned convolution. A long input sequence is segmented to fixed size
blocks, prior to FIR filter processing.
Overlap-save method
Overlap-add method
Let the length of input data block = N = L+M-1. Therefore, DFT and IDFT length = N. Each
data block carries M-1 data points of previous block followed by L new data points to form a
data sequence of length N = L+M-1.
The first M − 1 points of each output block are corrupted due to aliasing, and hence they are discarded, because the circular convolution is only of length N.
The last L points are exactly the same as the result of linear convolution, so they are retained.
To avoid aliasing, the last M − 1 elements of each data record are saved, and these points are carried forward to the subsequent record as its first M − 1 elements.
The result of the IDFT, with the first M − 1 points discarded to nullify aliasing, gives the remaining L points, which constitute the desired result, as in a linear convolution.
Let the input data block size be L. Therefore, the size of DFT and IDFT: N = L+M-1
Each data block is appended with M-1 zeros to the last.
Compute N-point DFT.
y(n) = {y1(0), y1(1), y1(2), …, y1(L−1), y1(L)+y2(0), y1(L+1)+y2(1), ……, y1(N−1)+y2(M−2), y2(M−1), ……}
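The following is an illustrative overlap-add sketch (not taken from the tutorial): each length-L input block is zero-padded to N = L + M − 1, filtered via N-point DFT/IDFT, and the overlapping tails of successive output blocks are added.

```python
import numpy as np

def overlap_add(x, h, L=8):
    """Block convolution of a long x with a short h using the overlap-add method."""
    M = len(h)
    N = L + M - 1                                  # DFT/IDFT size
    H = np.fft.fft(h, N)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]                 # length <= L, zero-padded to N
        yb = np.real(np.fft.ifft(np.fft.fft(block, N) * H))
        y[start:start + N] += yb[:len(y) - start]  # add the overlapping tails
    return y

x = np.random.randn(50)
h = np.array([0.5, 0.3, 0.2])
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True
```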
Suppose, we try to find out an orthogonal transformation which has N×N structure that
expressed a real sequence x(n) as a linear combination of cosine sequence. We already know
that −
X(K) = ∑_{n=0}^{N−1} x(n) cos(2πkn/N),  0 ≤ k ≤ N − 1
and x(n) = (1/N) ∑_{k=0}^{N−1} X(k) cos(2πkn/N),  0 ≤ n ≤ N − 1
DCT is, basically, used in image and speech processing. It is also used in compression of
images and speech signals.
To derive the DCT, form a length-2N sequence s(n) by symmetrically extending x(n): s(n) = x(n) for 0 ≤ n ≤ N − 1 and s(n) = x(2N − n − 1) for N ≤ n ≤ 2N − 1. Then
DFT[s(n)] = S(k) = ∑_{n=0}^{2N−1} s(n) W_{2N}^{nk}, where 0 ≤ k ≤ 2N − 1
S(k) = ∑_{n=0}^{N−1} x(n) W_{2N}^{nk} + ∑_{n=N}^{2N−1} x(2N − n − 1) W_{2N}^{nk}; where 0 ≤ k ≤ 2N − 1
⇒ S(k) = W_{2N}^{−k/2} ∑_{n=0}^{N−1} x(n) [W_{2N}^{nk} W_{2N}^{k/2} + W_{2N}^{−nk} W_{2N}^{−k/2}]; where 0 ≤ k ≤ 2N − 1
⇒ S(k) = W_{2N}^{−k/2} ∑_{n=0}^{N−1} 2x(n) cos[(π/N)(n + 1/2)k]; where 0 ≤ k ≤ 2N − 1
The DCT is defined by
V(k) = 2 ∑_{n=0}^{N−1} x(n) cos[(π/N)(n + 1/2)k], where 0 ≤ k ≤ N − 1
⇒ V(k) = W_{2N}^{k/2} S(k), or S(k) = W_{2N}^{−k/2} V(k), where 0 ≤ k ≤ N − 1
⇒ V(k) = 2 Re[W_{2N}^{k/2} ∑_{n=0}^{N−1} x(n) W_{2N}^{nk}], where 0 ≤ k ≤ N − 1
Example − Verify Parseval's theorem for the signal x1(n) = (1/4)ⁿ u(n).
Solution − Parseval's theorem states that ∑_{−∞}^{∞} |x1(n)|² = (1/2π) ∫_{−π}^{π} |X1(e^{jω})|² dω.
L.H.S.: ∑_{−∞}^{∞} |x1(n)|²
= ∑_{−∞}^{∞} x(n) x*(n)
= ∑_{n=0}^{∞} (1/4)^{2n} = 1/(1 − 1/16) = 16/15
R.H.S.: X(e^{jω}) = 1/(1 − (1/4)e^{−jω}) = 1/(1 − 0.25cos ω + j0.25 sin ω)
⟺ X*(e^{jω}) = 1/(1 − 0.25cos ω − j0.25 sin ω)
Calculating X(e^{jω}) · X*(e^{jω})
= 1/[(1 − 0.25cos ω)² + (0.25 sin ω)²] = 1/(1.0625 − 0.5cos ω)
(1/2π) ∫_{−π}^{π} dω/(1.0625 − 0.5cos ω) = 16/15
so L.H.S. = R.H.S.
Example 2 − Compute the N-point DFT of x(n) = 3δ(n).
X(K) = ∑_{n=0}^{N−1} x(n) e^{−j2πkn/N}
= ∑_{n=0}^{N−1} 3δ(n) e^{−j2πkn/N}
= 3δ(0) × e⁰ = 3
So, X(k) = 3 for 0 ≤ k ≤ N − 1 … Ans.
Example 3
Compute the N-point DFT of x(n) = 7δ(n − n₀).
X(K) = ∑_{n=0}^{N−1} x(n) e^{−j2πkn/N}
Substituting the value of x(n),
= ∑_{n=0}^{N−1} 7δ(n − n₀) e^{−j2πkn/N}
= 7 e^{−j2πkn₀/N}
… Ans
The main advantage of the FFT is that it computes the DFT with far fewer operations (on the order of N log₂N instead of N²); one practical use is the efficient design of FIR filters. Mathematically, the quantity the FFT computes is the DFT,
X[K] = ∑_{n=0}^{N−1} x[n] W_N^{nk}
Let us take an example to understand it better. We have considered eight points named from
x0tox7
. We will choose the even terms in one group and the odd terms in the other. Diagrammatic
view of the above said has been shown below −
Here, points x0, x2, x4 and x6 have been grouped into one category and similarly, points x1, x3,
x5 and x7 has been put into another category. Now, we can further make them in a group of
two and can proceed with the computation. Now, let us see how these breaking into further
two is helping in computation.
X[k] = ∑_{r=0}^{N/2−1} x[2r] W_N^{2rk} + ∑_{r=0}^{N/2−1} x[2r+1] W_N^{(2r+1)k}
= ∑_{r=0}^{N/2−1} x[2r] W_{N/2}^{rk} + [∑_{r=0}^{N/2−1} x[2r+1] W_{N/2}^{rk}] × W_N^k
= G[k] + H[k] × W_N^k
Initially, we took an eight-point sequence, but later we broke that one into two parts G[k] and
H[k]. G[k] stands for the even part whereas H[k] stands for the odd part. If we want to realize
it through a diagram, then it can be shown as below −
From the above figure, we can see that
W₈⁴ = −1
W₈⁵ = −W₈¹
W₈⁶ = −W₈²
W₈⁷ = −W₈³
Similarly, the remaining output values can be written as follows −
X[4] = G[0] − H[0]
X[5] = G[1] − W₈¹ H[1]
X[6] = G[2] − W₈² H[2]
X[7] = G[3] − W₈³ H[3]
The above series is periodic in k. The decimation does not have to stop here: each 4-point DFT, G[k] and H[k], can itself be broken down in the same way into 2-point DFTs. Doing so, we get structures like the following.
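The G[k] + W_N^k·H[k] split described above can be coded directly as a recursive radix-2 decimation-in-time FFT. This is an illustrative sketch only (N must be a power of two); it is verified against numpy.fft.

```python
import numpy as np

def fft_dit(x):
    """Radix-2 decimation-in-time FFT: X[k] = G[k] + W_N^k H[k]."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    G = fft_dit(x[0::2])                      # DFT of the even-indexed samples
    H = fft_dit(x[1::2])                      # DFT of the odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^k
    return np.concatenate([G + W * H,          # X[k]       for k = 0 .. N/2-1
                           G - W * H])         # X[k + N/2] uses W_N^{k+N/2} = -W_N^k

x = np.random.randn(8)
print(np.allclose(fft_dit(x), np.fft.fft(x)))  # True
```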
Example
X[k] = ∑_{n=0}^{N−1} x[n] W_N^{nk}
Now let us make one group of the samples numbered 0 to 3 and another group of the samples numbered 4 to 7. Mathematically, this can be shown as
X[k] = ∑_{n=0}^{N/2−1} x[n] W_N^{nk} + ∑_{n=N/2}^{N−1} x[n] W_N^{nk}
In the second sum, let us replace n by r + N/2, where r = 0, 1, 2, …, (N/2 − 1). Mathematically,
= ∑_{n=0}^{N/2−1} x[n] W_N^{nk} + ∑_{r=0}^{N/2−1} x[r + N/2] W_N^{(r+N/2)k}
We take the first four points (x[0], x[1], x[2], x[3]) initially, and try to represent them
mathematically as follows −
∑_{n=0}^{3} x[n] W₈^{nk} + ∑_{n=0}^{3} x[n+4] W₈^{(n+4)k}
= ∑_{n=0}^{3} {x[n] + x[n+4] W₈^{4k}} × W₈^{nk}
Since W₈^{4k} = (−1)^k, for even k the two halves add and for odd k they subtract. For example,
X[0] = ∑_{n=0}^{3} (x[n] + x[n+4])
X[1] = ∑_{n=0}^{3} (x[n] − x[n+4]) W₈ⁿ
= [x[0] − x[4]] + [x[1] − x[5]]W₈¹ + [x[2] − x[6]]W₈² + [x[3] − x[7]]W₈³
We can further break it into two more parts, which means instead of breaking them as 4-point
sequence, we can break them into 2-point sequence.
In the above example, we have taken limits between -π to +π. We have divided it into 256
parts. The points can be represented as H(0), H(1),….up to H(256). Here, we apply IDFT
algorithm and this will give us linear phase characteristics.
Sometimes, we may be interested in some particular order of filter. Let us say we want to
realize the above given design through 9th order filter. So, we take filter values as h0, h1,
h2….h9. Mathematically, it can be shown as below
H(ejω)=h0+h1e−jω+h2e−2jω+.....+h9e−9jω
Where there are rapid variations in the desired response, we take more sample points.
For example, in the above figure, there is a sudden change of slope between points B and C, so we take more discrete values there; between points C and D the slope is constant, so fewer discrete values are needed.
H(ejω1)=h0+h1e−jω1+h2e−2jω1+.....+h9e−9jω1
H(ejω2)=h0+h1e−jω2+h2e−2jω2+.....+h9e−9jω2
Similarly,
H(e^{jω1000}) = h0 + h1 e^{−jω1000} + h2 e^{−2jω1000} + ….. + h9 e^{−9jω1000}
Representing the above equations in matrix form, we have B = A ĥ, where
B = [H(e^{jω1}), …, H(e^{jω1000})]ᵀ is the 1000 × 1 vector of desired response samples,
A is the 1000 × 10 matrix whose i-th row is [1, e^{−jωi}, e^{−j2ωi}, …, e^{−j9ωi}], and
ĥ = [h0, …, h9]ᵀ is the 10 × 1 vector of filter coefficients. The least-squares solution is
h^=[ATA]−1ATB
=[A∗TA]−1A∗TB
where A* represents the complex conjugate of the matrix A.
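The least-squares solution ĥ = [A*ᵀA]⁻¹A*ᵀB can be computed without forming the normal equations, for example with numpy.linalg.lstsq. The sketch below is illustrative only: the desired response B is an assumed ideal low-pass characteristic sampled at 1000 frequencies, and the filter has the 10 taps h0 … h9 discussed above.

```python
import numpy as np

n_freq, n_taps = 1000, 10
w = np.linspace(-np.pi, np.pi, n_freq)            # frequency grid omega_1 .. omega_1000

# Desired response B: ideal low-pass with cutoff pi/4 (an assumed target, for illustration)
B = np.where(np.abs(w) <= np.pi / 4, 1.0, 0.0).astype(complex)

# A[i, m] = e^{-j m omega_i}, so that A @ h evaluates H(e^{j omega_i})
A = np.exp(-1j * np.outer(w, np.arange(n_taps)))

# Least-squares estimate of the taps (equivalent to h = (A^H A)^{-1} A^H B)
h, *_ = np.linalg.lstsq(A, B, rcond=None)

print(np.round(h.real, 4))                        # estimated h0 .. h9
```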