This document discusses acoustic echo cancellation using adaptive filters. It introduces acoustic echo and how adaptive filters can be used to cancel echo. The LMS and NLMS algorithms are described as methods to update the adaptive filter weights to minimize the error between the desired and actual signals. The document outlines the implementation of the LMS and NLMS algorithms for echo cancellation, and presents results showing the NLMS algorithm exhibits better convergence properties than LMS while having only slightly higher computational complexity.
An audio signal is reverberated in a real environment. Adaptive filters are dynamic filters that iteratively alter their characteristics in order to achieve an optimal desired output. An adaptive filter computes the difference between the desired signal and its own output, the error signal, and adjusts its tap weights to drive that error toward zero. In the case of acoustic echo cancellation, the optimal output of the adaptive filter is equal in value to the unwanted echoed signal. When the adaptive filter output equals the desired signal, the error signal goes to zero; in this situation the echoed signal is completely cancelled and the far-end user hears none of their echoed speech returned to them.

WORKING

The input signal is supplied to the project by the user as a real-time recording. Echo is added to the input signal in MATLAB, and the resulting echoed signal is sent for further processing. Suitable adaptive filters then remove the unwanted echo; the LMS and NLMS algorithms are used here to obtain the desired echo-cancelled signal. After processing, the user can play back the echo-cancelled signal and verify the results.
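The following is a minimal MATLAB sketch of how an echoed test signal could be generated before cancellation. The variable names (x, fs, delay, alpha) and the echo model itself, a single delayed and attenuated reflection rather than a measured room response, are illustrative assumptions and not taken from the original project.

    % Simulate an echoed signal: clean speech plus a delayed, attenuated copy.
    fs    = 8000;                        % sampling rate in Hz (assumed)
    x     = randn(4 * fs, 1);            % placeholder for the real-time recording
    delay = round(0.25 * fs);            % hypothetical echo delay of 0.25 s
    alpha = 0.5;                         % hypothetical echo attenuation
    h            = zeros(delay + 1, 1);  % simple echo-path impulse response
    h(1)         = 1;                    % direct path
    h(delay + 1) = alpha;                % delayed, attenuated reflection
    d = filter(h, 1, x);                 % echoed signal passed on for cancellation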
LEAST MEAN SQUARE ALGORITHM (LMS)

The LMS algorithm is well known and widely used because of its computational simplicity. It is this simplicity that has made it the benchmark against which all other adaptive algorithms are judged. With each iteration of the LMS algorithm, the tap weights of the adaptive filter are updated. The step-size parameter μ, a small positive constant, controls the influence of the update term. If the value of the step-size parameter μ is too large, the adaptive filter becomes unstable and the output diverges.
NORMALISED LEAST MEAN SQUARE ALGORITHM (NLMS)

One of the primary disadvantages of the LMS algorithm is its fixed step size. The normalised least mean square (NLMS) algorithm is an extension of the LMS algorithm that bypasses this issue by calculating a data-dependent step size for every iteration; in the implementation used here the step size is taken as the reciprocal of the dot product of the input vector with itself. Because the step-size parameter is chosen based on the input values, the NLMS algorithm shows far greater stability with unknown signals. Its good convergence speed and relatively low computational complexity make NLMS well suited to an echo cancellation system.

IMPLEMENTATION OF LMS ALGORITHM
The implementation of the LMS algorithm involves three stages in each iteration. First, the output of the adaptive filter is obtained as the dot product of the filter tap weights with the echoed signal:

    y(n) = w^T(n) x(n)

Next, the error is calculated as the difference between the desired signal and the filter output:

    e(n) = d(n) − y(n)

Finally, the tap weights of the FIR filter are updated in preparation for the next iteration:

    w(n+1) = w(n) + 2μ e(n) x(n)
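The three stages above can be written directly in MATLAB. This is a minimal sketch rather than the project's exact code; the function name lms_echo_cancel and the argument names (x for the reference input, d for the desired echoed signal, M for the number of taps, mu for the step size) are assumptions made for illustration.

    % LMS adaptive filter: one pass over the signal, updating the taps each sample.
    function [e, w] = lms_echo_cancel(x, d, M, mu)
        w = zeros(M, 1);                     % filter tap weights
        e = zeros(length(x), 1);             % error (echo-cancelled) signal
        for n = M:length(x)
            xn   = x(n:-1:n-M+1);            % most recent M input samples
            y    = w' * xn;                  % stage 1: y(n) = w'(n) * x(n)
            e(n) = d(n) - y;                 % stage 2: e(n) = d(n) - y(n)
            w    = w + 2 * mu * e(n) * xn;   % stage 3: tap-weight update
        end
    end

With the echoed signal from the earlier sketch, a call such as [e, w] = lms_echo_cancel(x, d, 256, 0.005) (the tap count and step size are example values) returns the echo-cancelled signal in e, which can be played back with sound(e, fs).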
IMPLEMENTATION OF NLMS ALGORITHM

The implementation of the NLMS algorithm involves four stages in each iteration. First, the output of the adaptive filter is obtained as the dot product of the filter tap weights with the echoed signal:

    y(n) = w^T(n) x(n)

Next, the error is calculated:

    e(n) = d(n) − y(n)

Then the step size for the current input vector is calculated:

    μ(n) = 1 / (x^T(n) x(n))

Finally, the tap weights of the FIR filter are updated in preparation for the next iteration:

    w(n+1) = w(n) + μ(n) e(n) x(n)
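The NLMS version changes only the step-size stage of the LMS sketch above. Again this is a minimal sketch with assumed names; the small eps term in the denominator is an addition of this sketch, not stated in the description above, to avoid division by zero during silent stretches of the input.

    % NLMS adaptive filter: as LMS, but with a data-dependent step size each sample.
    function [e, w] = nlms_echo_cancel(x, d, M)
        w = zeros(M, 1);                     % filter tap weights
        e = zeros(length(x), 1);             % error (echo-cancelled) signal
        for n = M:length(x)
            xn   = x(n:-1:n-M+1);            % most recent M input samples
            y    = w' * xn;                  % stage 1: filter output
            e(n) = d(n) - y;                 % stage 2: error signal
            mu_n = 1 / (xn' * xn + eps);     % stage 3: step size for this input vector
            w    = w + mu_n * e(n) * xn;     % stage 4: tap-weight update
        end
    end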
RESULTS AND CONCLUSION

The LMS algorithm requires 2N + 1 multiplications per iteration, while NLMS requires 3N + 1: N for obtaining the output signal, one for the scalar multiplication 2μe(n), N for the scalar-by-vector multiplication in the weight update, and, in the case of NLMS, an extra N for the calculation of μ(n). For example, with a filter of N = 1024 taps, LMS needs 2049 multiplications per iteration and NLMS needs 3073, roughly a 50% increase. Because of its simplicity, the LMS algorithm is the most popular adaptive algorithm; however, it suffers from slow and data-dependent convergence behaviour. NLMS, being almost as simple, strikes a better balance between simplicity and performance than the LMS algorithm.

THANKS FOR READING