Everyday DSP for Programmers: The DFT in Use

Last week we built up the DFT (Discrete Fourier Transform) from fundamentals. While that exercise is a good way to remember how the DFT is calculated and how it works under the hood, looking at some examples gives a better sense of how the DFT behaves with different types of signals. We'll take a look at the frequency response of a few basic signals by calculating their DFTs, but first, we'll briefly explore a way to calculate the DFT much faster than the direct algorithm from the last post. This optimized DFT is called the FFT (Fast Fourier Transform), and if you can conform your problem to its restrictions, it's the transform you want to use because, well, it's fast.

Using the FFT

To recap, the direct implementation of the DFT equation is:

f_k = (1/N) ∑[n=0..N-1] s[n]·(cos(2πnk/N) - i·sin(2πnk/N))
  for k = 0 .. N/2-1

And it takes a lot of computation to complete. The inner multiply-sine-cosine operations need to be done N times (once per input sample) for each frequency, and there are N/2 frequencies to calculate. This combination results in a time complexity of O(N²), meaning that as the number of points of the DFT increases, the time to calculate it grows with the square of the number of points. The question is, how can we do better?
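To make that operation count concrete, here's a minimal sketch of the direct implementation. The code examples in this post use Python with NumPy purely for illustration (direct_dft is just an illustrative name); the two nested loops are exactly where the O(N²) cost comes from.

import numpy as np

def direct_dft(s):
    # Direct DFT of a real signal s, computed straight from the equation above.
    # Returns the complex coefficients f[k] for k = 0 .. N/2 - 1.
    N = len(s)
    f = np.zeros(N // 2, dtype=complex)
    for k in range(N // 2):                  # one output per frequency bin
        for n in range(N):                   # N multiply-accumulate steps per bin
            angle = 2 * np.pi * n * k / N
            f[k] += s[n] * (np.cos(angle) - 1j * np.sin(angle))
        f[k] /= N                            # the 1/N scaling from the equation
    return f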

What should become apparent if you stare at this equation for a bit, or if you work out a small DFT by hand, is that a lot of the calculations are duplicated. For example, say you have an 8-point DFT. The calculation for n=2 and k=3 is going to be very similar to the calculation for n=3 and k=2 because the cosine and sine calculations are done on the same angle. The FFT optimizes the transform by reusing these results and eliminating the duplicate operations. Instead of needing O(N²) operations, the FFT only needs O(N log N) operations. That means a 1024-point FFT is on the order of 100 times faster than the equivalent DFT, and an 8192-point FFT is over 1000 times faster than the corresponding DFT.

You'll notice that I'm using examples with N as a power of 2. That was not arbitrary. While FFT algorithms do exist for other values of N, the most common one, the radix-2 Cooley-Tukey FFT algorithm, requires that N be a power of 2. We won't go into the derivation of any FFT algorithms here, but nearly every programming language has a library somewhere with an implementation of one. I found a whole JavaScript DSP library here that includes an FFT algorithm and works pretty well. We're going to focus on using it.
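In the Python sketches below, NumPy's np.fft.rfft stands in for that library's FFT; any FFT implementation will give you the same bins. A quick sanity check that a pure sine wave whose frequency lands exactly on a bin produces a single spike:

import numpy as np

N = 1024
n = np.arange(N)
s = np.sin(2 * np.pi * 50 * n / N)       # exactly 50 cycles across the block

f = np.fft.rfft(s) / N                   # rfft returns bins 0 .. N/2 for real input
magnitudes = np.abs(f)
print(np.argmax(magnitudes))             # prints 50: all the energy is in bin 50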

Since the FFT needs N to be a power of 2, what happens when you don't have a set of samples whose count is an exact power of 2? The simplest thing to do is zero-pad the samples until you get to the next power of 2. Because of how much faster the FFT is, computing an FFT with more samples is still faster than computing a DFT with the original number of samples. The zero-padded FFT does change the frequency spectrum a bit, though. Here is an example with a high-frequency sine wave. Click the graph to have the sine wave scroll off the left end of the graph and have the samples on the right replaced with zeros:


Notice how the frequency spectrum starts out with a single spike at the sine wave's frequency, but as more zeros are added, that spike widens into a rounded peak centered at the signal's frequency with ripples extending in either direction. This peak with ripples should look vaguely familiar. It's actually the absolute value of the sinc function. The sinc function arises because each frequency bin of the DFT really has a miniature sinc-shaped response centered on that bin. When the signal's frequency lands exactly on a bin, the zeros between the ripples of that mini-sinc fall exactly on the other frequency bins, so all you see is a single spike. Zero-padding the time-domain signal increases the number of points in the frequency domain, so the spectrum gets sampled in between the original bins and the sinc shape spreads out and becomes visible.
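Here's a small sketch of the zero-padding step itself (zero_pad_to_pow2 is a made-up helper, not part of any library). With 1000 samples padded out to 1024, the sine wave no longer lands exactly on a bin, and its peak spreads out just as described above.

import numpy as np

def zero_pad_to_pow2(s):
    # Pad a block of samples with zeros out to the next power of 2.
    N = len(s)
    padded_len = 1 << (N - 1).bit_length()      # smallest power of 2 >= N
    return np.concatenate([s, np.zeros(padded_len - N)])

n = np.arange(1000)                              # 1000 samples, not a power of 2
s = np.sin(2 * np.pi * 200 * n / 1000)
padded = zero_pad_to_pow2(s)                     # now 1024 samples
spectrum = np.abs(np.fft.rfft(padded)) / len(s)  # the peak spreads instead of being one spike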

So zero-padding is an option when you need to adjust the number of samples in a signal to get to a power of 2, but the trade-off is that energy from each frequency will leak into the surrounding bins. The peaks in the frequency domain will still show the locations of the main frequencies, but their magnitudes will be more difficult to measure. Most of the time, this compromise won't be necessary, though. As long as you have an input signal streaming in, all you have to do is break the stream of samples into chunks that match the size of FFT you want to do.
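A rough sketch of that chunking approach (fft_chunks is an assumed helper, and leftover samples that don't fill a final block are simply dropped here):

import numpy as np

def fft_chunks(stream, fft_size=1024):
    # Collect samples from any iterable and yield the magnitude spectrum of
    # each consecutive fft_size block.
    block = []
    for sample in stream:
        block.append(sample)
        if len(block) == fft_size:
            yield np.abs(np.fft.rfft(block)) / fft_size
            block = []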

With that, let's take a look at a few different signals' frequency response.

Impulse Frequency Response

The impulse function is one of the basic signals we looked at way back in the beginning of this series, and when you calculate the FFT of it, you get a frequency response like this:


The frequency spectrum shows that the impulse function has an equal magnitude at every frequency in the frequency domain, proportional to the height of the impulse. I tweaked this graph a bit by scaling up the magnitudes by the size of the FFT so that you can see them. Otherwise, the frequency magnitudes are so small that they would all appear to be zero.
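You can verify the flat spectrum, and the 1/N scaling that makes it hard to see, with a few lines (again a NumPy sketch):

import numpy as np

N = 64
impulse = np.zeros(N)
impulse[0] = 1.0                          # unit impulse at n = 0

spectrum = np.abs(np.fft.rfft(impulse)) / N
print(spectrum)                           # every bin is 1/N = 0.015625
print(spectrum * N)                       # scaled up by N, every bin is exactly 1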

What's interesting about this signal is that an impulse in the time domain has a constant response in the frequency domain, while a constant (DC) signal in the time domain has a single frequency response in the frequency domain. That single frequency is the DC frequency. Any pure sine wave at a frequency corresponding to one of the FFT frequency bins will also have a single frequency magnitude that looks like an impulse in the frequency domain. So the impulse function and the sine wave are opposites of each other in both domains. In the time domain the impulse is extremely discontinuous while the sine wave is equally smooth no matter how many times you differentiate it. In the frequency domain the impulse function has infinite frequency content while the sine wave contains a single frequency. These two signals couldn't be more dissimilar, and yet they are both fundamental to DSP algorithms and analysis.

Step Frequency Response

Next up, we'll look at the step function. The step function is closely related to the square wave, so the FFT of the step function follows the infinite Fourier Series approximation of the square wave that was developed at the end of the transforms post and used to introduce the DFT in the last post.

One of the properties of the FFT (and the DFT, since they are equivalent) is that whatever samples are used in the time domain of the FFT are converted to the frequency domain as if those samples were one period of a periodic signal. Stated another way, it's as if the time-domain samples are repeated infinitely many times to the left and right of the block of samples that's actually used in the transform. This happens because the number of samples used in the transform is finite, but the sine and cosine waves that are part of the transform are infinite and periodic. That means that when you convert to the frequency domain and then back to the time domain by adding all of the sine waves up, the resulting time-domain signal is a periodic version of the original signal extending forever in both directions.
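One way to see this periodic extension in code is to rebuild the signal from its FFT coefficients and evaluate the synthesis sum outside the original block of N samples (a sketch using NumPy's full complex FFT; synthesize is just an illustrative helper):

import numpy as np

N = 32
n = np.arange(N)
s = np.where(n < N // 2, 1.0, 0.0)        # one period of a step/square shape

F = np.fft.fft(s)

def synthesize(t):
    # Evaluate the Fourier synthesis sum at any sample index t, even outside 0..N-1.
    k = np.arange(N)
    return np.real(np.sum(F * np.exp(2j * np.pi * k * t / N)) / N)

print(synthesize(3), synthesize(3 + N), synthesize(3 - N))   # all equal s[3] = 1.0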

Here is what the frequency response of the step function looks like:


You can click on the graph to pull in the step function to make a square wave, and as you keep clicking, the frequency of the square wave will increase. You can see that the series of frequencies that make up these square waves consists entirely of the odd harmonics from the Fourier Series approximation, and their magnitudes decrease as the frequency goes up. As the frequency of the square wave increases, the frequencies in the FFT increase and spread out because the fundamental frequency of the series approximation is increasing.

Once the frequency of the square wave gets high enough, you'll start to notice smaller-magnitude frequencies appearing between the main peaks. These are actually frequencies in the infinite Fourier Series that are reflected back onto lower frequencies because they are above the Nyquist Frequency. They are still present in the square wave, but because of the finite sampling rate, they alias to lower frequencies than they actually are.
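The odd-harmonic structure is easy to confirm with a synthetic square wave (another NumPy sketch). Raising the 8 cycles to a much higher count pushes the upper harmonics past bin N/2, and they fold back down as the smaller reflected peaks described above.

import numpy as np

N = 1024
n = np.arange(N)
square = np.sign(np.sin(2 * np.pi * 8 * n / N))   # square wave with 8 cycles

spectrum = np.abs(np.fft.rfft(square)) / N
top_bins = np.sort(np.argsort(spectrum)[-5:])
print(top_bins)        # [ 8 24 40 56 72]: the fundamental and its odd harmonics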

Triangle Wave Frequency Response

For one last frequency response, we'll look at the triangle wave. The triangle wave is a signal whose properties lie somewhere between the square wave and the sine wave. The signal is a linear ramp up until it reaches a peak, and then a linear ramp down. The whole thing then repeats to create the periodic signal. Its FFT looks like this:


Again, as you click, the frequency of the triangle wave will increase. The frequency response looks similar to the square wave's, but with less frequency content. The magnitudes of the harmonics above the fundamental attenuate much faster than they do for the square wave, so much so that it doesn't appear that there's any reflection of higher frequencies. The reflections are still there, but they are too small to see on this graph.

The fact that there's less frequency content in a triangle wave than there is in a square wave is also apparent from looking at the time-domain signal. The square wave has sharp transitions that have a lot of frequency content, while the triangle wave only has sharp corners at the peaks. The triangle wave is much "closer" to a sine wave than the square wave is.
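The faster drop-off is easy to quantify with the same kind of check (another NumPy sketch): the square wave's harmonics fall off roughly as 1/k, while the triangle wave's fall off as 1/k².

import numpy as np

N = 1024
n = np.arange(N)
phase = (8 * n / N) % 1.0                        # 8 cycles, phase in [0, 1)
triangle = 1.0 - 4.0 * np.abs(phase - 0.5)       # linear ramps between -1 and +1

spectrum = np.abs(np.fft.rfft(triangle)) / N
print(spectrum[8], spectrum[24], spectrum[40])   # roughly 0.405, 0.045, 0.016: a 1/k^2 drop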

Hopefully, going through the frequency response of these different signals has helped develop your intuition about how signals will look in the frequency domain. One thing that was consistent throughout these examples was that the signal frequencies all fell exactly on FFT bin frequencies (integer multiples of the sample rate divided by the number of samples), so they fit neatly into the bins of the frequency domain. Next week we'll look at what happens when that's not the case, and how we can still accurately measure the main frequencies of the signal after calculating the FFT.


Other DSP posts:
Basic Signals
Transforms
Sampling Theory
Averaging
Step Response of Averaging
Signal Variance
Edge Detection
Signal Envelopes
Frequency Measurement
The Discrete Fourier Transform
The DFT in Use
Spectral Peak Detection
FIR Filters
Frequency Detection
DC and Impulsive Noise Removal
