The Fourier Series breaks down a periodic function into a sum of sinusoidal functions. It is the counterpart of the Fourier Transform for periodic functions. To start the analysis of Fourier Series, let's first define periodic functions.
A function is periodic, with fundamental period T, if the following is true for all t:
f(t+T)=f(t)
[Equation 1]
In plain English, this means that a function of time with period T will have the same value in T seconds as it does now, no matter when you observe the function. Note that a function that is periodic with fundamental period T is also periodic with period 2*T. Hence, the fundamental period is the smallest positive value of T for which equation [1] holds for all t.
As an example, look at the plot of Figure 1:
Figure 1. A periodic square waveform.
The square waveform of Figure 1 has a fundamental period of T.
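To make equation [1] concrete, here is a minimal Python sketch (the waveform and the value of T below are assumptions chosen for illustration) that defines a square wave like the one in Figure 1 and numerically spot-checks that f(t + T) = f(t):

```python
import numpy as np

T = 2.0  # assumed fundamental period; any positive value works

def square_wave(t, period=T):
    """Square wave like Figure 1: +1 on the first half of each period, -1 on the second."""
    phase = np.mod(t, period)                # position within the current period
    return np.where(phase < period / 2, 1.0, -1.0)

# Spot-check periodicity at sample points chosen away from the jumps.
t = np.arange(-5.0, 5.0, 0.25) + 0.1
assert np.allclose(square_wave(t + T), square_wave(t))
assert np.allclose(square_wave(t + 2 * T), square_wave(t))   # also periodic with period 2*T
print("f(t + T) = f(t) holds at every sampled t")
```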
Let's define a 'Fourier Series' now. A Fourier Series, with period T, is an infinite sum of sinusoidal functions (cosine and sine), each with a frequency that is an integer multiple of 1/T (the inverse of the fundamental period). The Fourier Series also includes a constant, and hence can be written as:
g(t) = a_0 + sum_{m=1 to infinity} a_m*cos(2*pi*m*t/T) + sum_{n=1 to infinity} b_n*sin(2*pi*n*t/T)
[Equation 2]
The constants a_0, a_m, and b_n are the coefficients of the Fourier Series. These determine the relative weights of each of the sinusoids (and the constant offset). The question now is:
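As a sketch of what equation [2] looks like numerically (a Python illustration; the function name and the coefficient arrays a and b are assumptions, not notation from the text), the sum truncated after N harmonics can be evaluated directly:

```python
import numpy as np

def fourier_series(t, a0, a, b, T):
    """Evaluate a truncated Fourier Series with fundamental period T:
       g(t) = a0 + sum_{n=1..N} ( a[n-1]*cos(2*pi*n*t/T) + b[n-1]*sin(2*pi*n*t/T) )
    where a and b are length-N arrays of cosine and sine coefficients."""
    t = np.asarray(t, dtype=float)
    g = np.full_like(t, a0)
    for n, (a_n, b_n) in enumerate(zip(a, b), start=1):
        g += a_n * np.cos(2 * np.pi * n * t / T) + b_n * np.sin(2 * np.pi * n * t / T)
    return g

# Example: a0 = 0 with a single sine harmonic reproduces a plain sine wave.
t = np.linspace(0.0, 2.0, 5)
print(fourier_series(t, a0=0.0, a=[0.0], b=[1.0], T=2.0))
```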
For an arbitrary periodic function f(t), how closely can we approximate it with simple sinusoids, each with a frequency that is an integer multiple of the fundamental frequency 1/T? That is, for a given periodic function f(t), how closely can the function g(t) approximate f(t)?
It turns out that the answer is one of the coolest results in all of Mathematics: we can represent f(t) exactly whenever f(t) is continuous and 'smooth'. In real life, the functions we encounter are continuous and smooth, so for the practicing engineer or physicist, all periodic functions can be exactly represented by a Fourier Series.
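As a hedged illustration of that convergence, the sketch below assumes the ±1 square wave of Figure 1, whose sine coefficients are the standard b_n = 4/(pi*n) for odd n and 0 for even n (the cosine coefficients and a_0 all vanish), and shows the mean squared error between f(t) and the truncated series shrinking toward zero as more harmonics are included:

```python
import numpy as np

T = 2.0
t = np.arange(-T, T, 0.01) + 0.005    # dense grid, offset to avoid the jump points

def square_wave(t, period=T):
    phase = np.mod(t, period)
    return np.where(phase < period / 2, 1.0, -1.0)

def partial_sum(t, n_harmonics, period=T):
    """Truncated Fourier Series of the +/-1 square wave: only odd sine terms survive,
    with b_n = 4 / (pi * n)."""
    g = np.zeros_like(t)
    for n in range(1, n_harmonics + 1, 2):      # odd harmonics only
        g += (4 / (np.pi * n)) * np.sin(2 * np.pi * n * t / period)
    return g

for n_harmonics in (1, 5, 25, 125):
    mse = np.mean((square_wave(t) - partial_sum(t, n_harmonics)) ** 2)
    print(f"{n_harmonics:4d} harmonics: mean squared error = {mse:.4f}")
```

Because the square wave is not continuous at its jumps, the match is not exact right at those points, consistent with the continuity requirement above, but the overall error still shrinks steadily as terms are added.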