The FFT (Fast Fourier Transform) was first discussed by Cooley and Tukey in 1965. A discrete Fourier transform can also be computed with an FFT-style algorithm by means of the Danielson-Lanczos lemma even when the number of points N is not a power of two, although with some loss of speed; in that case the transform can only be broken into sub-transforms whose sizes correspond to the prime factors of N [1]. The FFT is an algorithm for computing the discrete Fourier transform, and its significance lies in reducing the number of computations needed for N points from roughly 2N² to 2N log₂ N, where log₂ denotes the logarithm to base 2. In Fourier analysis, if the function being transformed is not harmonically related to the sampling frequency, the FFT response resembles a sinc function (the "sampling function" that appears throughout signal processing and Fourier analysis) [4]. The integrated power still gives the correct value in this case, but the resulting aliasing (leakage) can be reduced by apodization with a tapering function, and this reduction is bought at the expense of a broadened spectral response [1].
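As a minimal sketch of the idea behind the computational saving, the Python snippet below contrasts a direct O(N²) DFT with a radix-2 FFT built on the Danielson-Lanczos splitting; the function names (dft_direct, fft_radix2) are illustrative only and not taken from the cited sources, and the radix-2 recursion assumes N is a power of two, as discussed above.

import cmath


def dft_direct(x):
    """Direct DFT: roughly N^2 complex multiply-adds."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]


def fft_radix2(x):
    """Recursive radix-2 FFT; requires len(x) to be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    if N % 2 != 0:
        raise ValueError("this radix-2 sketch needs N to be a power of two")
    even = fft_radix2(x[0::2])   # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])    # DFT of odd-indexed samples
    # Danielson-Lanczos combination:
    #   X[k]       = E[k] + W^k * O[k]
    #   X[k + N/2] = E[k] - W^k * O[k],  with W = exp(-2*pi*i/N)
    tw = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + tw[k] for k in range(N // 2)] +
            [even[k] - tw[k] for k in range(N // 2)])


if __name__ == "__main__":
    x = [complex(n % 3, 0) for n in range(16)]       # N = 16, a power of two
    a, b = dft_direct(x), fft_radix2(x)
    print(max(abs(u - v) for u, v in zip(a, b)))     # ~1e-13: only round-off error

Both routines return the same spectrum; the recursive version simply reuses the two half-size transforms at each level, which is where the drop from ~2N² to ~2N log₂ N operations comes from.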
Introduction:
The fast Fourier transform (FFT) is an outgrowth of Fourier analysis, and faster variants of it have continued to be proposed in recent years. In 2011, a group of researchers at MIT CSAIL (Computer Science and Artificial Intelligence Laboratory), professors Piotr Indyk and Dina Katabi together with CSAIL graduate students Haitham Hassanieh and Eric Price, developed the sparse Fourier transform, an algorithm that can outperform the classical FFT when the signal's spectrum is dominated by a small number of frequency components.
References: [3] Press et al. (1992), pp. 412-413; Arndt.