# Optimal padding for convolution with the FFT

The Fast Fourier Transform (FFT) is fundamental to digital signal processing as it enables the (circular) convolution of two signals of length $n$ to be computed in $O(n \log n)$ time. Different algorithms must be used for different $n$ in order to achieve this asymptotic bound (see this paper on the design of FFTW for more details). Non-periodic convolutions can be computed by simply taking a subset of the output elements. However, in this scenario, it might be possible to use a faster algorithm by considering a larger circular convolution than necessary, padding the signals with arbitrary values. This article will investigate how to find a somewhat-optimal size.
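As a concrete illustration (a minimal NumPy sketch, not the benchmarking code used in this article), a linear convolution can be obtained from a circular one by padding both signals with zeros up to the length of the full linear output:

```python
import numpy as np

def fft_convolve(x, y):
    """Linear convolution of x and y via a zero-padded circular convolution."""
    n = len(x) + len(y) - 1          # length of the full linear convolution
    X = np.fft.rfft(x, n)            # rfft zero-pads x up to length n
    Y = np.fft.rfft(y, n)
    return np.fft.irfft(X * Y, n)    # all n outputs equal the linear convolution

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0])
print(np.allclose(fft_convolve(x, y), np.convolve(x, y)))  # True
```

Nothing forces the transform length to be exactly $n$: any size at least $n$ works, which is what opens the door to padding up to a faster size.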

### Cooley-Tukey algorithm

This is perhaps the most well-known algorithm for computing the FFT. It relies on the observation that if the signal length can be factored as $n = n_1 n_2$, then the transform of length $n$ can be computed using $n_1$ transforms of length $n_2$ and $n_2$ transforms of length $n_1$. The cost of computing the FFT thus satisfies

$$T(n) = n_1 T(n_2) + n_2 T(n_1) + O(n).$$

Let the prime factorisation of $n$ be $n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$. Then the cost of the transform is

$$T(n) = O\!\left(n \sum_{i=1}^{k} a_i p_i\right).$$

We can thus see that if every $p_i \le c$ for some small constant $c$, then $T(n)$ is $O(n \log n)$. However, if $n$ is prime then the complexity will still be $O(n^2)$, and if $n$ has large prime factors then the algorithm will be several times slower than when all the prime factors are small.
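The cost estimate $n \sum_i a_i p_i$ is easy to compute by trial division. A small sketch (the function name and factoriser are my own, not from the article):

```python
def cooley_tukey_cost(n):
    """Estimate the Cooley-Tukey cost n * sum(a_i * p_i), where
    n = prod(p_i ** a_i) is the prime factorisation of n."""
    weight, m, p = 0, n, 2
    while p * p <= m:
        while m % p == 0:
            weight += p      # each power of p_i contributes p_i
            m //= p
        p += 1
    if m > 1:
        weight += m          # any leftover factor is prime
    return n * weight

print(cooley_tukey_cost(4096))  # 2^12: 4096 * 24 = 98304
```

For a prime $n$ the weight is $n$ itself, so the estimate degrades to $n^2$, matching the discussion above.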

### Next power of a few small primes

A simple heuristic is therefore to round the length up to the next power of a few small primes. In the following experiment, we compare computing the FFT of size $n$ directly to computing it at the next power of 2, as well as at the next number of the form $2^a 3^b 5^c 7^d$.
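Finding the next number of this form can be done by enumerating products of 3, 5 and 7 below the current best candidate and topping each up with powers of 2. A sketch (names are mine, and this is not the article's exact search code):

```python
def next_pow2(n):
    m = 1
    while m < n:
        m *= 2
    return m

def next_7_smooth(n):
    """Smallest m >= n of the form 2^a * 3^b * 5^c * 7^d."""
    best = next_pow2(n)          # a valid candidate to start from
    p7 = 1
    while p7 < best:
        p5 = p7
        while p5 < best:
            p3 = p5
            while p3 < best:
                m = p3           # multiply by 2 until we reach n
                while m < n:
                    m *= 2
                best = min(best, m)
                p3 *= 3
            p5 *= 5
        p7 *= 7
    return best

print(next_7_smooth(17))    # 18 = 2 * 3^2
print(next_7_smooth(1000))  # 1000 = 2^3 * 5^3 is already of this form
```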

From this we can see that the simple heuristic of rounding up to the next power of 2 gives a big improvement for some sizes, but can also be up to twice as slow. Using the next number of the form $2^a 3^b 5^c 7^d$ is often, but not always, better than using the next power of 2. It should be possible to achieve the staircase-like curve which lower-bounds this set of points. However, we would like to do so without measuring the running time for every size.

The first thing that I tried was to find the number $m = 2^a 3^b 5^c 7^d \ge n$ which minimises

$$\operatorname{cost}(m) = m \, (2a + 3b + 5c + 7d).$$

This was found using depth-first search in a tree whose edges each correspond to incrementing one of $a$, $b$, $c$ or $d$.
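Such a search can be sketched as follows (the pruning rule and the restriction to non-decreasing prime order are my own choices, assumed rather than taken from the article):

```python
def best_size(n):
    """DFS for m = 2^a 3^b 5^c 7^d >= n minimising m * (2a + 3b + 5c + 7d)."""
    primes = (2, 3, 5, 7)
    best = [None, float("inf")]  # (m, cost)

    def dfs(i, m, weight):
        if m >= n:
            if m * weight < best[1]:
                best[:] = [m, m * weight]
            return
        if n * weight >= best[1]:
            # any completion has m >= n and a larger weight, so prune
            return
        for j in range(i, len(primes)):
            # only multiply by primes >= the last one used, so each
            # candidate m is visited along exactly one path
            dfs(j, m * primes[j], weight + primes[j])

    dfs(0, 1, 0)
    return best[0]

print(best_size(1000))  # 1024: cost 1024 * 20 beats 1000 * 21
```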

The result is not resounding. Delving into the FFTW paper, the authors mention that they use highly optimised “codelets” for sizes 2, …, 32, 64. Resorting to (mild) hacks, we can incorporate this into the cost function by decomposing $2^a = 2^{a_1} \cdot 2^{a_2}$, where $a_2 \le 6$, to consider a maximum codelet size of $2^6 = 64$. We then introduce a relative acceleration $\gamma$ which applies to the first $a_2$ powers of 2. The modified cost function is

$$\operatorname{cost}(m) = m \left(\frac{2 a_2}{\gamma} + 2 a_1 + 3b + 5c + 7d\right).$$

The results using these parameters are not perfect, but better.
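The modified cost function can be sketched as below. The default values of $\gamma$ and the codelet bound here are illustrative placeholders of my own, not the values used in the article:

```python
def modified_cost(m, gamma=3.0, max_codelet_log2=6):
    """Cost m * (2*a2/gamma + 2*a1 + 3b + 5c + 7d), where the first a2 <= 6
    powers of 2 (covered by codelets up to size 64) are accelerated by gamma.
    gamma = 3.0 is an assumed, illustrative value."""
    exps = {}
    r = m
    for p in (2, 3, 5, 7):
        exps[p] = 0
        while r % p == 0:
            exps[p] += 1
            r //= p
    assert r == 1, "m must be of the form 2^a 3^b 5^c 7^d"
    a2 = min(exps[2], max_codelet_log2)  # powers of 2 handled by codelets
    a1 = exps[2] - a2                    # remaining powers of 2
    return m * (2 * a2 / gamma + 2 * a1
                + 3 * exps[3] + 5 * exps[5] + 7 * exps[7])

print(modified_cost(4096))  # 4096 * (12/3 + 12) = 65536.0
```

With $\gamma = 1$ this reduces exactly to the earlier cost $m(2a + 3b + 5c + 7d)$, so the original search can be reused unchanged with this cost plugged in.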