Using pyGASP: a Python signal processing (FFT, DWT, DCT) library with GPU acceleration via pyCUDA

I came across pyGASP while working on my image deconvolution research. It seems to be one of the only Python tools that provides "GPU-accelerated" discrete wavelet transforms, and it features a barebones API similar to pywt. Sadly, both the docs and the performance are a bit lacking, so here are my notes on getting it working and benchmarking it. It turns out that the pyGASP GPU code is about 5x slower than the CPU-based pywt (at least in my test case).

Getting Started...

Installing pyGASP:

The easiest way to install pyGASP is with pip or a similar tool.

$> sudo pip install pygasp
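Note that pyGASP's GPU path depends on pyCUDA, which pip will not necessarily pull in for you. A quick way to sanity-check which of the relevant packages are importable (a generic helper of my own, not part of pyGASP):

```python
def missing_deps(modules):
    """Return the subset of module names that cannot be imported."""
    missing = []
    for m in modules:
        try:
            __import__(m)
        except ImportError:
            missing.append(m)
    return missing

# Anything printed here needs installing before the benchmarks below will run.
print(missing_deps(["pygasp", "pywt", "pycuda"]))
```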


The official documentation can be found here:
and here:

The docstring-generated documentation is not too bad and certainly gives you the basics. The README, on the other hand, is out of date. What follows are my notes from evaluating the library.

pywt vs pyGASP: benchmarking the 2D wavelet transform

I am only going to compare dwt2 between these two packages and, perhaps wrongly, assume that other comparisons would yield similar results.

import numpy as np
import scipy.misc
import pylab
from datetime import datetime as dt

import pywt
import pygasp.dwt.dwt as pygaspDWT
import pygasp.dwt.dwtCuda as pygaspDWTgpu

def show(data):
    # quick viewer for a 2D coefficient array (pylab is imported above)
    pylab.imshow(data, cmap="gray")

# Let's get an image to play with.
img = scipy.misc.lena().astype(np.float32)

# pywt
s =
res_pywt = pywt.dwt2(img, "haar", "zpd")
print "pywt took:", - s

# pygasp CPU version
s =
res_gasp = pygaspDWT.dwt2(img, "haar", "zpd")
print "pygaspCPU took:", - s

# pygasp GPU version
s =
res_gaspGPU = pygaspDWTgpu.dwt2(img, "haar", "zpd")
print "pygaspGPU took:", - s

# if you want to view the results, e.g. the approximation coefficients:
# show(res_pywt[0])

Now that we have a basic comparison, let's grab a larger image and try it again:

$> wget -O largeTest.jpg

And add the following:

# add this at the top
from PIL import Image
# replace the lena image with this (convert("L") makes it single-channel,
# since dwt2 expects a 2D array):
imgObj ="largeTest.jpg").convert("L")
img = np.array(imgObj).astype(np.float32)
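If you don't have a large test image handy, a synthetic array of comparable size works just as well for timing purposes (the 2048x2048 shape here is an arbitrary choice, just something much bigger than lena's 512x512):

```python
import numpy as np

# seeded RNG so repeated benchmark runs transform identical data
rng = np.random.RandomState(0)
img = (rng.rand(2048, 2048) * 255).astype(np.float32)
print(img.shape)
```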

Sadly the results with the larger image are quite disappointing. I had hoped that the pyGASP GPU code would be at least as fast as the CPU-based pywt.

$> python
pywt took: 0:00:01.412802
pygaspCPU took: 0:01:34.589889
pygaspGPU took: 0:00:06.963826

Even though the pyGASP GPU version is about 13.6 times faster than its own CPU equivalent, the pywt CPU version is another ~5 times faster still! Perhaps this library is not yet ready for prime time, but it might be a starting point for a truly GPU-accelerated implementation. These tests were performed on an Nvidia Tesla K20c. For now I will have to venture on and find a faster solution, but I might come back to this and work on optimizing it to suit my needs. Sadly, there is no public code repo available.
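For the record, the speedup figures follow directly from the timings above (plain arithmetic, nothing library-specific):

```python
pywt_t     = 1.412802        # pywt, seconds
gasp_cpu_t = 60 + 34.589889  # pygaspCPU, seconds
gasp_gpu_t = 6.963826        # pygaspGPU, seconds

print("pyGASP GPU vs pyGASP CPU: %.1fx" % (gasp_cpu_t / gasp_gpu_t))  # ~13.6x
print("pywt CPU vs pyGASP GPU:   %.1fx" % (gasp_gpu_t / pywt_t))      # ~4.9x
```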

Edit: Looks like pyGASP is related to this paper