We have made a number of datasets available to the research community, both for reproducing our findings and for use in other research work.
Images for evaluating the performance of feature detectors
The majority of work on evaluating the performance of feature detectors (SIFT, SURF, etc.) uses the same database of images. This is a good thing in principle, but that database is rather small, leaving one wondering whether the measured differences in performance are statistically significant. In an effort to answer this, we have collected a database of some 520 images, about ten times the size of the standard one.
All the images in this database were captured using a Nikon D300 camera equipped with a Nikkor 18–200 mm lens. Images were captured in NEF format and converted into 8-bit PPM format using dcraw. Subsequent processing converted the RGB images to grey-scale and then reduced their size by averaging each 3 × 3-pixel region to a single pixel. This was done using the following Python script, which makes use of our EVE package:
#!/usr/bin/env python
import sys
import eve

for fn in sys.argv[1:]:
    # Replace the three-character extension (e.g., 'ppm') with 'pgm'.
    ofn = fn[:-3] + 'pgm'
    print(fn, '->', ofn)
    im = eve.image(fn)        # load the image
    im = eve.mono(im)         # convert RGB to grey-scale
    nim = eve.reduce(im, 3)   # average 3 x 3 regions to single pixels
    eve.output(nim, ofn)      # write the result
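For readers who do not have the EVE package to hand, the two processing steps above (grey-scale conversion followed by 3 × 3 block averaging) can be sketched in plain NumPy. The function names and the use of a simple channel mean for grey-scale conversion are our assumptions for illustration, not part of EVE itself:

```python
import numpy as np

def mono(im):
    """Convert an H x W x 3 RGB image to grey-scale.

    A simple mean over the colour channels is assumed here; EVE's
    actual conversion weights may differ.
    """
    return im.mean(axis=2)

def reduce_block(im, n=3):
    """Average each n x n block of a 2-D image down to a single pixel."""
    h, w = im.shape
    h, w = h - h % n, w - w % n           # trim so dimensions divide by n
    im = im[:h, :w]
    # Reshape so each n x n block occupies axes 1 and 3, then average them.
    return im.reshape(h // n, n, w // n, n).mean(axis=(1, 3))
```

Calling `reduce_block(mono(im), 3)` on an RGB array thus mirrors the pipeline in the script: a 1500 × 1500 input would come out as a 500 × 500 grey-scale image.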
The database is partitioned into four subsets: Campus 1, Campus 2, Indoor, and Snowy.
The Campus 1 and Campus 2 sets consist of outdoor shots that exhibit good detail. The Indoor set tends to be of a single object on a uniform background, while the Snowy set is of outside scenes (mostly of the town of Wivenhoe), taken after a snowfall. It is known that some feature detectors work less well on low-texture images, and this can be explored by comparing the performances on the Campus sets with the Indoor and Snowy sets.
Derived from these images are a number of datasets that exhibit increasing levels of artificially introduced degradation: