Local Naive Bayes Nearest Neighbor

Code

https://github.com/sanchom/sjm

Dataset

You can use the code above to extract our feature sets. We extracted dense SIFT at multiple scales, matching the settings used by Boiman et al. in their original NBNN paper as closely as possible.

FAQ

1. You mention that you discard low-contrast features. How did you decide which features were low contrast?

We used the ‘norm’ recorded with each keypoint returned by vl_dsift_get_keypoints, and discarded features whose norm fell below 2.0. We chose that threshold based on a coarse evaluation on a small subset of Caltech 101.
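
As a rough sketch (not the exact code from the repository above), the filtering could look like the following, using vlfeat's C API from C++. It assumes vl_dsift_process has already been run on an image; FilterLowContrast is a hypothetical helper name.

    extern "C" {
    #include <vl/dsift.h>
    }
    #include <vector>

    // Hypothetical helper: keeps only descriptors whose keypoint norm
    // exceeds the threshold (we used 2.0). Assumes vl_dsift_process has
    // already been called on the filter.
    std::vector<std::vector<float>> FilterLowContrast(
        const VlDsiftFilter* dsift, double norm_threshold) {
      const VlDsiftKeypoint* keypoints = vl_dsift_get_keypoints(dsift);
      const float* descriptors = vl_dsift_get_descriptors(dsift);
      const int num_keypoints = vl_dsift_get_keypoint_num(dsift);
      const int dimension = vl_dsift_get_descriptor_size(dsift);

      std::vector<std::vector<float>> kept;
      for (int i = 0; i < num_keypoints; ++i) {
        // Each keypoint's 'norm' field records the local gradient energy;
        // low values indicate low-contrast patches.
        if (keypoints[i].norm > norm_threshold) {
          kept.push_back(std::vector<float>(
              descriptors + i * dimension,
              descriptors + (i + 1) * dimension));
        }
      }
      return kept;
    }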

2. How many scales did you use?

We extracted descriptors at four scales, each 1.5 times larger than the last: 16×16, 24×24, 36×36, and 54×54 pixels.

3. What step size did you use with vlfeat?

We used a 3-pixel step size.
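
Putting the scale and step settings together, here is a simplified sketch (again, not the repository's exact code) of single-scale dense SIFT extraction with vlfeat. A SIFT descriptor spans 4×4 spatial bins, so bin sizes of 4, 6, and 9 pixels give 16×16, 24×24, and 36×36 support; 54×54 would require a non-integer bin size, so one way to reach that scale is to rescale the image instead. ExtractAtScale is a hypothetical helper name.

    extern "C" {
    #include <vl/dsift.h>
    }
    #include <vector>

    // Hypothetical helper: extracts dense SIFT descriptors at one scale
    // with a 3-pixel step. 'image' is a grayscale float image.
    void ExtractAtScale(const float* image, int width, int height,
                        int bin_size, std::vector<float>* descriptors) {
      VlDsiftFilter* dsift = vl_dsift_new(width, height);
      vl_dsift_set_steps(dsift, 3, 3);  // 3-pixel step in x and y

      // Set the spatial bin size; the descriptor covers 4 bins per side,
      // so bin_size = 4 yields 16x16-pixel support.
      VlDsiftDescriptorGeometry geometry = *vl_dsift_get_geometry(dsift);
      geometry.binSizeX = bin_size;
      geometry.binSizeY = bin_size;
      vl_dsift_set_geometry(dsift, &geometry);

      vl_dsift_process(dsift, image);

      const int num = vl_dsift_get_keypoint_num(dsift);
      const int dimension = vl_dsift_get_descriptor_size(dsift);
      const float* data = vl_dsift_get_descriptors(dsift);
      descriptors->assign(data, data + num * dimension);

      vl_dsift_delete(dsift);
    }

Calling ExtractAtScale once per bin size, and filtering each result by norm as sketched above, approximates our pipeline; the repository remains the authoritative reference.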

4. Have you tried sampling only at interest points?

We haven’t tried sampling only at interest points, largely because we believe much previous work shows that dense sampling outperforms interest-point sampling for generic object and scene recognition.

Publications

Sancho McCann and David G. Lowe. “Local Naive Bayes Nearest Neighbor for Image Classification.” CVPR, 2012. [pdf]

Sancho McCann and David G. Lowe. “Local Naive Bayes Nearest Neighbor for Image Classification.” Technical Report TR-2011-11, Department of Computer Science, University of British Columbia, 2011. [@ UBC] [@ arXiv]

Sancho McCann and David G. Lowe. “Object Categorization Using Sparse Nearest Neighbor Distances for Improved Accuracy and Scalability.” 1st IEEE Workshop on Kernels and Distances for Computer Vision, 2011. [poster]