Places CNN

MIT Computer Science and Artificial Intelligence Laboratory

A CNN trained on the Places Database can be used directly for scene recognition, while the deep scene features from the higher-level layers of the CNN can be used as generic features for visual recognition. We share the following pre-trained CNNs using the Caffe deep learning toolbox. For each CNN, we provide the network deploy file and the trained network model, which can be loaded directly with Caffe. Please cite our NIPS'14 paper if you use these CNNs.
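As a concrete illustration, the sketch below loads one of the released models with pycaffe and classifies an image. The deploy/model filenames, the image path, and the blob names ('data', 'prob') are placeholders in the style of a typical Caffe deploy file, not the exact released files; adjust them to the files you download.

```python
# Minimal sketch: load a pre-trained Places-CNN with pycaffe and classify an image.
import numpy as np
import caffe

caffe.set_mode_cpu()

# Placeholder paths for the downloaded deploy file and trained weights.
net = caffe.Net('places205CNN_deploy.prototxt',     # network definition
                'places205CNN.caffemodel',          # trained weights
                caffe.TEST)

# Standard Caffe preprocessing: CHW layout, mean subtraction, RGB -> BGR.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))                    # HWC -> CHW
transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))   # approximate BGR channel means
transformer.set_raw_scale('data', 255)                          # caffe.io loads images in [0, 1]
transformer.set_channel_swap('data', (2, 1, 0))                 # RGB -> BGR

image = caffe.io.load_image('example.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
output = net.forward()

# Assumes the softmax output blob over the 205 scene categories is named 'prob'.
top5 = output['prob'][0].argsort()[::-1][:5]
print('Top-5 predicted category indices:', top5)
```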

Places-CNNs available for download

  • Places205-AlexNet: AlexNet CNN trained on the 205 scene categories of the Places Database with ~2.5 million images (this is the CNN used in our NIPS'14 paper). Scene attribute detectors associated with the FC7 feature of Places205-AlexNet can be downloaded here (see the feature-extraction sketch after this list).

  • Hybrid-AlexNet: AlexNet CNN trained on 1183 categories (205 scene categories from the Places Database and 978 object categories from the training data of ILSVRC2012 (ImageNet)) with ~3.6 million images (this is the CNN used in our NIPS'14 paper).

  • Places205-GoogLeNet: GoogLeNet CNN trained on the 205 scene categories of the Places Database with ~2.5 million images, achieving a top-1 accuracy of 0.5567 and a top-5 accuracy of 0.8541 on the validation set.

  • Places205-VGG: VGG-16 CNN trained on the 205 scene categories of the Places Database with ~2.5 million images, achieving a top-1 accuracy of 0.5890 and a top-5 accuracy of 0.8770 on the Places205 test set using the standard 10-crop evaluation for each test image.
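As noted above, the higher-layer activations (for example, the FC7 feature of Places205-AlexNet) can serve as generic features for visual recognition. The sketch below extracts that feature, assuming the network has already been loaded as in the earlier loading example and that the deploy file names the layer 'fc7'.

```python
# Minimal sketch: extract the FC7 activation of Places205-AlexNet as a generic
# 4096-dimensional scene feature. Assumes `net` and `transformer` from the
# loading sketch above, and that the deploy file names the layer 'fc7'.
image = caffe.io.load_image('example.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
net.forward()
fc7_feature = net.blobs['fc7'].data[0].copy()  # shape: (4096,)
print(fc7_feature.shape)
```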

New: Places365-CNNs are available, predicting more categories than the Places205-CNNs.

New: A single script for running PlacesCNN prediction in PyTorch is available.
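The sketch below shows what such a PyTorch prediction script might look like; it is not the released script itself. The checkpoint filename, the category-list filename and its line format, and the choice of a Places365 ResNet18 backbone are all assumptions here.

```python
# Minimal sketch of PlacesCNN prediction in PyTorch (filenames are placeholders).
import torch
from torchvision import models, transforms
from PIL import Image

# Assumed: a Places365 ResNet18 checkpoint containing a 'state_dict' entry.
model = models.resnet18(num_classes=365)
checkpoint = torch.load('resnet18_places365.pth.tar', map_location='cpu')
state_dict = {k.replace('module.', ''): v for k, v in checkpoint['state_dict'].items()}
model.load_state_dict(state_dict)
model.eval()

# Standard ImageNet-style preprocessing, commonly reused for Places models.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed: one category per line, name in the first whitespace-separated field.
with open('categories_places365.txt') as f:
    categories = [line.strip().split(' ')[0] for line in f]

image = Image.open('example.jpg').convert('RGB')
input_tensor = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.nn.functional.softmax(model(input_tensor), dim=1)[0]

top5_prob, top5_idx = probs.topk(5)
for p, i in zip(top5_prob, top5_idx):
    print(f'{categories[i.item()]}: {p.item():.3f}')
```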