Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.

Check out our web image classification demo!

Why Caffe?

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine, then deploy to commodity clusters or mobile devices.
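For instance, in the Python interface the device choice is a one-line switch; a minimal sketch (the device index is only an example):

    import caffe

    # The same network definition runs on either device; only the mode call changes.
    caffe.set_device(0)       # select which GPU to use
    caffe.set_mode_gpu()      # train or test on a CUDA-capable GPU

    # ... later, for deployment on a CPU-only machine:
    caffe.set_mode_cpu()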

Extensible code fosters active development. In Caffe's first year, it was forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors, the framework tracks the state of the art in both code and models.

Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That's 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still. We believe that Caffe is among the fastest convnet implementations available.

Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Join our community of brewers on the caffe-users group and GitHub.

* With the ILSVRC2012-winning SuperVision model and prefetching IO.

Documentation

Notebook Examples

Image Classification and Filter Visualization
Instant recognition with a pre-trained model and a tour of the net interface for visualizing features and parameters layer-by-layer.
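A minimal sketch of that net interface in Python (the prototxt and weights file names are placeholders):

    import caffe

    # Load a trained network in test phase (file names here are placeholders).
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # net.blobs holds layer activations, net.params holds learned weights;
    # both are ordered dicts keyed by layer/blob name.
    for name, blob in net.blobs.items():
        print(name, blob.data.shape)
    for name, params in net.params.items():
        print(name, [p.data.shape for p in params])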

Learning LeNet
Define, train, and test the classic LeNet with the Python interface.

Fine-tuning for Style Recognition
Fine-tune the ImageNet-trained CaffeNet on new data.
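The fine-tuning pattern in the Python interface, as a minimal sketch (file names are placeholders standing in for a style-recognition solver and the pretrained CaffeNet weights):

    import caffe

    caffe.set_mode_gpu()

    # Placeholder file names for the fine-tuning solver and pretrained weights.
    solver = caffe.get_solver('finetune_solver.prototxt')
    solver.net.copy_from('bvlc_reference_caffenet.caffemodel')  # initialize from pretrained weights
    solver.solve()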

Off-the-shelf SGD for classification
Use Caffe as a generic SGD optimizer to train logistic regression on non-image HDF5 data.
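A minimal sketch of driving the solver from Python, assuming a solver prototxt whose net reads its training data from an HDF5 data layer (the file name and the 'loss' blob name are placeholders):

    import caffe

    caffe.set_mode_cpu()

    # 'solver.prototxt' is a placeholder; its net would read training data
    # from an HDF5 data layer rather than an image database.
    solver = caffe.get_solver('solver.prototxt')
    solver.solve()          # run SGD to completion, or
    # solver.step(100)      # take a fixed number of iterations

    print(solver.net.blobs['loss'].data)  # assumes the net names its loss blob 'loss'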

Multilabel Classification with Python Data Layer
Multilabel classification on PASCAL VOC using a Python data layer.
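The shape of such a layer, in a minimal illustrative sketch (the class name, sizes, and random data are placeholders, not the notebook's actual code):

    import caffe
    import numpy as np

    class SimpleDataLayer(caffe.Layer):
        """Illustrative Python data layer: serves random images and multilabel targets."""

        def setup(self, bottom, top):
            self.batch_size = 4
            self.num_labels = 20   # e.g. the 20 PASCAL VOC classes

        def reshape(self, bottom, top):
            top[0].reshape(self.batch_size, 3, 227, 227)      # images
            top[1].reshape(self.batch_size, self.num_labels)  # label vectors

        def forward(self, bottom, top):
            top[0].data[...] = np.random.rand(*top[0].data.shape)
            top[1].data[...] = np.random.randint(0, 2, top[1].data.shape)

        def backward(self, top, propagate_down, bottom):
            pass  # data layers have nothing to backpropagate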

Editing model parameters
How to do net surgery and manually change model parameters for custom use.

R-CNN detection
Run a pretrained model as a detector in Python.

Siamese network embedding
Extracting features and plotting the Siamese network embedding.

Command Line Examples

ImageNet tutorial
Train and test "CaffeNet" on ImageNet data.

LeNet MNIST Tutorial
Train and test "LeNet" on the MNIST handwritten digit data.

CIFAR-10 tutorial
Train and test Caffe on CIFAR-10 data.

Fine-tuning for style recognition
Fine-tune the ImageNet-trained CaffeNet on the "Flickr Style" dataset.

Feature extraction with Caffe C++ code
Extract CaffeNet / AlexNet features using the Caffe utility.
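The same idea is available from the Python interface (this is not the C++ utility itself); a minimal sketch with placeholder file names:

    import caffe
    import numpy as np

    # Placeholders for a deploy prototxt and trained weights.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # In practice, fill net.blobs['data'].data[...] with preprocessed images first.
    net.forward()                             # run a batch through the network
    features = net.blobs['fc7'].data.copy()   # 'fc7' is CaffeNet's second fully connected layer
    np.save('features.npy', features)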

CaffeNet C++ Classification example
A simple example performing image classification using the low-level C++ API.

Web demo
Image classification demo running as a Flask web server.

Siamese Network Tutorial
Train and test a siamese network on MNIST data.

Citing Caffe

Please cite Caffe in your publications if it helps your research:

    @article{jia2014caffe,
      Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
      Journal = {arXiv preprint arXiv:1408.5093},
      Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
      Year = {2014}
    }

If you do publish a paper where Caffe helped your research, we encourage you to cite the framework for tracking by Google Scholar.

Contacting Us

Join the caffe-users group to ask questions and discuss methods and models. This is where we talk about usage, installation, and applications.

Framework development discussions and thorough bug reports are collected on Issues.

Acknowledgements

The BAIR Caffe developers would like to thank NVIDIA for GPU donation, A9 and Amazon Web Services for a research grant in support of Caffe development and reproducible research in deep learning, and BAIR PI Trevor Darrell for guidance.

The open-source community plays an important and growing role in Caffe's development. Check out the GitHub project pulse for recent activity and the contributors for the full list.

We sincerely appreciate your interest and contributions! If you’d like to contribute, please read the developing & contributing guide.

Yangqing would like to give a personal thanks to the NVIDIA Academic program for providing GPUs, Oriol Vinyals for discussions along the journey, and BAIR PI Trevor Darrell for advice.