Caffe v1.0 Release Notes
Release Date: 2017-04-18
🚀 This release marks the convergence of development into a stable, reference release of the framework and a shift into maintenance mode. Let's review the progress culminating in our 1.0:
- research: nearly 4,000 citations, usage by award papers at CVPR/ECCV/ICCV, and tutorials at ECCV'14 and CVPR'15
- industry: adopted by Facebook, NVIDIA, Intel, Sony, Yahoo! Japan, Samsung, Adobe, A9, Siemens, Pinterest, the Embedded Vision Alliance, and more
- community: 250+ contributors, 15k+ subscribers on GitHub, and 7k+ members of the mailing list
- 🏁 development: 10k+ forks, >1 contribution/day on average, and dedicated branches for OpenCL and Windows
- ⚡️ downloads: 10k+ downloads and updates a month, ~50k unique visitors to the home page every two weeks, and >100k unique downloads of the reference models
- winner of the ACM MM open source award 2014 and presented as a talk at ICML MLOSS 2015
Thanks for all of your efforts leading us to Caffe 1.0! Your part in development, community, feedback, and framework usage brought us here. As part of 1.0 we will be welcoming collaborators old and new to join as members of the Caffe core.
Stay tuned for the next steps in DIY deep learning with Caffe. As development is never truly done, there's always 1.1!
Now that 1.0 is done, the next generation of the framework—Caffe2—is ready to keep up the progress on DIY deep learning in research and industry. While Caffe 1.0 development will continue with 1.1, Caffe2 is the new framework line for future development led by Yangqing Jia. Although Caffe2 is a departure from the development line of Caffe 1.0, we are planning a migration path for models just as we have future-proofed Caffe models in the past.
The Caffe Crew
Previous changes from v0.9999
👀 See #880 for details.
gflags are required. CPU-only Caffe, without any GPU / CUDA dependencies, is turned on by setting `CPU_ONLY := 1` in your `Makefile.config`.
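As a minimal sketch, the CPU-only switch lives in `Makefile.config` (the surrounding comment is illustrative):

```makefile
# Uncomment to build without GPU / CUDA support (CPU-only Caffe).
CPU_ONLY := 1
```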
🗄 Deprecations: the new `caffe` tool includes commands for model training and testing, querying devices, and timing models. The corresponding standalone tools are deprecated.
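For illustration, the consolidated `caffe` tool covers those tasks as subcommands; the prototxt and caffemodel file names below are placeholders:

```shell
# Train a model from a solver definition.
caffe train -solver solver.prototxt

# Score a trained model on the test phase of its net definition.
caffe test -model train_val.prototxt -weights model.caffemodel -iterations 50

# Query the capabilities of GPU 0.
caffe device_query -gpu 0

# Benchmark forward/backward timing of a model.
caffe time -model train_val.prototxt -iterations 10
```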