Minerva alternatives and similar libraries
Based on the "Machine Learning" category.
Alternatively, view Minerva alternatives based on common mentions on social networks and blogs.
- Caffe: a fast open framework for deep learning.
- xgboost: a scalable, portable and distributed Gradient Boosting (GBDT, GBRT or GBM) library for Python, R, Java, Scala, C++ and more. Runs on a single machine, Hadoop, Spark, Dask, Flink and DataFlow.
- Dlib: a toolkit for making real-world machine learning and data analysis applications in C++.
- Caffe2: a lightweight, modular, and scalable deep learning framework. [Apache2]
- vowpal_wabbit: a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.
- CCV: C-based/Cached/Core Computer Vision Library, a modern computer vision library.
- catboost: a fast, scalable, high-performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java and C++. Supports computation on CPU and GPU.
- mlpack: a fast, header-only C++ machine learning library.
- SHOGUN: Shōgun.
- Porcupine: on-device wake word detection powered by deep learning.
- RNNLIB: a recurrent neural network library for sequence learning problems. Forked from Alex Graves' work at http://sourceforge.net/projects/rnnl/
- MeTA: a modern C++ data sciences toolkit.
- Fido: a lightweight C++ machine learning library for embedded electronics and robotics.
- Recommender: a C library for product recommendations/suggestions using collaborative filtering (CF).
- NN++: a small and easy-to-use neural net implementation for C++. Just download and #include!
- faiss-server: faiss serving.
- OpenHotspot: a machine learning, crime analysis framework written in C++11.
- gaenari: a C++ incremental decision tree.
- sofia-ml: a suite of fast incremental algorithms for machine learning. [Apache2]
Minerva: a fast and flexible system for deep learning
- We have cleared many of Minerva's dependencies, making it much easier to build.
Please see the wiki page for more information.
- Minerva's Tutorial and API documents are released!
- Minerva has migrated to dmlc, where you can find many awesome machine learning repositories!
- Minerva has moved to cudnn_v2. Please download and use the new library.
- Minerva now supports the latest version of Caffe's network configuration protobuf format. If you are using an older version, errors may occur. Please use the tool to upgrade the configuration file.
Minerva is a fast and flexible tool for deep learning. It provides an NDarray programming interface, just like NumPy. Both Python and C++ bindings are available. The resulting code can be run on CPU or GPU, and multi-GPU support is easy to use. Please refer to the examples to see how the multi-GPU setting is used.
After building and installing Minerva and the owl package (Python binding) as described in Install Minerva, run
./run_owl_shell.sh in Minerva's root directory and enter:
>>> x = owl.ones([10, 5])
>>> y = owl.ones([10, 5])
>>> z = x + y
>>> z.to_numpy()
The result will be a 10x5 array filled with the value 2. Minerva supports many
NumPy-style ndarray operations. Please see the API document for more information.
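For readers without a Minerva build at hand, the same computation can be sketched in plain NumPy, which owl's ndarray interface is modeled on (this is only an analogue, not Minerva code):

```python
import numpy as np

# NumPy analogue of the owl session above:
#   x = owl.ones([10, 5]); y = owl.ones([10, 5]); z = x + y; z.to_numpy()
x = np.ones((10, 5), dtype=np.float32)
y = np.ones((10, 5), dtype=np.float32)
z = x + y  # elementwise add, just like owl

print(z.shape)  # (10, 5)
```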
- N-D array programming interface and easy integration with NumPy:
>>> import numpy as np
>>> x = np.array([1, 2, 3])
>>> y = owl.from_numpy(x)
>>> y += 1
>>> y.to_numpy()
array([ 2., 3., 4.], dtype=float32)
More can be found in the API cheat sheet.
- Automatically parallel execution
>>> x = owl.zeros([256, 128])
>>> y = owl.randn([1024, 32], 0.0, 0.01)
x and y will be executed concurrently. How is this achieved?
See Feature Highlight: Data-flow and lazy evaluation
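The idea behind this can be sketched in a few lines of pure Python: each operation only records a node in a dataflow graph, and real computation happens on an explicit evaluation call, so independent subgraphs can be scheduled in parallel. The `LazyArray` name below is hypothetical, purely for illustration, and is not Minerva's actual implementation:

```python
import numpy as np

class LazyArray:
    """Minimal lazy-evaluation sketch: building ops records a DAG node."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def __add__(self, other):
        # No arithmetic here -- just a new graph node.
        return LazyArray("add", (self, other))

    def eval(self):
        # Analogous in spirit to owl's to_numpy(): triggers computation.
        if self.op == "leaf":
            return self.value
        a, b = (inp.eval() for inp in self.inputs)
        return a + b

def ones(shape):
    return LazyArray("leaf", value=np.ones(shape, dtype=np.float32))

x = ones((10, 5))
y = ones((10, 5))
z = x + y            # builds a graph node; nothing is computed yet
assert z.value is None
result = z.eval()    # computation happens only here
print(result[0, 0])  # 2.0
```

In a real engine like Minerva's, the evaluation step walks the DAG and dispatches nodes with no data dependency between them to run simultaneously.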
- Multi-GPU, multi-CPU support:
>>> owl.set_device(gpu0)
>>> x = owl.zeros([256, 128])
>>> owl.set_device(gpu1)
>>> y = owl.randn([1024, 32], 0.0, 0.01)
x and y will be executed on two cards simultaneously. How is this achieved?
See Feature Highlight: Multi GPU Training
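One common way a `set_device`-style API works is that each operation is tagged with whichever device was current when the operation was created; the scheduler can then run ops tagged with different devices at the same time. The sketch below is hypothetical and illustrative only, not Minerva's actual code:

```python
# Illustrative sketch of device placement via a "current device" setting.
_current_device = "cpu"

def set_device(dev):
    """Switch the device that subsequent ops will be placed on."""
    global _current_device
    _current_device = dev

class Op:
    def __init__(self, name):
        self.name = name
        self.device = _current_device  # captured at creation time

set_device("gpu0")
x = Op("zeros")    # placed on gpu0
set_device("gpu1")
y = Op("randn")    # placed on gpu1

print(x.device, y.device)  # gpu0 gpu1
```

Because x and y carry different device tags and share no data dependency, a dataflow scheduler is free to execute them simultaneously.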
Tutorial and Documents
- Tutorials and high-level concepts can be found in our wiki page.
- A step-by-step walkthrough of the MNIST example can be found here.
- We also built a tool that directly reads Caffe's configuration file and trains the network. See the document.
- API documents can be found here.
We will keep this section updated with the latest performance numbers we achieve.
| Training speed (images/second) | AlexNet | VGGNet | GoogLeNet |
| --- | --- | --- | --- |
- The performance is measured on a machine with 4 GTX Titan cards.
- On each card, we use minibatch sizes of 256, 24 and 120 for AlexNet, VGGNet and GoogLeNet respectively. The total minibatch size therefore grows with the number of cards (for example, training AlexNet on 4 cards uses a total minibatch size of 1024).
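The total-minibatch arithmetic above can be spelled out in a couple of lines (using only the per-card sizes stated in this README):

```python
# Per-card minibatch sizes from the performance notes above.
per_card = {"AlexNet": 256, "VGGNet": 24, "GoogLeNet": 120}
num_cards = 4

# Total minibatch size scales linearly with the number of cards.
total = {net: size * num_cards for net, size in per_card.items()}
print(total["AlexNet"])  # 1024
```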
An end-to-end training
We also provide some end-to-end training code in the
owl package, which can load Caffe's model files and perform training. Note that Minerva is not the same tool as Caffe; we are not focusing on this part of the logic. In fact, we implemented this mainly to exercise Minerva's powerful and flexible programming interface (we could implement a Caffe-like network trainer in around 700-800 lines of Python code). Here is the training error over time compared with Caffe. Note that Minerva can finish GoogLeNet training in less than four days with four GPU cards.
Testing error rate
We trained several models using Minerva from scratch to demonstrate correctness. The following table shows the error rates of different networks under different testing settings.
| Testing error rate | AlexNet | VGGNet | GoogLeNet |
| --- | --- | --- | --- |
| single view top-1 | 41.6% | 31.6% | 32.7% |
| multi view top-1 | 39.7% | 30.1% | 31.3% |
| single view top-5 | 18.8% | 11.4% | 11.8% |
| multi view top-5 | 17.5% | 10.8% | 11.0% |
- AlexNet is trained with the solver except that we didn't use multi-group convolution.
- GoogLeNet is trained with the quick_solver.
- We didn't train VGGNet from scratch; we simply converted the model into Minerva's format and tested it.
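"Multi view" testing conventionally means averaging the network's predictions over several crops and flips of the same image, which is why it scores better than single view in the table above. The NumPy sketch below shows only the averaging step with made-up scores (the exact crop set Minerva used is not stated here):

```python
import numpy as np

# Fake per-view class scores: 10 views (crops/flips) x 1000 ImageNet classes.
rng = np.random.default_rng(0)
n_views, n_classes = 10, 1000
view_probs = rng.random((n_views, n_classes))
view_probs /= view_probs.sum(axis=1, keepdims=True)  # normalize each view

avg = view_probs.mean(axis=0)          # average predictions over all views
top1 = int(avg.argmax())               # top-1 prediction
top5 = np.argsort(avg)[-5:][::-1]      # top-5 predictions, best first
print(top5.shape)  # (5,)
```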
The models can be found in the following link: AlexNet GoogLeNet VGGNet
You can download the trained models and try them on your own machine using net_tester script.
- Get rid of boost library dependency by using Cython. (DONE)
- Large scale LSTM example using Minerva.
- Easy support for user-defined new operations.
License and support
Minerva is provided in the Apache V2 open source license.
You can use the "issues" tab on GitHub to report bugs. For non-bug issues, please send an email to [email protected]. You can subscribe to the discussion group: https://groups.google.com/forum/#!forum/minerva-support.
For more information on how to install, use or contribute to Minerva, please visit our wiki page: https://github.com/minerva-developers/minerva/wiki
*Note that all license references and agreements mentioned in the Minerva README section above are relevant to that project's source code only.