
Programming language: Fortran
Tags: Math    
Latest version: v0.3.7

OpenBLAS alternatives and similar libraries

Based on the "Math" category

  • GLM

    Header-only C++ math library that matches and inter-operates with OpenGL's GLSL math. [MIT]
  • QuantLib

    A free/open-source library for quantitative finance. [Modified BSD] website
  • CGAL

    Collection of efficient and reliable geometric algorithms. [LGPL&GPL]
  • Eigen

    A high-level C++ library of template headers for linear algebra, matrix and vector operations, numerical solvers and related algorithms. [MPL2]
  • ceres-solver

    C++ library for modeling and solving large, complicated nonlinear least squares problems, from Google. [BSD]
  • Vc

    SIMD Vector Classes for C++. [BSD]
  • NT2

    A SIMD-optimized numerical template library that provides an interface with MATLAB-like syntax. [Boost]
  • TinyExpr

    A tiny recursive descent expression parser, compiler, and evaluation engine for math expressions.
  • MIRACL

    A Multiprecision Integer and Rational Arithmetic Cryptographic Library. [AGPL]
  • linmath.h

    A lean linear math library, aimed at graphics programming. [WTFPL]
  • LibTomMath

    A free open source portable number theoretic multiple-precision integer library written entirely in C. [PublicDomain & WTFPL] website
  • ExprTK

    The C++ Mathematical Expression Toolkit Library (ExprTk) is a simple to use, easy to integrate and extremely efficient run-time mathematical expression parser and evaluation engine. [CPL]
  • GMTL

    Graphics Math Template Library is a collection of tools implementing Graphics primitives in generalized ways. [GPL2]
  • muparser

    muParser is an extensible high performance math expression parser library written in C++. [MIT]
  • blaze

    high-performance C++ math library for dense and sparse arithmetic. [BSD]
  • Apophenia

    A C library for statistical and scientific computing [GPL2]
  • Boost.Multiprecision

    provides higher-range/precision integer, rational and floating-point types in C++, header-only or with GMP/MPFR/LibTomMath backends. [Boost]
  • safe_numerics

    Replacements for the standard numeric types that throw exceptions on errors.
  • Wykobi

    A C++ library of efficient, robust and simple to use C++ 2D/3D oriented computational geometry routines. [MIT]
  • cml

    free C++ math library for games and graphics. [Boost]
  • Versor

    A (fast) Generic C++ library for Geometric Algebras, including Euclidean, Projective, Conformal, Spacetime (etc).
  • metamath

    metamath is a tiny header-only library. It can be used for symbolic computations on single-variable functions, such as dynamic computations of derivatives.
  • Xerus

    A general purpose library for numerical calculations with higher order tensors, Tensor-Train Decompositions / Matrix Product States and other Tensor Networks
  • GMP

    A C/C++ library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. [LGPL3 & GPL2]
  • Armadillo

    A high quality C++ linear algebra library, aiming towards a good balance between speed and ease of use. The syntax (API) is deliberately similar to Matlab. [MPL2]

README

OpenBLAS

Introduction

OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.

Please read the documentation on the OpenBLAS wiki pages: http://github.com/xianyi/OpenBLAS/wiki.

Binary Packages

We provide binary packages for the following platform:

  • Windows x86/x86_64

You can download them from file hosting on sourceforge.net.

Installation from Source

Download from the project homepage: http://xianyi.github.com/OpenBLAS/

Or check out the code from git://github.com/xianyi/OpenBLAS.git

Normal compile

  • Type "make" to detect the CPU automatically, or
  • Type "make TARGET=xxx" to set the target CPU explicitly, e.g. "make TARGET=NEHALEM". The full target list is in the file TargetList.txt.

Cross compile

Please set CC and FC to your cross toolchain compilers, then set HOSTCC to your host C compiler. Finally, set TARGET explicitly.

Examples:

On an x86 box, compile this library for a Loongson 3A CPU:

make BINARY=64 CC=mips64el-unknown-linux-gnu-gcc FC=mips64el-unknown-linux-gnu-gfortran HOSTCC=gcc TARGET=LOONGSON3A

On an x86 box, compile this library for a Loongson 3A CPU with the loongcc (Open64-based) compiler:

make CC=loongcc FC=loongf95 HOSTCC=gcc TARGET=LOONGSON3A CROSS=1 CROSS_SUFFIX=mips64el-st-linux-gnu-   NO_LAPACKE=1 NO_SHARED=1 BINARY=32

Debug version

make DEBUG=1

Compile with MASS Support on Power CPU (Optional dependency)

The IBM MASS library consists of a set of mathematical functions for C, C++, and Fortran applications that are tuned for optimum performance on POWER architectures. Building OpenBLAS with MASS requires a 64-bit, little-endian OS on POWER.

After installing the MASS library, compile OpenBLAS with USE_MASS=1.

Example:

Compiling on POWER8 with MASS support:

make USE_MASS=1 TARGET=POWER8

Install to a directory (optional)

Example:

make install PREFIX=your_installation_directory

The default installation directory is /opt/OpenBLAS.

Supported CPUs & OSes

Please read GotoBLAS_01Readme.txt

Additional supported CPUs:

x86/x86-64:

  • Intel Xeon 56xx (Westmere): Used GotoBLAS2 Nehalem codes.
  • Intel Sandy Bridge: Optimized Level-3 and Level-2 BLAS with AVX on x86-64.
  • Intel Haswell: Optimized Level-3 and Level-2 BLAS with AVX2 and FMA on x86-64.
  • AMD Bobcat: Used GotoBLAS2 Barcelona codes.
  • AMD Bulldozer: x86-64 ?GEMM FMA4 kernels. (Thanks to Werner Saar)
  • AMD PILEDRIVER: Uses Bulldozer codes with some optimizations.
  • AMD STEAMROLLER: Uses Bulldozer codes with some optimizations.

MIPS64:

  • ICT Loongson 3A: Optimized Level-3 BLAS and parts of Level-1 and Level-2.
  • ICT Loongson 3B: Experimental

ARM:

  • ARMV6: Optimized BLAS for vfpv2 and vfpv3-d16 (e.g. BCM2835, Cortex-M0+)
  • ARMV7: Optimized BLAS for vfpv3-d32 (e.g. Cortex-A8, A9 and A15)

ARM64:

  • ARMV8: Experimental
  • ARM Cortex-A57: Experimental

IBM zEnterprise System:

  • Z13: Optimized Level-3 BLAS

Supported OS:

Usage

Link with libopenblas.a, or use -lopenblas to link against the shared library.
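
As a quick check that linking works, a minimal C program calling the CBLAS interface could look like the sketch below. The header name cblas.h and the compile line shown in the comment (gcc test_dgemm.c -lopenblas) are assumptions that may vary with how and where OpenBLAS was installed.

    /* test_dgemm.c - minimal sketch of calling OpenBLAS through the CBLAS interface.
       Assumed compile line (may vary by installation): gcc test_dgemm.c -lopenblas */
    #include <stdio.h>
    #include <cblas.h>

    int main(void) {
        /* Compute C = 1.0 * A * B + 0.0 * C for 2x2 row-major matrices. */
        double A[4] = {1, 2, 3, 4};
        double B[4] = {5, 6, 7, 8};
        double C[4] = {0, 0, 0, 0};

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2,      /* M, N, K */
                    1.0, A, 2,    /* alpha, A, lda */
                    B, 2,         /* B, ldb */
                    0.0, C, 2);   /* beta, C, ldc */

        /* Expected result: 19 22 / 43 50 */
        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
        return 0;
    }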

Set the number of threads with environment variables.

Examples:

export OPENBLAS_NUM_THREADS=4

or

export GOTO_NUM_THREADS=4

or

export OMP_NUM_THREADS=4

The priorities are OPENBLAS_NUM_THREADS > GOTO_NUM_THREADS > OMP_NUM_THREADS.

If you compile this library with USE_OPENMP=1, you should set the OMP_NUM_THREADS environment variable; OpenBLAS ignores OPENBLAS_NUM_THREADS and GOTO_NUM_THREADS when USE_OPENMP=1.

Set the number of threads at runtime.

We provide the following functions to control the number of threads at runtime:

void goto_set_num_threads(int num_threads);

void openblas_set_num_threads(int num_threads);

If you compile this library with USE_OPENMP=1, you should also use the above functions.
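
For example, here is a short sketch that caps the thread count before issuing any BLAS calls; it assumes openblas_set_num_threads() is declared by the headers of your OpenBLAS installation (e.g. via cblas.h), which may differ between builds.

    /* Minimal sketch: limit OpenBLAS to a fixed number of threads at runtime.
       Assumes openblas_set_num_threads() is declared, e.g. in OpenBLAS's cblas.h. */
    #include <cblas.h>

    int main(void) {
        openblas_set_num_threads(4);   /* subsequent BLAS calls use at most 4 threads */
        /* ... call BLAS routines here ... */
        return 0;
    }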

Report Bugs

Please file an issue at https://github.com/xianyi/OpenBLAS/issues.

Contact

ChangeLog

Please see Changelog.txt for the differences from the GotoBLAS2 1.13 BSD version.

Troubleshooting

  • Please read the FAQ first.
  • Please use GCC version 4.6 or above to compile Sandy Bridge AVX kernels on Linux/MinGW/BSD.
  • Please use Clang version 3.1 or above to compile the library on the Sandy Bridge microarchitecture; Clang 3.0 generates incorrect AVX binary code.
  • The number of CPUs/cores should be less than or equal to 256. On Linux x86_64 (amd64), there is experimental support for up to 1024 CPUs/cores and 128 NUMA nodes if you build the library with BIGNUMA=1.
  • OpenBLAS does not set processor affinity by default. On Linux, you can enable processor affinity by commenting out the line NO_AFFINITY=1 in Makefile.rule, but this may conflict with R's parallel package.
  • On Loongson 3A, make test may fail with a pthread_create error (error code EAGAIN); however, the same test case passes when run directly from the shell.

Contributing

  1. Check for open issues or open a fresh issue to start a discussion around a feature idea or a bug.
  2. Fork the OpenBLAS repository to start making your changes.
  3. Write a test which shows that the bug was fixed or that the feature works as expected.
  4. Send a pull request. Make sure to add yourself to CONTRIBUTORS.md.

Donation

Please read this wiki page.