Using BLAS/LAPACK from Eigen

Since Eigen version 3.3, any F77-compatible BLAS or LAPACK library can be used as a backend for dense matrix products and dense matrix decompositions. For instance, one can use Intel® MKL, Apple's Accelerate framework on macOS, OpenBLAS, Netlib LAPACK, etc.

See the dedicated page on using Intel® MKL from Eigen for further discussion of the MKL-specific features (VML, PARDISO, etc.).

In order to use an external BLAS and/or LAPACK library, you must link your own application to the respective libraries and their dependencies. For LAPACK, you must also link to the standard LAPACKE library, which is used as a convenient thin layer between Eigen's C++ code and the LAPACK F77 interface. Then you must activate their usage by defining one or more of the following macros before including any Eigen header (a minimal example follows the list):

EIGEN_USE_BLAS: enables the use of external BLAS level 2 and 3 routines (compatible with any F77 BLAS interface).
EIGEN_USE_LAPACKE: enables the use of external LAPACK routines via the LAPACKE C interface to LAPACK (compatible with any F77 LAPACK interface).
EIGEN_USE_LAPACKE_STRICT: same as EIGEN_USE_LAPACKE, but algorithms of lower numerical robustness are disabled. This currently concerns only JacobiSVD, which would otherwise be replaced by ?gesvd, which is less robust than Jacobi rotations.
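For instance, a translation unit using both an external BLAS and LAPACK could start as follows. This is a minimal sketch; the libraries named in the link command (OpenBLAS plus the reference LAPACKE/LAPACK) are only one possible choice and must be adjusted to your distribution.

// Define the macros before including any Eigen header.
#define EIGEN_USE_BLAS      // products go to the external F77 BLAS
#define EIGEN_USE_LAPACKE   // decompositions go to LAPACK via LAPACKE

#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(512, 512);
  Eigen::MatrixXd B = Eigen::MatrixXd::Random(512, 512);
  Eigen::MatrixXd C = A * B;   // forwarded to dgemm
  std::cout << C.norm() << std::endl;
  return 0;
}

// One possible link command (adjust library names to your system):
//   g++ main.cpp -o main -lopenblas -llapacke -llapack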

When doing so, a number of Eigen's algorithms are silently substituted with calls to BLAS or LAPACK routines. These substitutions apply only to objects that are dynamic-size or large enough and that use one of the following four standard scalar types: float, double, complex<float>, and complex<double>. Operations on other scalar types, or operations mixing real and complex scalars, continue to use the built-in algorithms.
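To illustrate this restriction, the sketch below (with arbitrarily chosen sizes) marks which expressions are eligible for substitution when EIGEN_USE_BLAS is defined:

#define EIGEN_USE_BLAS
#include <Eigen/Dense>
using namespace Eigen;

void which_paths_are_taken()
{
  MatrixXd a = MatrixXd::Random(1000, 1000);                  // dynamic-size, double
  MatrixXd b = MatrixXd::Random(1000, 1000);
  Matrix3f s1 = Matrix3f::Random(), s2 = Matrix3f::Random();  // small fixed-size, float
  Matrix<long double, Dynamic, Dynamic> l1(1000, 1000), l2(1000, 1000);
  l1.setOnes(); l2.setOnes();

  MatrixXd c  = a * b;      // eligible: forwarded to dgemm
  Matrix3f s3 = s1 * s2;    // small fixed-size: Eigen's built-in kernels
  Matrix<long double, Dynamic, Dynamic> l3 = l1 * l2;  // long double is not a BLAS type: built-in path
}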

The breadth of Eigen functionality that can be substituted is listed below; the macro(s) enabling each substitution are given in parentheses, and the BLAS/LAPACK routine used is noted as a trailing comment on each code example.

Matrix-matrix operations (EIGEN_USE_BLAS)
    m1*m2.transpose();                                  // ?gemm
    m1.selfadjointView<Lower>()*m2;                     // ?symm/?hemm
    m1*m2.triangularView<Upper>();                      // ?trmm
    m1.selfadjointView<Lower>().rankUpdate(m2,1.0);     // dsyrk/ssyrk

Matrix-vector operations (EIGEN_USE_BLAS)
    m1.adjoint()*b;                                     // ?gemv
    m1.selfadjointView<Lower>()*b;                      // ?symv/?hemv
    m1.triangularView<Upper>()*b;                       // ?trmv

LU decomposition (EIGEN_USE_LAPACKE, EIGEN_USE_LAPACKE_STRICT)
    v1 = m1.lu().solve(v2);                             // ?getrf

Cholesky decomposition (EIGEN_USE_LAPACKE, EIGEN_USE_LAPACKE_STRICT)
    v1 = m2.selfadjointView<Upper>().llt().solve(v2);   // ?potrf

QR decomposition (EIGEN_USE_LAPACKE, EIGEN_USE_LAPACKE_STRICT)
    m1.householderQr();                                  // ?geqrf
    m1.colPivHouseholderQr();                            // ?geqp3

Singular value decomposition (EIGEN_USE_LAPACKE)
    JacobiSVD<MatrixXd> svd;
    svd.compute(m1, ComputeThinV);                       // ?gesvd

Eigenvalue decompositions (EIGEN_USE_LAPACKE, EIGEN_USE_LAPACKE_STRICT)
    EigenSolver<MatrixXd> es(m1);                        // ?gees
    ComplexEigenSolver<MatrixXcd> ces(m1);               // ?gees
    SelfAdjointEigenSolver<MatrixXd> saes(m1+m1.transpose());         // ?syev/?heev
    GeneralizedSelfAdjointEigenSolver<MatrixXd>
        gsaes(m1+m1.transpose(),m2+m2.transpose());      // ?syev/?heev, ?potrf

Schur decomposition (EIGEN_USE_LAPACKE, EIGEN_USE_LAPACKE_STRICT)
    RealSchur<MatrixXd> schurR(m1);                      // ?gees
    ComplexSchur<MatrixXcd> schurC(m1);                  // ?gees

In the examples, m1 and m2 are dense matrices, and b, v1, and v2 are dense vectors.
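Putting the pieces together, the following sketch (assuming the program is linked against a BLAS and a LAPACKE implementation as described above) declares such objects and triggers a few of the substitutions listed above:

#define EIGEN_USE_BLAS
#define EIGEN_USE_LAPACKE
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXd m1 = MatrixXd::Random(300, 300);
  MatrixXd m2 = MatrixXd::Random(300, 300);
  VectorXd v2 = VectorXd::Random(300);

  MatrixXd p  = m1 * m2.transpose();                            // ?gemm
  VectorXd v1 = m1.lu().solve(v2);                              // ?getrf
  JacobiSVD<MatrixXd> svd(m1, ComputeThinU | ComputeThinV);     // ?gesvd (with EIGEN_USE_LAPACKE)
  SelfAdjointEigenSolver<MatrixXd> saes(m1 + m1.transpose());   // ?syev

  std::cout << v1.norm() << " " << svd.singularValues()(0) << " "
            << saes.eigenvalues()(0) << std::endl;
  return 0;
}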