# Catalogue of dense decompositions

This page presents a catalogue of the dense matrix decompositions offered by Eigen. For an introduction to linear solvers and decompositions, check this page. To get an overview of the true relative speed of the different decompositions, check this benchmark.
## Catalogue of decompositions offered by Eigen
Columns two through six give generic, not Eigen-specific, information about each decomposition; the last three columns ("Linear solver provided by Eigen" through "Optimizations") are specific to Eigen's implementation.

Decomposition | Requirements on the matrix | Speed | Algorithm reliability and accuracy | Rank-revealing | Allows to compute (besides linear solving) | Linear solver provided by Eigen | Maturity of Eigen's implementation | Optimizations |
---|---|---|---|---|---|---|---|---|
PartialPivLU | Invertible | Fast | Depends on condition number | - | - | Yes | Excellent | Blocking, Implicit MT |
FullPivLU | - | Slow | Proven | Yes | - | Yes | Excellent | - |
HouseholderQR | - | Fast | Depends on condition number | - | Orthogonalization | Yes | Excellent | Blocking |
ColPivHouseholderQR | - | Fast | Good | Yes | Orthogonalization | Yes | Excellent | Soon: blocking |
FullPivHouseholderQR | - | Slow | Proven | Yes | Orthogonalization | Yes | Average | - |
LLT | Positive definite | Very fast | Depends on condition number | - | - | Yes | Excellent | Blocking |
LDLT | Positive or negative semidefinite | Very fast | Good | - | - | Yes | Excellent | Soon: blocking |
**Singular values and eigenvalues decompositions** | | | | | | | | |
BDCSVD (divide & conquer) | - | One of the fastest SVD algorithms | Excellent | Yes | Singular values/vectors, least squares | Yes (and does least squares) | Excellent | Blocked bidiagonalization |
JacobiSVD (two-sided) | - | Slow (but fast for small matrices) | Proven | Yes | Singular values/vectors, least squares | Yes (and does least squares) | Excellent | R-SVD |
SelfAdjointEigenSolver | Self-adjoint | Fast-average | Good | Yes | Eigenvalues/vectors | - | Excellent | Closed forms for 2x2 and 3x3 |
ComplexEigenSolver | Square | Slow-very slow | Depends on condition number | Yes | Eigenvalues/vectors | - | Average | - |
EigenSolver | Square and real | Average-slow | Depends on condition number | Yes | Eigenvalues/vectors | - | Average | - |
GeneralizedSelfAdjointEigenSolver | Square | Fast-average | Depends on condition number | - | Generalized eigenvalues/vectors | - | Good | - |
**Helper decompositions** | | | | | | | | |
RealSchur | Square and real | Average-slow | Depends on condition number | Yes | - | - | Average | - |
ComplexSchur | Square | Slow-very slow | Depends on condition number | Yes | - | - | Average | - |
Tridiagonalization | Self-adjoint | Fast | Good | - | - | - | Good | Soon: blocking |
HessenbergDecomposition | Square | Average | Good | - | - | - | Good | Soon: blocking |
Notes:
- 1: There exist two variants of the LDLT algorithm. Eigen's produces a pure diagonal matrix D, and therefore it cannot handle indefinite matrices, unlike LAPACK's, which produces a block-diagonal matrix D.
- 2: Eigenvalue, SVD and Schur decompositions rely on iterative algorithms. Their convergence speed depends on how well the eigenvalues are separated.
- 3: Our JacobiSVD is two-sided, making for proven and optimal precision for square matrices. For non-square matrices, we have to use a QR preconditioner first. The default choice, ColPivHouseholderQR, is already very reliable, but if you want the accuracy to be proven as well, use FullPivHouseholderQR instead (see the sketch below).
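To make the table concrete, here is a minimal sketch of picking a decomposition according to the requirements above; the matrix sizes and random data are arbitrary placeholders, not part of the catalogue, and the last step illustrates note 3's FullPivHouseholderQR preconditioner:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  // A generic invertible square matrix: PartialPivLU is the fast default.
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
  Eigen::VectorXd b = Eigen::VectorXd::Random(4);
  Eigen::VectorXd x = A.partialPivLu().solve(b);

  // A positive semidefinite matrix (A^T * A is always one): LDLT is very fast.
  Eigen::MatrixXd S = A.transpose() * A;
  Eigen::VectorXd y = S.ldlt().solve(b);

  // An overdetermined system: BDCSVD solves it in the least-squares sense.
  Eigen::MatrixXd M = Eigen::MatrixXd::Random(8, 4);
  Eigen::VectorXd c = Eigen::VectorXd::Random(8);
  Eigen::VectorXd z = M.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(c);

  // Note 3: JacobiSVD with the FullPivHouseholderQR preconditioner for
  // proven accuracy on non-square matrices (this preconditioner does not
  // support thin unitaries, so the full ones are requested).
  Eigen::JacobiSVD<Eigen::MatrixXd, Eigen::FullPivHouseholderQRPreconditioner>
      svd(M, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::VectorXd w = svd.solve(c);

  std::cout << x.norm() << " " << y.norm() << " "
            << z.norm() << " " << w.norm() << "\n";
}
```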
## Terminology
- For a real matrix, selfadjoint is a synonym for symmetric. For a complex matrix, selfadjoint is a synonym for hermitian. More generally, a matrix $A$ is selfadjoint if and only if it is equal to its adjoint $A^*$. The adjoint is also called the conjugate transpose.
- A selfadjoint matrix $A$ is positive definite if $v^* A v > 0$ for any non-zero vector $v$. In the same vein, it is negative definite if $v^* A v < 0$ for any non-zero vector $v$.
- A selfadjoint matrix $A$ is positive semi-definite if $v^* A v \ge 0$ for any non-zero vector $v$. In the same vein, it is negative semi-definite if $v^* A v \le 0$ for any non-zero vector $v$.
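Since "positive definite" is the requirement for LLT in the table above, one may want to test it numerically. A minimal sketch, relying on the fact that Eigen's LLT (Cholesky) factorization only exists for positive definite matrices and reports a failure through info():

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Build a matrix that is positive definite by construction:
  // A^T * A + I is selfadjoint with strictly positive eigenvalues.
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(5, 5);
  Eigen::MatrixXd S = A.transpose() * A + Eigen::MatrixXd::Identity(5, 5);

  // A successful Cholesky factorization is a practical numerical test
  // for positive definiteness.
  Eigen::LLT<Eigen::MatrixXd> llt(S);
  std::cout << (llt.info() == Eigen::Success
                    ? "positive definite"
                    : "not positive definite (numerically)")
            << "\n";
}
```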
The "Optimizations" column uses the following terms:

- Blocking: means the algorithm can work per blocks, which guarantees good scaling of the performance for large matrices.
- Implicit Multi Threading (MT): means the algorithm can take advantage of multicore processors via OpenMP. "Implicit" means the algorithm itself is not parallelized, but that it relies on parallelized matrix-matrix product routines.
- Explicit Multi Threading (MT): means the algorithm is explicitly parallelized to take advantage of multicore processors via OpenMP.
- Meta-unroller: means the algorithm is automatically and explicitly unrolled for very small fixed-size matrices.
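As an illustration of "Implicit MT": assuming Eigen was compiled with OpenMP support (e.g. with -fopenmp), the parallelized matrix-matrix products used internally by decompositions such as PartialPivLU can be controlled via Eigen::setNbThreads. Without OpenMP, these calls are harmless no-ops:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Decompositions marked "Implicit MT" benefit from multithreading
  // automatically, because they spend most of their time in Eigen's
  // parallelized matrix-matrix product routines.
  Eigen::setNbThreads(4);  // cap the number of OpenMP threads
  std::cout << "Eigen uses " << Eigen::nbThreads() << " thread(s)\n";

  Eigen::MatrixXd A = Eigen::MatrixXd::Random(1000, 1000);
  Eigen::VectorXd b = Eigen::VectorXd::Random(1000);
  Eigen::VectorXd x = A.partialPivLu().solve(b);  // implicitly threaded
  std::cout << "residual: " << (A * x - b).norm() << "\n";
}
```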