Solving a linear equation

Cramer’s Rule and Gaussian Elimination are two good, general-purpose algorithms (also see Simultaneous Linear Equations). If you’re looking for code, check out GiNaC, Maxima, and SymbolicC++ (depending on your licensing requirements, of course). EDIT: I know you’re working in C land, but I also have to put in a good word for SymPy (a computer … Read more
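As a minimal illustration of the Gaussian elimination approach (my own sketch, not code from the answer; none of the libraries above are required), with partial pivoting for numerical stability:

```python
# Sketch: Gaussian elimination with partial pivoting, assuming A is square
# and nonsingular. Real code should use a library routine instead.
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: pick the largest pivot in each column for stability
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # swap pivot row up
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    # Back substitution on the resulting upper-triangular system
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b))  # [0.8 1.4]
```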

Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()?

np.linalg.solve(A, b) does not compute the inverse of A. Instead it calls one of the gesv LAPACK routines, which first factorizes A using an LU decomposition and then solves for x using forward and backward substitution. np.linalg.inv uses the same method to compute the inverse of A, by solving for A⁻¹ in A·A⁻¹ = I … Read more
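A quick way to see the difference (my own sketch, not from the answer) is to compare residuals on an ill-conditioned system; the explicitly-inverted route typically gives a larger residual:

```python
# Compare solve() (LU + substitution) against inv() @ b on a matrix whose
# columns span many orders of magnitude, making it ill-conditioned.
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A *= np.logspace(0, 10, n)   # scale columns from 1 to 1e10
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)   # factorize, then substitute
x_inv = np.linalg.inv(A) @ b      # explicit inverse, then multiply

print("residual via solve:", np.linalg.norm(A @ x_solve - b))
print("residual via inv:  ", np.linalg.norm(A @ x_inv - b))
```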

numpy arbitrary precision linear algebra

SymPy can calculate with arbitrary precision:

```python
from sympy import exp, N, S
from sympy.matrices import Matrix

data = [[S("-800.21"), S("-600.00")], [S("-600.00"), S("-1000.48")]]
m = Matrix(data)
ex = m.applyfunc(exp).applyfunc(lambda x: N(x, 100))
vecs = ex.eigenvects()
print(vecs[0][0])  # first eigenvalue
print(vecs[1][0])  # second eigenvalue
print(vecs[0][2])  # first eigenvector
print(vecs[1][2])  # second eigenvector
```

Output:

```
-2.650396553004310816338679447269582701529092549943247237903254759946483528035516341807463648841185335e-261
2.650396553004310816338679447269582701529092549943247237903254759946483528035516341807466621962539464e-261
[[-0.9999999999999999999999999999999999999999999999999999999999999999999999999999999999999994391176386872]
 [1]]
```

… Read more
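For context (an aside of mine, not part of the answer above): in double precision, exp(-800.21) underflows to exactly zero, so NumPy cannot even represent the matrix entries, which is why arbitrary precision is needed here:

```python
import numpy as np

# float64 cannot represent values below roughly 5e-324, so exp(-800.21)
# (about 1e-348) underflows to 0.0 and the float64 problem degenerates.
print(np.exp(-800.21))  # 0.0
```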

Minimizing overhead due to the large number of Numpy dot calls

It depends on the size of the matrices. Edit: For larger n×n matrices (approx. size 20 and up), a BLAS call from compiled code is faster; for smaller matrices, custom Numba or Cython kernels are usually faster. The following method generates custom dot functions for given input shapes. With this method it is also possible to benefit … Read more
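The answer is cut off above; as a rough sketch of the idea (my own illustration, not the answer's generated code), a Numba kernel specialized to one fixed small shape avoids np.dot's per-call overhead:

```python
# Sketch: a hand-unrolled 3x3 matrix product compiled with Numba, assuming
# numba is installed. For tiny matrices this can beat np.dot, whose fixed
# call overhead dominates the actual arithmetic.
import numpy as np
import numba as nb

@nb.njit(fastmath=True, cache=True)
def dot_3x3(A, B, out):
    for i in range(3):
        for j in range(3):
            acc = 0.0
            for k in range(3):
                acc += A[i, k] * B[k, j]
            out[i, j] = acc
    return out

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
out = np.empty((3, 3))
dot_3x3(A, B, out)
print(np.allclose(out, A @ B))  # True
```

Passing a preallocated `out` array also avoids repeated allocation when the kernel is called many times in a loop, which is exactly the scenario the question describes.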