## Is there any fast method of matrix exponentiation?

You could factor the matrix into eigenvalues and eigenvectors, giving M = V * D * V^-1, where V is the eigenvector matrix and D is a diagonal matrix of eigenvalues. To raise this to the Nth power, you get something like: M^n = (V * D * V^-1) * (V * D * V^-1) … Read more
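The inner V^-1 * V factors cancel in pairs, so M^n = V * D^n * V^-1, and D^n is just the elementwise power of the eigenvalues. A minimal NumPy sketch, with a made-up diagonalizable matrix:

```python
import numpy as np

# A small diagonalizable matrix (values assumed for illustration)
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# M = V D V^-1, so M^n = V D^n V^-1 -- only the diagonal gets powered
w, V = np.linalg.eig(M)   # eigenvalues w, eigenvectors as columns of V
n = 10
Mn = V @ np.diag(w**n) @ np.linalg.inv(V)

# Agrees with repeated multiplication
assert np.allclose(Mn, np.linalg.matrix_power(M, n))
```

This turns n matrix multiplications into one diagonalization plus scalar powers, which is the speedup the answer is pointing at.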


## Equivalent of `polyfit` for a 2D polynomial in Python

Here is an example showing how you can use numpy.linalg.lstsq for this task: import numpy as np x = np.linspace(0, 1, 20) y = np.linspace(0, 1, 20) X, Y = np.meshgrid(x, y, copy=False) Z = X**2 + Y**2 + np.random.rand(*X.shape)*0.01 X = X.flatten() Y = Y.flatten() A = np.array([X*0+1, X, Y, X**2, X**2*Y, X**2*Y**2, Y**2, … Read more
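Since the snippet above is cut off mid-array, here is a self-contained variant of the same numpy.linalg.lstsq approach with a smaller, assumed basis (1, x, y, xy) and synthetic data:

```python
import numpy as np

# Sample a known bilinear surface on a grid (coefficients are assumed
# for illustration, not taken from the original answer)
x = np.linspace(0, 1, 20)
y = np.linspace(0, 1, 20)
X, Y = np.meshgrid(x, y)
Z = 1.0 + 2.0*X + 3.0*Y + 4.0*X*Y

# Design matrix: one column per basis term 1, x, y, x*y
A = np.column_stack([np.ones(X.size), X.ravel(), Y.ravel(), (X*Y).ravel()])
coeffs, residuals, rank, sv = np.linalg.lstsq(A, Z.ravel(), rcond=None)
print(np.round(coeffs, 6))  # recovers [1. 2. 3. 4.]
```

Extending to a full 2D polynomial is just a matter of appending more basis columns (X**2, Y**2, X**2*Y, and so on), exactly as the excerpt does.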

## Algorithm for intersection of 2 lines?

Assuming you have two lines of the form Ax + By = C, you can find the intersection pretty easily: float delta = A1 * B2 - A2 * B1; if (delta == 0) throw new ArgumentException("Lines are parallel"); float x = (B2 * C1 - B1 * C2) / delta; float y = (A1 * … Read more
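The same Cramer's-rule computation can be sketched in Python; the completion of the truncated y formula below is the standard one, not recovered from the answer:

```python
def intersect(a1, b1, c1, a2, b2, c2):
    """Intersection of the lines a1*x + b1*y = c1 and a2*x + b2*y = c2."""
    delta = a1 * b2 - a2 * b1   # determinant of the coefficient matrix
    if delta == 0:
        raise ValueError("Lines are parallel")
    x = (b2 * c1 - b1 * c2) / delta
    y = (a1 * c2 - a2 * c1) / delta
    return x, y

# x + y = 4 and x - y = 2 meet at (3, 1)
print(intersect(1, 1, 4, 1, -1, 2))  # -> (3.0, 1.0)
```

delta == 0 means the coefficient matrix is singular, i.e. the lines are parallel (or identical), which is why that case is rejected up front.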


## Fitting a line in 3D

If you are trying to predict one value from the other two, then you should use lstsq with the a argument as your independent variables (plus a column of 1’s to estimate an intercept) and b as your dependent variable. If, on the other hand, you just want to get the best fitting line to … Read more
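The second case (a best-fitting line through 3D points, with no distinguished dependent variable) is usually done with the SVD; a sketch on synthetic data:

```python
import numpy as np

# Noisy points along the line p(t) = [1,2,3] + t*[4,5,6] (synthetic data)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
points = np.array([1.0, 2.0, 3.0]) + np.outer(t, [4.0, 5.0, 6.0])
points += rng.normal(0, 0.01, points.shape)

# The best-fitting line passes through the centroid, along the first
# principal direction (top right-singular vector of the centered data)
centroid = points.mean(axis=0)
_, _, vt = np.linalg.svd(points - centroid)
direction = vt[0]

# Up to sign, direction recovers the normalized true direction [4, 5, 6]
true_dir = np.array([4.0, 5.0, 6.0]) / np.linalg.norm([4.0, 5.0, 6.0])
print(abs(direction @ true_dir))  # close to 1
```

Unlike lstsq on one coordinate, this total-least-squares line minimizes perpendicular distances and works even when the line is vertical in the chosen coordinates.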

## Solving a linear equation

Cramer’s Rule and Gaussian Elimination are two good, general-purpose algorithms (also see Simultaneous Linear Equations). If you’re looking for code, check out GiNaC, Maxima, and SymbolicC++ (depending on your licensing requirements, of course). EDIT: I know you’re working in C land, but I also have to put in a good word for SymPy (a computer … Read more
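The original question is about C libraries, but the two named methods are easy to illustrate on a 2×2 system; a NumPy sketch (np.linalg.solve performs the Gaussian-elimination route via LAPACK):

```python
import numpy as np

# The system 2x + y = 5, x + 3y = 10 (values assumed for illustration)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Gaussian elimination (LU factorization under the hood)
x = np.linalg.solve(A, b)

# Cramer's Rule gives the same answer: x_i = det(A_i) / det(A),
# where A_i is A with column i replaced by b
det = np.linalg.det
cramer = np.array([det(np.column_stack([b, A[:, 1]])),
                   det(np.column_stack([A[:, 0], b]))]) / det(A)

print(x)  # -> [1. 3.]
assert np.allclose(x, cramer)
```

Cramer's Rule is fine for tiny systems like this; for anything larger, elimination-based solvers are both faster and numerically safer.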

## Is Google’s Android OpenGL tutorial teaching incorrect linear algebra?

As the guy who wrote that OpenGL tutorial, I can confirm that the example code is incorrect. Specifically, the order of the factors in the shader code should be reversed: "gl_Position = uMVPMatrix * vPosition;" As to the application of the rotation matrix, the order of the factors should also be reversed so that … Read more
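The underlying issue is that matrix multiplication is not commutative, so factor order changes the result. A plain NumPy demonstration using the same column-vector convention as the shader (this is a sketch, not the tutorial's actual code):

```python
import numpy as np

# A 90-degree rotation about z and a translation by (1, 0, 0), as 4x4
# column-vector matrices (the convention behind uMVPMatrix * vPosition)
c, s = 0.0, 1.0
R = np.array([[c, -s, 0, 0],
              [s,  c, 0, 0],
              [0,  0, 1, 0],
              [0,  0, 0, 1.0]])
T = np.eye(4)
T[0, 3] = 1.0

v = np.array([0, 0, 0, 1.0])  # a point at the origin, homogeneous coords

# Swapping the factors gives a different transform:
print((T @ R @ v)[:3])  # rotate, then translate -> [1. 0. 0.]
print((R @ T @ v)[:3])  # translate, then rotate -> [0. 1. 0.]
```

With column vectors, the rightmost matrix is applied first, which is why getting the factor order wrong in the shader or on the CPU side visibly breaks the transform.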

## Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()?

np.linalg.solve(A, b) does not compute the inverse of A. Instead it calls one of the gesv LAPACK routines, which first factorizes A using LU decomposition, then solves for x using forward and backward substitution (see here). np.linalg.inv uses the same method to compute the inverse of A by solving for A^-1 in A·A^-1 = I … Read more
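A rough way to see the difference in practice, using a Hilbert matrix as an assumed ill-conditioned example:

```python
import numpy as np

# Hilbert matrices are famously ill-conditioned, which magnifies
# the difference between the two approaches
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)
x_true = np.ones(n)
b = A @ x_true

# solve(): one LU factorization + substitution for this particular b
x_solve = np.linalg.solve(A, b)
# inv(): solves n systems to build A^-1, then multiplies by b
x_inv = np.linalg.inv(A) @ b

print("solve residual:", np.linalg.norm(A @ x_solve - b))
print("inv   residual:", np.linalg.norm(A @ x_inv - b))
```

The LU-based solve is backward stable, so its residual stays near machine precision; forming the explicit inverse first typically loses accuracy on badly conditioned inputs like this one.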

## numpy arbitrary precision linear algebra

SymPy can calculate with arbitrary precision: from sympy import exp, N, S from sympy.matrices import Matrix data = [[S("-800.21"),S("-600.00")],[S("-600.00"),S("-1000.48")]] m = Matrix(data) ex = m.applyfunc(exp).applyfunc(lambda x: N(x, 100)) vecs = ex.eigenvects() print(vecs[0][0]) # eigenvalue print(vecs[1][0]) # eigenvalue print(vecs[0][2]) # eigenvector print(vecs[1][2]) # eigenvector output: -2.650396553004310816338679447269582701529092549943247237903254759946483528035516341807463648841185335e-261 2.650396553004310816338679447269582701529092549943247237903254759946483528035516341807466621962539464e-261 [[-0.9999999999999999999999999999999999999999999999999999999999999999999999999999999999999994391176386872] [ 1]] … Read more
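A minimal sketch of the same idea with exact inputs (skipping the applyfunc(exp) step from the answer, and using Rational so nothing is rounded on input):

```python
from sympy import Matrix, Rational, N

# Exact rational entries -- the same numbers as the answer's example,
# but with no double-precision rounding on input
m = Matrix([[Rational("-800.21"), Rational("-600")],
            [Rational("-600"),    Rational("-1000.48")]])

# eigenvects() returns (eigenvalue, multiplicity, [eigenvectors]) tuples;
# N(expr, 30) evaluates the exact symbolic result to 30 significant digits
for val, mult, vects in m.eigenvects():
    print(N(val, 30))
```

Because the eigenvalues are computed symbolically first, N can evaluate them to any requested number of digits; the 100-digit output in the excerpt works the same way.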

## Minimizing overhead due to the large number of Numpy dot calls

It depends on the size of the matrices. Edit: For larger n×n matrices (approx. size 20 and up), a BLAS call from compiled code is faster; for smaller matrices, custom Numba or Cython kernels are usually faster. The following method generates custom dot functions for given input shapes. With this method it is also possible to benefit … Read more
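When the many small dot calls are independent and share a shape, a pure-NumPy alternative is to stack them and issue one broadcast matmul; this is a sketch of a different remedy than the custom Numba/Cython kernels the answer describes:

```python
import numpy as np

rng = np.random.default_rng(0)
# 10,000 independent 4x4 products (sizes assumed for illustration)
A = rng.random((10_000, 4, 4))
B = rng.random((10_000, 4, 4))

# One broadcast matmul replaces 10,000 Python-level np.dot calls,
# paying the NumPy call overhead once instead of per pair
C = A @ B

# Same result as the explicit per-pair loop
C_loop = np.array([a @ b for a, b in zip(A, B)])
assert np.allclose(C, C_loop)
```

This only helps when the calls can be batched; when they can't, a compiled per-shape kernel of the kind the answer generates is the way to remove the per-call overhead.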