How to calculate the t-test statistic with numpy
The scipy.stats package contains a few ttest_… functions. For example: >>> print 't-statistic = %6.3f pvalue = %6.4f' % stats.ttest_1samp(x, m) t-statistic = 0.391 pvalue = 0.6955
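A minimal self-contained sketch of the one-sample test above, assuming a made-up sample `x` and hypothesized mean `m` (the print format is the one from the snippet, updated to Python 3 syntax):

```python
import numpy as np
from scipy import stats

# Hypothetical data: 100 draws from a normal with true mean 5.0
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)
m = 5.0  # hypothesized population mean

# ttest_1samp returns (t-statistic, two-sided p-value)
t_stat, p_value = stats.ttest_1samp(x, m)
print('t-statistic = %6.3f pvalue = %6.4f' % (t_stat, p_value))
```

Since the sample is drawn with true mean equal to `m`, the p-value should usually be large, i.e. no evidence against the null hypothesis.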
Take a look at this answer for fitting arbitrary curves to data. Basically you can use scipy.optimize.curve_fit to fit any function you want to your data. The code below shows how you can fit a Gaussian to some random data (credit to this SciPy-User mailing list post). import numpy from scipy.optimize import curve_fit import matplotlib.pyplot … Read more
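A short sketch of the idea: define a model function and let `scipy.optimize.curve_fit` estimate its parameters. The Gaussian parameterization (`a`, `x0`, `sigma`) and the synthetic data here are illustrative, not the mailing-list post's exact code:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma):
    # Amplitude a, center x0, width sigma
    return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

# Synthetic noisy data with known true parameters (2.0, 0.5, 1.2)
rng = np.random.default_rng(42)
x = np.linspace(-5, 5, 200)
y = gaussian(x, 2.0, 0.5, 1.2) + rng.normal(scale=0.05, size=x.size)

# p0 is the initial guess; curve_fit returns best-fit params and covariance
popt, pcov = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
print('a=%.2f x0=%.2f sigma=%.2f' % tuple(popt))
```

The same pattern works for any model function whose first argument is the independent variable and whose remaining arguments are the parameters to fit.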
There is no automatic feature to do such a thing, but you can loop through each point and place text at the appropriate location: import matplotlib.pyplot as plt import numpy as np data = np.random.rand(5, 4) heatmap = plt.pcolor(data) for y in range(data.shape[0]): for x in range(data.shape[1]): plt.text(x + 0.5, y + 0.5, '%.4f' % … Read more
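A complete, runnable version of that loop (the alignment keywords and the output path are my additions; the cell data is random, as in the snippet):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(5, 4)
heatmap = plt.pcolor(data)

# pcolor draws cell (y, x) between grid lines, so offset by 0.5
# to center the label inside each cell
for y in range(data.shape[0]):
    for x in range(data.shape[1]):
        plt.text(x + 0.5, y + 0.5, '%.4f' % data[y, x],
                 horizontalalignment='center',
                 verticalalignment='center')

plt.colorbar(heatmap)
plt.savefig('/tmp/annotated_heatmap.png')
```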
import numpy as np import scipy import scipy.ndimage as ndimage import scipy.ndimage.filters as filters import matplotlib.pyplot as plt fname="/tmp/slice0000.png" neighborhood_size = 5 threshold = 1500 data = scipy.misc.imread(fname) data_max = filters.maximum_filter(data, neighborhood_size) maxima = (data == data_max) data_min = filters.minimum_filter(data, neighborhood_size) diff = ((data_max - data_min) > threshold) maxima[diff == 0] = 0 labeled, num_objects … Read more
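The same maximum-filter technique can be sketched on a small synthetic array instead of an image file (the array, neighborhood size, and threshold below are illustrative; `scipy.misc.imread` from the snippet is deprecated in modern scipy):

```python
import numpy as np
import scipy.ndimage as ndimage

neighborhood_size = 3
threshold = 0.5

# Two isolated peaks in an otherwise flat array
data = np.zeros((9, 9))
data[2, 2] = 1.0
data[6, 7] = 2.0

# A pixel is a candidate maximum if it equals the local max of its neighborhood
data_max = ndimage.maximum_filter(data, neighborhood_size)
maxima = (data == data_max)

# Discard flat regions: keep only neighborhoods whose max-min range
# exceeds the threshold
data_min = ndimage.minimum_filter(data, neighborhood_size)
diff = (data_max - data_min) > threshold
maxima[diff == 0] = 0

# Label connected components; num_objects is the number of detected peaks
labeled, num_objects = ndimage.label(maxima)
print(num_objects)
```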
I copied your example and experimented a bit. It looks like if you stick with the BFGS solver, after a few iterations mu + alpha * r will contain some negative numbers, and that's how you get the RuntimeWarning. The easiest fix I can think of is to switch to the Nelder-Mead solver. res = minimize(loglikelihood, … Read more
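A sketch of switching solvers with `scipy.optimize.minimize`; the objective here is a stand-in with the same flavor (a log of the parameters, undefined for negative values), not the original `loglikelihood`:

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    # Like a negative log-likelihood containing log terms:
    # only well-defined for positive parameter values.
    # Minimum is at params == [1.0, 1.0].
    return np.sum(params) - np.sum(np.log(params))

x0 = np.array([0.5, 0.5])

# Nelder-Mead only needs function values, so a NaN from an invalid
# point does not corrupt a gradient-based line search the way it
# can with BFGS
res = minimize(objective, x0, method='Nelder-Mead')
print(res.x)
```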
You can remove NaNs using a mask: mask = ~np.isnan(varx) & ~np.isnan(vary) slope, intercept, r_value, p_value, std_err = stats.linregress(varx[mask], vary[mask])
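A self-contained version of that mask trick, with made-up `varx`/`vary` arrays (the modern `linregress` result object is used instead of tuple unpacking):

```python
import numpy as np
from scipy import stats

varx = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
vary = np.array([2.1, np.nan, 6.2, 8.0, 9.9])

# True only where BOTH arrays have a valid value
mask = ~np.isnan(varx) & ~np.isnan(vary)

result = stats.linregress(varx[mask], vary[mask])
print(result.slope, result.intercept)
```

Note the `&` (element-wise AND) rather than `and`: the mask must be a boolean array so it can index both inputs.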
The distributions in scipy are coded generically with respect to two parameters, location and scale, so that location is the parameter (loc) that shifts the distribution to the left or right, while scale is the parameter that compresses or stretches the distribution. For the two-parameter lognormal distribution, the "mean" and "std dev" correspond … Read more
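A sketch of the mapping for `scipy.stats.lognorm`: if the underlying normal (i.e. the distribution of log X) has mean `mu` and standard deviation `sigma`, then the scipy parameters are shape `s = sigma`, `scale = exp(mu)`, and `loc = 0` (the values of `mu` and `sigma` below are arbitrary):

```python
import numpy as np
from scipy import stats

mu, sigma = 1.5, 0.4  # mean and std dev of log(X), chosen for illustration

# shape s is sigma, scale is exp(mu), loc stays 0 for the
# two-parameter lognormal
dist = stats.lognorm(s=sigma, loc=0, scale=np.exp(mu))

print(dist.median())  # median of a lognormal is exp(mu)
print(dist.mean())    # mean is exp(mu + sigma**2 / 2)
```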
First of all, at the moment SymPy does not guarantee support for numpy arrays, which is what you want in this case. Check this bug report: http://code.google.com/p/sympy/issues/detail?id=537 Second, if you want to evaluate something numerically for many values, SymPy is not the best choice (it is a symbolic library, after all). Use numpy and scipy. … Read more
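One standard bridge between the two worlds is `sympy.lambdify`, which compiles a symbolic expression into a numpy-backed function; the expression below is just an example:

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x) * sp.exp(-x)  # an arbitrary symbolic expression

# lambdify turns the expression into a fast function that accepts
# numpy arrays and uses numpy's vectorized sin/exp
f = sp.lambdify(x, expr, modules='numpy')

xs = np.linspace(0, 1, 5)
print(f(xs))
```

This keeps the symbolic manipulation in SymPy but hands the heavy numerical evaluation to numpy.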
You can use scipy.sparse.hstack to concatenate sparse matrices with the same number of rows (horizontal concatenation): from scipy.sparse import hstack hstack((X, X2)) Similarly, you can use scipy.sparse.vstack to concatenate sparse matrices with the same number of columns (vertical concatenation). Using numpy.hstack or numpy.vstack would instead create a numpy array holding the two sparse matrix objects themselves, not a combined sparse matrix.
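A small runnable sketch of both operations, with made-up matrices:

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack, vstack

X = csr_matrix(np.arange(6).reshape(2, 3))  # shape (2, 3)
X2 = csr_matrix(np.ones((2, 2)))            # shape (2, 2)

# Horizontal: row counts must match -> result shape (2, 5)
H = hstack((X, X2))

# Vertical: column counts must match -> result shape (3, 3)
V = vstack((X, csr_matrix(np.zeros((1, 3)))))

print(H.shape, V.shape)
```

The result of `hstack`/`vstack` is a sparse matrix (COO format by default); pass `format='csr'` if you need CSR for subsequent row slicing.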
np.isnan combined with np.argwhere x = np.array([[1,2,3,4], [2,3,np.nan,5], [np.nan,5,2,3]]) np.argwhere(np.isnan(x)) output: array([[1, 2], [2, 0]])