How to use fit_generator with multiple inputs

Try this generator:

    def generator_two_img(X1, X2, y, batch_size):
        genX1 = gen.flow(X1, y, batch_size=batch_size, seed=1)
        genX2 = gen.flow(X2, y, batch_size=batch_size, seed=1)
        while True:
            X1i = genX1.next()
            X2i = genX2.next()
            yield [X1i[0], X2i[0]], X1i[1]

Generator for 3 inputs:

    def generator_three_img(X1, X2, X3, y, batch_size):
        genX1 = gen.flow(X1, y, batch_size=batch_size, seed=1)
        genX2 = gen.flow(X2, y, batch_size=batch_size, seed=1)
        genX3 … Read more
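The key idea is that giving both streams the same seed keeps their shuffling synchronized. Here is a minimal, self-contained sketch of that pattern using a hypothetical `batch_flow` helper as a stand-in for Keras's `gen.flow()` (the real `ImageDataGenerator` also applies augmentation, which is omitted here):

```python
import numpy as np

def batch_flow(X, y, batch_size, seed):
    """Minimal stand-in for Keras's gen.flow(): yields shuffled (X_batch, y_batch) forever."""
    rng = np.random.RandomState(seed)
    n = len(X)
    while True:
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            sel = idx[start:start + batch_size]
            yield X[sel], y[sel]

def generator_two_img(X1, X2, y, batch_size):
    # The identical seed keeps both streams shuffled in the same order,
    # so each pair of batches stays aligned with the same labels.
    genX1 = batch_flow(X1, y, batch_size, seed=1)
    genX2 = batch_flow(X2, y, batch_size, seed=1)
    while True:
        X1i = next(genX1)
        X2i = next(genX2)
        yield [X1i[0], X2i[0]], X1i[1]

# Toy data where alignment is easy to check: X1[i] = i, X2[i] = i + 8, y[i] = i.
X1 = np.arange(8).reshape(8, 1)
X2 = np.arange(8, 16).reshape(8, 1)
y = np.arange(8)
(bx1, bx2), by = next(generator_two_img(X1, X2, y, batch_size=4))
assert np.array_equal(bx1[:, 0], by)      # first input aligned with labels
assert np.array_equal(bx2[:, 0], by + 8)  # second input aligned too
```

If the two seeds differed, the streams would shuffle independently and the paired batches would carry mismatched labels.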

What is the role of TimeDistributed layer in Keras?

In Keras, when building a sequential model, the second dimension (the one after the sample dimension) is usually a time dimension. This means that if, for example, your data is 5-dimensional with shape (sample, time, width, length, channel), you could apply a convolutional layer using TimeDistributed (which is applicable to 4-dim with (sample, … Read more
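Conceptually, TimeDistributed applies the same inner layer independently at every time step, which is equivalent to folding the time axis into the batch axis, running the layer once, and unfolding again. A minimal numpy sketch of that idea (the `spatial_mean` "layer" is a hypothetical stand-in for a real convolutional layer):

```python
import numpy as np

def time_distributed(layer_fn, x):
    """Apply layer_fn at every time step by folding time into the batch axis,
    which is conceptually what Keras's TimeDistributed wrapper does."""
    samples, time = x.shape[:2]
    flat = x.reshape((samples * time,) + x.shape[2:])
    out = layer_fn(flat)
    return out.reshape((samples, time) + out.shape[1:])

# A toy "layer": global average over the spatial dims of (batch, width, length, channel).
def spatial_mean(batch):
    return batch.mean(axis=(1, 2))

x = np.random.rand(2, 5, 4, 4, 3)  # (sample, time, width, length, channel)
y = time_distributed(spatial_mean, x)
assert y.shape == (2, 5, 3)        # the layer ran independently per time step
assert np.allclose(y[0, 0], x[0, 0].mean(axis=(0, 1)))
```

Note that the same weights are shared across all time steps; only the data differs per step.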

Why must a nonlinear activation function be used in a backpropagation neural network? [closed]

The purpose of the activation function is to introduce non-linearity into the network. In turn, this allows you to model a response variable (aka target variable, class label, or score) that varies non-linearly with its explanatory variables. Non-linear means that the output cannot be reproduced from a linear combination of the inputs (which is not … Read more

How do I load a caffe model and convert to a numpy array?

Here's a nice function that converts a Caffe net to a Python list of dictionaries, so you can pickle it and read it any way you want:

    import caffe

    def shai_net_to_py_readable(prototxt_filename, caffemodel_filename):
        net = caffe.Net(prototxt_filename, caffemodel_filename, caffe.TEST)  # read the net + weights
        pynet_ = []
        for li in xrange(len(net.layers)):  # for each layer in the … Read more

Negative dimension size caused by subtracting 3 from 1 for ‘conv2d_2/convolution’

By default, Convolution2D expects the input to be in the format (samples, rows, cols, channels), which is "channels-last". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this using the optional keyword data_format="channels_first" when declaring the Convolution2D layer:

    model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1, 28, 28), data_format='channels_first'))
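The error message itself comes from simple shape arithmetic: with "valid" padding and stride 1, each spatial output size is input − kernel + 1. When (samples, 1, 28, 28) data is misread as channels-last, the axis of size 1 is treated as a spatial dimension, and subtracting a 3-wide kernel from it goes negative. A small sketch of that arithmetic (the helper name is illustrative, not a Keras API):

```python
def conv_output_size(input_size, kernel_size, padding="valid"):
    """Spatial output size of a stride-1 2-D convolution along one axis."""
    if padding == "valid":
        return input_size - kernel_size + 1
    return input_size  # "same" padding preserves the spatial size

# Data shaped (samples, 1, 28, 28) misread as channels-last: the trailing
# axis of size 28 becomes "channels", and an axis of size 1 is treated as
# spatial, so a 3x3 kernel is subtracted from a dimension of size 1.
assert conv_output_size(1, 3) == -1    # the "negative dimension" in the error
assert conv_output_size(28, 3) == 26   # fine once data_format is corrected
```

Setting data_format="channels_first" (or reshaping the data to channels-last) makes the 28x28 axes the spatial ones, and the subtraction stays positive.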

What is the purpose of the add_loss function in Keras?

I'll try to answer the original question of why model.add_loss() is being used instead of specifying a custom loss function via model.compile(loss=...). All loss functions in Keras always take two parameters, y_true and y_pred. Have a look at the definition of the various standard loss functions available in Keras; they all have these two parameters. … Read more
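The limitation that motivates add_loss() can be illustrated without Keras at all: a compile()-style loss is locked into the (y_true, y_pred) signature, while an add_loss()-style term may depend on internal model quantities, such as a VAE's latent statistics. A hedged numpy sketch (function names are illustrative):

```python
import numpy as np

# A compile()-style loss only ever sees the labels and the final predictions:
def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

# An add_loss()-style term can depend on anything inside the model, e.g. a
# KL-divergence penalty on latent mean/log-variance (as in a VAE), which the
# (y_true, y_pred) signature alone cannot express.
def kl_term(z_mean, z_log_var):
    return -0.5 * np.mean(1 + z_log_var - z_mean ** 2 - np.exp(z_log_var))

y_true = np.array([1.0, 0.0])
y_pred = np.array([0.9, 0.1])
z_mean = np.array([0.2, -0.1])      # internal tensors, invisible to mse()
z_log_var = np.array([0.0, 0.0])

total = mse(y_true, y_pred) + kl_term(z_mean, z_log_var)
assert kl_term(np.zeros(2), np.zeros(2)) == 0.0  # penalty vanishes at the prior
assert total > mse(y_true, y_pred)               # the extra term contributes
```

In Keras terms, total mirrors what training minimizes after model.add_loss(kl) is called: the compiled loss plus every added term.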

Keras Text Preprocessing – Saving Tokenizer object to file for scoring

The most common way is to use either pickle or joblib. Here is an example of how to use pickle to save a Tokenizer:

    import pickle

    # saving
    with open('tokenizer.pickle', 'wb') as handle:
        pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)

    # loading
    with open('tokenizer.pickle', 'rb') as handle:
        tokenizer = pickle.load(handle)

Cost function training target versus accuracy desired goal

How can we train a neural network so that it ends up maximizing classification accuracy? I'm asking for a way to get a continuous proxy function that's closer to the accuracy. To start with, the loss function used today for classification tasks in (deep) neural nets was not invented with them; it goes back … Read more
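The core reason accuracy cannot serve directly as a training target is that it is piecewise constant: nudging the weights slightly usually changes no predicted class, so its gradient is zero almost everywhere, while cross-entropy varies smoothly with the predicted probabilities. A small numpy demonstration of the difference:

```python
import numpy as np

def accuracy(y_true, p):
    """Fraction of correct hard decisions at a 0.5 threshold."""
    return np.mean((p >= 0.5).astype(int) == y_true)

def cross_entropy(y_true, p):
    """Binary cross-entropy, the usual continuous training target."""
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 1, 0, 0])
confident = np.array([0.99, 0.95, 0.05, 0.01])
hesitant = np.array([0.51, 0.55, 0.45, 0.49])

# Accuracy cannot distinguish the two models: both classify everything right...
assert accuracy(y, confident) == accuracy(y, hesitant) == 1.0
# ...but cross-entropy strongly prefers the confident one, so it still
# provides a gradient signal even when accuracy has plateaued.
assert cross_entropy(y, confident) < cross_entropy(y, hesitant)
```

This is why cross-entropy is described as a continuous proxy: minimizing it pushes probabilities toward the correct side of the threshold, which tends to (but is not guaranteed to) maximize accuracy.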