ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4

I solved the problem by reshaping the input to (95000, 360, 1) and the output to (95000, 22), and by changing the input shape to (360, 1) where the model is defined:

model = Sequential()
model.add(LSTM(22, input_shape=(360, 1)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(ml2_train_input, ml2_train_output_enc, epochs=2, batch_size=500)
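As a hedged sketch (assuming the raw training inputs start as a 2-D array of shape (95000, 360), which is not shown in the original), the extra feature dimension can be added with NumPy before fitting:

import numpy as np

# Hypothetical raw array standing in for the real data:
# 95000 samples, each with 360 time steps
raw_input = np.zeros((95000, 360))

# Add a trailing feature dimension -> (95000, 360, 1), as the LSTM expects
ml2_train_input = raw_input[..., np.newaxis]
print(ml2_train_input.shape)  # (95000, 360, 1)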

expected ndim=3, found ndim=2

An LSTM layer expects its inputs to have shape (batch_size, timesteps, input_dim). In Keras you pass (timesteps, input_dim) as the input_shape argument, but you are setting input_shape=(9,), which does not include the timesteps dimension. The problem can be solved by adding an extra dimension to input_shape for the time dimension, e.g. adding an extra dimension with value … Read more
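A minimal sketch of that fix, assuming a single timestep and illustrative layer sizes (none of which appear in the original):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Data with 9 features per sample, reshaped to add a timesteps dimension of 1
X = np.random.rand(100, 9)
X = X.reshape((X.shape[0], 1, X.shape[1]))  # (samples, timesteps=1, features=9)

model = Sequential()
model.add(LSTM(16, input_shape=(1, 9)))  # input_shape now includes timesteps
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')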

Neural Network LSTM input shape from dataframe

Below is an example that sets up time series data to train an LSTM. The model output is nonsense, as I only set it up to demonstrate how to build the model.

import pandas as pd
import numpy as np

# Get some time series data
df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/timeseries.csv")
df.head()

Time series dataframe: Date A … Read more
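A hedged sketch of one common way to go from a dataframe column to the 3-D array an LSTM expects (the column 'A' appears in the excerpt's dataframe preview; the window length is an assumption):

import pandas as pd
import numpy as np

df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/timeseries.csv")

window = 10                # assumed lookback length
values = df['A'].values    # one of the dataframe's series columns

# Build overlapping windows: each sample is `window` steps, target is the next value
X = np.array([values[i:i + window] for i in range(len(values) - window)])
y = values[window:]
X = X[..., np.newaxis]     # (samples, timesteps, features=1)
print(X.shape, y.shape)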

TensorFlow: Remember LSTM state for next batch (stateful LSTM)

I found it easiest to save the whole state for all layers in a placeholder.

init_state = np.zeros((num_layers, 2, batch_size, state_size))
...
state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])

Then unpack it and create a tuple of LSTMStateTuples before using the native TensorFlow RNN API:

l = tf.unpack(state_placeholder, axis=0)
rnn_tuple_state = tuple(
    [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], … Read more
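A hedged end-to-end sketch of the same idea in the TF 1.x graph API (note that tf.unpack was renamed tf.unstack in TensorFlow 1.0; all sizes below are illustrative assumptions):

import numpy as np
import tensorflow as tf  # TF 1.x graph-mode API

num_layers, batch_size, state_size, num_steps, input_dim = 2, 4, 8, 5, 3

inputs = tf.placeholder(tf.float32, [batch_size, num_steps, input_dim])
state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])

# Rebuild the per-layer LSTMStateTuples from the packed placeholder
l = tf.unstack(state_placeholder, axis=0)
rnn_tuple_state = tuple(
    tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1]) for idx in range(num_layers)
)

cell = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.LSTMCell(state_size) for _ in range(num_layers)]
)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=rnn_tuple_state)

# At run time, feed the previous batch's final state back in
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    state = np.zeros((num_layers, 2, batch_size, state_size))
    for _ in range(3):  # a few consecutive batches
        batch = np.random.rand(batch_size, num_steps, input_dim)
        out, st = sess.run([outputs, final_state],
                           {inputs: batch, state_placeholder: state})
        state = np.array(st)  # stack the LSTMStateTuples back into one array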

Error when checking model input: expected lstm_1_input to have 3 dimensions, but got array with shape (339732, 29)

Setting timesteps = 1 (since I want one timestep for each instance) and reshaping X_train and X_test as:

import numpy as np

X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))

This worked!
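For completeness, a hedged sketch of a model whose input_shape matches the reshaped arrays (the layer width and output layer are assumptions; 29 is the feature count from the error message):

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# After reshaping, each sample has shape (timesteps=1, features=29)
model.add(LSTM(32, input_shape=(1, 29)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')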

Proper way to feed time-series data to stateful LSTM?

The answer is: it depends on the problem at hand. For your case of one-step prediction, yes, you can, but you don't have to. Whether you do or not, however, will significantly impact learning.

Batch vs. sample mechanism ("see AI" = see the "additional info" section)

All models treat samples as independent examples; a batch of 32 … Read more
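To make the trade-off concrete, here is a hedged sketch of the stateful variant (the window length, layer width, and toy series are assumptions): consecutive windows of one long series are fed in chronological order so the state carried between batches is meaningful, and the state is reset once per pass over the series.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, features, batch_size = 10, 1, 1
series = np.sin(np.linspace(0, 50, 1000))  # one long toy series

# Non-overlapping consecutive windows; the one-step-ahead value is the target
n = (len(series) - 1) // timesteps
X = series[:n * timesteps].reshape(n, timesteps, features)
y = series[np.arange(1, n + 1) * timesteps]

model = Sequential()
model.add(LSTM(16, stateful=True, batch_input_shape=(batch_size, timesteps, features)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

for epoch in range(2):
    # shuffle=False keeps windows in order so the carried state stays valid
    model.fit(X, y, batch_size=batch_size, epochs=1, shuffle=False)
    model.reset_states()  # start each pass over the series from a clean state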

When does keras reset an LSTM state?

Checking with some tests, I reached the following conclusion, which agrees with the documentation and with Nassim's answer:

First, there isn't a single state in a layer, but one state per sample in the batch. There are batch_size parallel states in such a layer.

Stateful=False

In a stateful=False case, all the states are … Read more
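A hedged sketch that makes this observable (all sizes are illustrative): with stateful=True, the per-sample states survive successive predict calls until reset_states() is called.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM

batch_size, timesteps, features = 2, 5, 3
model = Sequential()
model.add(LSTM(4, stateful=True, batch_input_shape=(batch_size, timesteps, features)))

x = np.ones((batch_size, timesteps, features))
a = model.predict(x, batch_size=batch_size)
b = model.predict(x, batch_size=batch_size)  # state carried over: differs from a
model.reset_states()                          # zeroes the state of every sample in the batch
c = model.predict(x, batch_size=batch_size)  # fresh state: matches a again
print(np.allclose(a, c), np.allclose(a, b))   # True False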

Keras: How should I prepare input data for RNN?

If you only want to predict the output using the most recent 5 inputs, there is no need to ever provide the full 600 time steps of any training sample. My suggestion would be to pass the training data in the following manner:

         t=0  t=1  t=2  t=3  t=4  t=5  ...  t=598  t=599
sample0  |--------------------|
sample0  … Read more
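A hedged sketch of that sliding-window preparation (array names and counts are assumptions): each 600-step sample is cut into overlapping windows of the 5 most recent inputs.

import numpy as np

n_samples, total_steps, window, features = 8, 600, 5, 1
data = np.random.rand(n_samples, total_steps, features)  # toy stand-in for the real data

# For every sample, take all overlapping windows of `window` consecutive steps
windows = np.array([
    data[s, i:i + window]
    for s in range(n_samples)
    for i in range(total_steps - window + 1)
])
print(windows.shape)  # (n_samples * 596, 5, 1)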
