I'm building a multi-class text model in Keras. The initial design processes texts with the quanteda package into its native 'Document-feature matrix' (dfm) format, which is then passed into the model. I've successfully built a simple dense NN this way without problems. The issue starts when I change the model design to an RNN.
My input shape is:
> dim(dfm_train)
[1] 16083 1868
My model definition looks like this:
model <- keras_model_sequential() %>%
  layer_simple_rnn(units = 64, input_shape = c(dim(dfm_train)[[2]]), return_sequences = TRUE) %>%
  layer_dropout(rate = 0.5) %>%
  layer_simple_rnn(units = 64, return_sequences = TRUE) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 16, activation = "softmax")
And this is the error I receive:
Error in py_call_impl(callable, dots$args, dots$keywords) :
  ValueError: Input 0 is incompatible with layer simple_rnn_28: expected ndim=3, found ndim=2
I've been experimenting with return_sequences, changing the input dimensions, and adding layer_reshape into the mix, but unfortunately I haven't been able to get past this error. Can anyone help?
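For reference, this is roughly what one of my reshape attempts looked like (a sketch; the choice of a single timestep is my guess at how to turn the 2-D dfm into the 3-D `(samples, timesteps, features)` shape the RNN layer seems to want):

```r
library(keras)

# Sketch of one attempted workaround: treat each document as a
# "sequence" of length 1 so the input array becomes 3-D,
# i.e. (samples, timesteps, features) = (16083, 1, 1868).
x_train <- array_reshape(
  as.matrix(dfm_train),
  dim = c(dim(dfm_train)[[1]], 1, dim(dfm_train)[[2]])
)

model <- keras_model_sequential() %>%
  # input_shape excludes the samples dimension: (timesteps, features)
  layer_simple_rnn(units = 64, input_shape = c(1, dim(dfm_train)[[2]])) %>%
  layer_dense(units = 16, activation = "softmax")
```

This at least changes the error, but a length-1 sequence obviously defeats the point of an RNN, so I suspect I'm misunderstanding what shape the dfm should be fed in as.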