I don't have your data, so I made some synthetic data to replicate your issue. Basically, your input data should be a single array with dimensions number of subjects * x * y * z * channels, where x * y * z are the dimensions of the MRI cube of image data.
Okay, so here is my simulated data:
library(keras)

# Image data: 10 subjects with images of 100x100x100 voxels, 1 channel (monochromatic)
# Values are between 0 and 1 (corresponding to 0 to 255, rescaled)
train_x <- runif(10 * 100 * 100 * 100 * 1, 0, 1)
train_x <- array_reshape(train_x, dim = c(10, 100, 100, 100, 1))
# Outcome data, binary
train_y <- to_categorical(sample(c(0,1), size = 10, replace = TRUE))
The dimensions of train_x are 10 100 100 100 1. When we specify the input_shape for a keras model, we leave off the first dimension, which is assumed to be the samples dimension (the number of subjects/samples).
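As a small sketch, you can derive input_shape directly from the array instead of hard-coding it (this assumes train_x is the 5-dimensional array built above):

dim(train_x)                      # 10 100 100 100 1
input_shape <- dim(train_x)[-1]   # drop the samples dimension
input_shape                       # 100 100 100 1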
Next we can build a model with a single convolution layer (10 kernels), flatten its output, and feed it into a final dense layer that predicts the binary outcome (0 or 1).
model <- keras_model_sequential()
model %>%
  layer_conv_3d(filters = 10,
                kernel_size = c(3, 3, 3),
                input_shape = c(100, 100, 100, 1),
                data_format = 'channels_last') %>%
  layer_flatten() %>%
  layer_dense(units = 2, activation = 'softmax')
model %>%
  compile(loss = 'binary_crossentropy',
          optimizer = 'adam',
          metrics = 'accuracy')
# Fit the model (note the argument is epochs, not epoch)
history <-
  model %>%
  fit(train_x,
      train_y,
      epochs = 200,
      batch_size = 32,
      validation_split = 0.2)
This runs for me, so I think you just need to reshape your data: take it from a list of arrays to a single array with 5 dimensions.
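If your images currently live in a list of 3-D arrays (one per subject), that reshape can be sketched in base R like this. The name img_list and the 100x100x100 dimensions are assumptions here; adjust them to match your actual data:

# img_list: hypothetical list where each element is a 100x100x100 array (one subject)
n <- length(img_list)
train_x <- array(NA_real_, dim = c(n, 100, 100, 100, 1))
for (i in seq_len(n)) {
  train_x[i, , , , 1] <- img_list[[i]]
}
dim(train_x)  # n 100 100 100 1

After this, train_x has the subjects * x * y * z * channels shape that layer_conv_3d expects.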