Thanks Sean! This is weird - the dataset is small (500 kB) and each model by itself is exceedingly small: the biggest ones have about 5000 parameters, so even the smallest MobileNet architectures are hundreds of times bigger than that. But apparently, fitting them sequentially increases memory occupation considerably. I don't understand how I could use ulimit here - its goal seems to be to set a hard limit on memory usage. However, I think it would be more useful for me to track how memory occupation grows over time as I fit more models, so that I could choose to fit no more than, say, 300 models.
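One way to track the growth over time, without touching the R code at all, is to sample the resident set size (RSS) of the R session from a separate shell while the fits run. This is just a sketch: here I use the shell's own PID (`$$`) purely so the line runs as-is - you would substitute the PID of the R process doing the fitting (e.g. from `pgrep -f rsession` or `top`):

```shell
# Print the resident set size (RSS, in kB) of a process once.
# $$ is the current shell's PID, used here only for illustration;
# replace it with the PID of the R session running the model fits.
ps -o rss= -p $$
```

Running that line in a loop (`while sleep 5; do ...; done`, or under `watch`) gives a rough memory-vs-time log, from which you could pick a safe cap on the number of models. From inside R, `pryr::mem_used()` after each fit would give a similar in-session view, though it only sees R's own allocations, not TensorFlow's.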
I wonder if I should ask a question in the Machine Learning and Modeling category - I'm curious to understand whether (and why) TensorFlow is such a memory hog. My first suspect was that the computational graph was growing progressively with each fitted model, but the console output for each model clearly shows that the graph is being re-initialized for each new model, as it should be:
```
> summary(model)
____________________________________________________________________________________________________
Layer (type)                                 Output Shape                            Param #        
====================================================================================================
dense_1 (Dense)                              (None, 128)                             1536           
____________________________________________________________________________________________________
dropout_1 (Dropout)                          (None, 128)                             0              
____________________________________________________________________________________________________
dense_2 (Dense)                              (None, 4)                               516            
====================================================================================================
Total params: 2,052
Trainable params: 2,052
Non-trainable params: 0
____________________________________________________________________________________________________
```