Machine learning on big datasets is inherently memory intensive; I don't think you'll get meaningful reductions in memory allocation regardless of which package you use.
I did find a Python package for random forests that is supposed to partition the process so it fits in less memory, but it is just the reference implementation for a paper, so it is not well documented, tested, or even maintained.
Most ML tools aimed at practical applications assume large computational resources are available, because that's what makes sense for real-world scenarios.
I would suggest testing things on a smaller subset (sample) of your data, and then using cloud computing to train the final model if it turns out to be needed and worth the cost.
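As a rough sketch of the subsampling idea (using scikit-learn and synthetic data as stand-ins for your actual setup; the sizes here are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large dataset; replace with your own data.
X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Draw a small random subset for prototyping so the forest fits in memory.
rng = np.random.default_rng(0)
idx = rng.choice(len(X_train), size=5_000, replace=False)

model = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(X_train[idx], y_train[idx])

# Evaluate on the full held-out set to check whether the subsample
# was representative enough before paying for a full-scale training run.
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```

If the accuracy on the subsample-trained model is close to what you need, you may not have to train on the full dataset at all; if not, that gap gives you a sense of whether scaling up in the cloud is worth it.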