Thanks, I appreciate the quick response. I think I understand the approach you are suggesting, and I will respond to it here. I am not sure exactly what the difference is between .rds files and the simple .dat files that are saved by the "save" command, but I can look that up later; I doubt the difference affects the high-level discussion.
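For what it's worth, the practical difference is small: `save()`/`load()` store one or more objects together with their names, while `saveRDS()`/`readRDS()` store a single unnamed object that you assign on read. A minimal sketch (file names are just illustrations):

```r
x <- data.frame(a = 1:3)

# save()/load(): the object comes back under its original name.
save(x, file = "x.RData")
rm(x)
load("x.RData")        # recreates `x` in the current environment

# saveRDS()/readRDS(): one object, name chosen at read time.
saveRDS(x, "x.rds")
y <- readRDS("x.rds")  # assign to whatever name you like
identical(x, y)        # TRUE
```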
First, to clarify:
- I use R projects all the time. I almost never open a file that does not belong to an R project, and if there is a file in one project that I need to source in another, I either source it using the full path or create a local soft link in the project's home directory --- all under the assumption that a file is never modified by two projects, which I think is good practice.
- I do store subsets of data structures in separate .dat files. In fact, I sometimes create an environment where I store related objects and save the environment as a .dat file (I guess you can view the environment as just a list that I store, but the two differ in various ways).
- I understand the problem with putting all of my objects and state in the one file (the .RData file) associated with a project. First, the file becomes huge, and second, you often don't need all of the data you created at the same time, so saving it in a hierarchy and loading what you need when you need it makes sense and also reduces the memory footprint of the workspace.
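The environment trick mentioned above can be sketched like this (the object and file names are just illustrations):

```r
# Group related objects in one environment and persist them as a unit.
results <- new.env()
results$raw     <- data.frame(x = 1:5, y = (1:5)^2)
results$means   <- colMeans(results$raw)

save(results, file = "results.dat")  # save() accepts any named object

# In a later session:
# load("results.dat")
# results$raw
```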
But, that being said, at least the way I see it, the fact that R is an interpreter you can use to experiment with various ways of looking at data is a key difference from other programming languages, where you write a program, generate output into a file, then write another variant, and so on. The ability to quickly explore different views of the data "on the fly", query various properties, and save, even temporarily, those that might be useful is, in my opinion, a very powerful and important aspect of working with R. It is very often the case that I hold five different representations of the same data, for example some with additional columns in the data frame, some a shorter summary of it, and so on. One classic example is a data frame and a melted variant of it, where the melted form is used practically only for charting, and the regular form by all other functions.
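The data-frame-plus-melted-variant pattern can be sketched with `reshape2::melt` (the data here is illustrative; `tidyr::pivot_longer` is the more current equivalent):

```r
library(reshape2)

wide <- data.frame(id = 1:3,
                   height = c(1.6, 1.7, 1.8),
                   weight = c(60, 70, 80))

# Long ("melted") form: one row per id/variable pair, which is the
# shape charting code (e.g., ggplot2) usually wants.
long <- melt(wide, id.vars = "id")
nrow(long)  # 6 rows: 3 ids x 2 measured variables
```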
But this temporary experimenting with various views of the data creates a lot of temporary variables that I use repeatedly while developing the solid, well-formed code that will generate and store the data in the form most useful to me. I just don't know upfront what that form will be, and I'm using the interpreter to help me figure it out. At some point I do a "cleanup", but if I had to save every temporary data frame that I create and need for a day or two into a .dat (or .rds) file, the process of exploring the data and coding against it would slow down by a huge factor --- especially when dealing with huge data structures. (R's copy-on-modify semantics make it quite cheap to hold multiple representations of huge data frames, since unchanged columns are shared.) If I had to push every piece of local state to persistent storage (e.g., disk), I would lose much of the efficiency of quick exploration and resulting coding that the R interpreter allows.
The above is also true of code, though to a much lesser extent, since code in R often revolves around the data and the most efficient way to explore and manipulate it. Only after a long exploration of different approaches do I converge on the form of the data, and the code to work with it, that I'd like to keep. The interpreter supports a very efficient experimental phase until I converge on a state worth saving. And of course I also use git to store anything that is not "half-baked".
So I don't think I disagree with any of the aspects and principles you brought up in the blog, and I have many years of experience designing and writing real, long-living code in various languages. I have also developed some R libraries. But at least for me, the big difference between R and C, C++, or any other language is the ability to quickly explore and try different approaches, which feeds the development of a solid final form for both the code and the data representation.
How does all of that relate to .RData and automatic saving to it? Well, for me .RData is a temporary snapshot of the temporary state of my project and of the ongoing exploration. Probably one of the primary examples is exploring different graphical representations of the data, where different representations require different forms of the data structure (if you want it to be efficient); while I eventually end up with one or two charts, it takes dozens of them during the temporary analysis, until you converge on the "right" one. But this temporary state can last for quite a long time, so I do need persistent storage for it, at least on a daily basis.
So I think it all boils down to storing a temporary, experimental state vs. a final, much smaller state that should probably be distributed across separate data files. What I should probably do is set myself a reminder to run save.image() at the end of each day to save the temporary state, though I do agree with you that I am often "lazy" about cleaning up very old temporary state and just leave it in some .dat file (or .rds --- I will look up the difference).
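The end-of-day reminder could even be a one-liner; a sketch with a dated filename (the naming scheme is just an illustration):

```r
# Snapshot the whole workspace under a dated name, so old snapshots
# are easy to identify and prune later.
snapshot <- sprintf("snapshot-%s.RData", Sys.Date())
save.image(file = snapshot)

# Restore later with load(snapshot). Defining a .Last() function that
# calls save.image() would run the snapshot automatically on q().
```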
Please let me know if I misunderstood you or the approach you mention in the blog, and/or if you think that I'm simply "out to lunch".
Thanks again.