My favorite project wasn't flashy, but it had many moments of "I can't believe that just worked." It was essentially multi-label classification, but the customer requirements effectively mandated a single decision tree (no random forests or other fancy ensemble methods, and none of the methods that transform multi-label classification into binary classification), and the only predictors were categorical with high cardinality (hundreds of possible non-ordinal values for at least one of the few predictors). I used the rpart package, but got deep into custom splitting functions.
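For anyone curious what "custom splitting functions" look like: rpart lets you pass your own init/eval/split functions through the method argument. This is a minimal sketch of that interface, essentially the regression example from the rpart user-written-splits vignette, not my actual multi-label code — but the categorical branch (order the categories, then split as if ordered) is the hook that made the high-cardinality predictors tractable:

```r
library(rpart)

# init: describes the response; numresp/numy give the sizes of the
# label and of y, and summary() formats the node printout
itemp <- function(y, offset, parms, wt) {
  list(y = y, parms = parms, numresp = 1, numy = 1,
       summary = function(yval, dev, wt, ylevel, digits)
         paste("  mean =", format(signif(yval, digits))))
}

# eval: returns the node's label (weighted mean) and deviance (weighted SS)
etemp <- function(y, wt, parms) {
  wmean <- sum(y * wt) / sum(wt)
  list(label = wmean, deviance = sum(wt * (y - wmean)^2))
}

# split: returns a "goodness" score for every candidate split point;
# for a categorical x, direction is the ordering of the categories
stemp <- function(y, wt, x, parms, continuous) {
  n <- length(y)
  y <- y - sum(y * wt) / sum(wt)          # center the response
  if (continuous) {
    temp <- cumsum(y * wt)[-n]
    left.wt <- cumsum(wt)[-n]
    right.wt <- sum(wt) - left.wt
    lmean <- temp / left.wt
    rmean <- -temp / right.wt
    list(goodness = (left.wt * lmean^2 + right.wt * rmean^2) / sum(wt * y^2),
         direction = sign(lmean))
  } else {
    # order the categories by their mean response, then treat x as ordered
    ux <- sort(unique(x))
    wtsum <- tapply(wt, x, sum)
    ysum <- tapply(y * wt, x, sum)
    ord <- order(ysum / wtsum)
    n <- length(ord)
    temp <- cumsum(ysum[ord])[-n]
    left.wt <- cumsum(wtsum[ord])[-n]
    right.wt <- sum(wt) - left.wt
    lmean <- temp / left.wt
    rmean <- -temp / right.wt
    list(goodness = (left.wt * lmean^2 + right.wt * rmean^2) / sum(wt * y^2),
         direction = ux[ord])
  }
}

fit <- rpart(mpg ~ cyl + disp + gear, data = mtcars,
             method = list(eval = etemp, split = stemp, init = itemp))
```

Once you've wired up those three functions, everything else (pruning, printing, predict) works off the tree as usual.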
The first time I ran my custom evaluation function against the ordering algorithm I used to turn a binary ~300-dimension output into a single one-dimensional scale, and the result actually showed something that looked like a single hill (as it needed to), I was giddy. Though it's hard to stay giddy when explaining why requires a solid 20-minute lecture.
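The "single hill" check can be made concrete: given the evaluation scores laid out along the one-dimensional ordering, you want the sequence to rise to one peak and then fall. A tiny illustrative helper (is_unimodal is a hypothetical name for this post, not anything from the actual project):

```r
# Does the score curve rise to a single peak and then fall?
is_unimodal <- function(scores) {
  d <- sign(diff(scores))
  d <- d[d != 0]                 # ignore flat steps
  runs <- rle(d)$values          # collapse consecutive repeats
  identical(runs, c(1, -1)) ||   # up, then down: one hill
    identical(runs, 1) ||        # monotone increasing
    identical(runs, -1)          # monotone decreasing
}

is_unimodal(c(1, 3, 5, 4, 2))    # one hill
is_unimodal(c(1, 3, 2, 4, 1))    # two hills
```

A real version would need some tolerance for noise rather than reacting to every tiny wiggle, but the shape of the check is the same.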
Interestingly, I learned very little new about R from the project, other than that the "Profile Selected Lines" command is awesome when you actually need to optimize your code. But that's often true of more complex projects -- I don't have time to learn any more than is strictly necessary at that moment. It's reading things like this forum that makes me realize everything I did "wrong" at the time.