We definitely haven't landed on anything or discussed enough to have a firm proposal here. Just an idea. For instance, imagine you have an API at /myapi that gets 100 requests/second. Rather than write 100 records to the database that look like:
path, requesttime, duration, client
/myapi, 193051351351, 10ms, cole
We might write one record that looks like:
path, requesttime, n, min, median, max, mean, client
/myapi, 193051351351, 100, 8ms, 10ms, 30ms, 12ms, cole
Any initial thoughts on that idea? My inner data scientist is always loath to lose the granular data, but we could probably learn from tools that already do this type of monitoring to see how they aggregate. (One row per second is still ~31.5 million records per year!)
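As a rough sketch, the aggregation step might look something like this (function and field names here are hypothetical, just illustrating the proposed schema):

```python
import statistics

def aggregate_window(durations_ms, path, window_start, client):
    """Collapse one second of per-request durations (in ms) for a single
    path/client into one summary row. Purely illustrative, not a proposal
    for actual storage code."""
    durations = sorted(durations_ms)
    return {
        "path": path,
        "requesttime": window_start,
        "n": len(durations),
        "min": durations[0],
        "median": statistics.median(durations),
        "max": durations[-1],
        "mean": sum(durations) / len(durations),
        "client": client,
    }

# e.g. five requests observed in one window
row = aggregate_window([8, 10, 30, 12, 10], "/myapi", 193051351351, "cole")
```

Percentiles beyond the median (p95/p99) would probably matter more in practice, but they can't be re-derived later once the raw samples are gone, which is exactly the granularity trade-off above.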
Great to hear!! I will pass it along to those who worked on it! 
Exciting stuff! I'm looking forward to hearing how it goes!