I couldn't find an answer to my question (maybe I don't know how to search for this issue specifically), so I'll try to explain as best I can.
The thing is that I have data that comes from multiple files, structured somewhat like this:
- institution
- date (month and year)
- table-1
- table-...
- table-n
The data is a compilation of demographic data, so each row isn't exactly "an observation"; it's more like a summary, for example:
[[12]]
# A tibble: 16 x 15
   tipo_atencion    profesional `6-9` `10-14` `15-19` `20-24` `25-34` `35-44` `45-54` `55-64` `65_y_mas` hombres mujeres mes   año
   <chr>            <chr>       <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>      <dbl>   <dbl>   <dbl> <chr> <int>
 1 pre-concepcional medico          0       0       0       0       0       0       0       0          0       0       0 enero  2012
 2 pre-concepcional matrona/on      0       0       0       0       0       0       0       0          0       0       0 enero  2012
 3 prenatal         medico          0       0       0       0       0       0       0       0          0       0       0 enero  2012
 4 prenatal         matrona/on      0       0      11      19      29       9       0       0          0       0      68 enero  2012
.....
After loading and preparing the data, I found myself with a list of 12 tibbles (one for each table), and when I bind the rows I go from 16 distinct rows to 192 (and I have data from 2012 to 2018...).
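For reference, here's a minimal sketch of that binding step with dplyr::bind_rows(), assuming the list of tibbles is called `tablas` (the two toy tibbles below are placeholders, not the real data):

    library(dplyr)

    # Placeholder list standing in for the 12 tibbles loaded from the files
    tablas <- list(
      tibble(tipo_atencion = "prenatal", profesional = "medico",     mes = "enero", anio = 2012L),
      tibble(tipo_atencion = "prenatal", profesional = "matrona/on", mes = "enero", anio = 2012L)
    )

    # .id records which tibble each row came from, which helps tell
    # the 12 sources apart after binding
    combinado <- bind_rows(tablas, .id = "tabla")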
I have been thinking that maybe R isn't the best place to structure this data and that I should use a relational database to avoid the repetition, since I'm starting to feel this isn't going to scale well.
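In case it helps to make that concrete, here's a hedged sketch of pushing the combined tibble into SQLite with DBI + RSQLite (`combinado` is the bound tibble from the sketch above; the file and table names are just illustrative):

    library(DBI)

    # Open (or create) a local SQLite file; the name is made up
    con <- dbConnect(RSQLite::SQLite(), "demografia.sqlite")

    # Store the combined data as a single table for now; repeated
    # fields (institution, mes, año) could later be split into their
    # own lookup tables and joined back on demand
    dbWriteTable(con, "atenciones", combinado, overwrite = TRUE)

    dbDisconnect(con)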
PS: I couldn't think of a clearer way to word the title, so if somebody can suggest a better one, I'll change it ASAP.