I have a list of tibbles (about 200 tibbles, each 50 x 3) that takes up a huge amount of memory (over 2 GB).
I did some detective work and found that these tibbles carry a lot of information in their na.action attribute: a vector of about 700,000 entries, while each tibble itself has only 50 rows.
In other words, the data in each tibble is about 1 KB, but its na.action attribute is about 45 MB.
Is there a way to clear this out? How did this happen?
Here is the str() output for one of the tibbles:
tibble [50 x 3] (S3: tbl_df/tbl/data.frame)
$ ts : POSIXct[1:50], format: "2020-10-12 16:06:00" "2020-10-12 16:08:00" "2020-10-12 16:10:00" "2020-10-12 16:12:00" ...
$ fs_flow : num [1:50] -0.0273 -0.0265 -0.0257 -0.0249 -0.0241 ...
$ viscosity: num [1:50] 466 465 464 463 462 ...
- attr(*, "na.action")= 'omit' Named int [1:696797] 26303 26304 26305 26306 26307 26308 26309 26310 26311 26312 ...
..- attr(*, "names")= chr [1:696797] "26303" "26304" "26305" "26306" ...
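For what it's worth, here is a minimal sketch of what I suspect is going on and a workaround I am considering. base R's na.omit() records the index of every dropped row in an "na.action" attribute on its result, and some downstream operations preserve that attribute, so a small slice can still carry the full vector. The list name my_list below is hypothetical:

```r
# na.omit() attaches an "na.action" attribute listing every dropped row.
big   <- data.frame(x = c(1:3, rep(NA, 10)))
clean <- na.omit(big)
str(attr(clean, "na.action"))  # 'omit' Named int [1:10], like in my case

# Possible fix: strip the attribute from every tibble in the list
# (my_list stands in for my real list of ~200 tibbles).
my_list <- list(clean)
my_list <- lapply(my_list, function(x) {
  attr(x, "na.action") <- NULL
  x
})
```

After this, object.size() on the list should drop back to roughly the size of the data alone.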