It looks like your code grows the output vectors on each pass through the loop rather than allocating vectors of the required size up front. This is the second circle of hell in The R Inferno, and it's probably one factor slowing down the processing.
Instead, you could declare objects of the required size up front. For example:
x = rep(NA_real_, choose(1580, 2))  # choose() gives the pair count directly, without materializing the full combn() matrix
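To see why this matters, here's a minimal sketch contrasting the two patterns (the 1,000-element loop and the squaring operation are just placeholders):

```r
n = 1000

# Growing in the loop: each c() call allocates a new, longer vector
# and copies all existing elements into it
grow = c()
for (i in 1:n) grow = c(grow, i^2)

# Preallocating once and filling by index: no repeated copying
pre = rep(NA_real_, n)
for (i in 1:n) pre[i] = i^2
```

Both loops produce identical results; only the allocation cost differs, and the gap widens rapidly as n grows.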
Here's another approach that uses pmap to reduce the amount of code needed for the iteration:
library(tidyverse)
# Fake data
set.seed(2)
dd = matrix(rnorm(1580*37), ncol=37)
# Get row pairs
row.pairs = t(combn(1580, 2)) %>% as.data.frame()
# Run cor.test on every pair of rows and store in a list
cor.list = row.pairs %>%
  pmap(~ cor.test(dd[.x, ], dd[.y, ]))
# Name list elements for the row pairs that were tested
names(cor.list) = apply(row.pairs, 1, paste, collapse="-")
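Since each element of cor.list is an htest object, you can then extract individual components by name with map_dbl. Here's a self-contained sketch using a small stand-in list (for the real analysis it would be map_dbl(cor.list, "p.value")):

```r
library(purrr)

# Small stand-in for cor.list: cor.test results for two row pairs
set.seed(2)
m = matrix(rnorm(4 * 37), ncol = 37)
small.list = list("1-2" = cor.test(m[1, ], m[2, ]),
                  "3-4" = cor.test(m[3, ], m[4, ]))

# Passing a character string to map_dbl extracts that component
# from every element, returning a named numeric vector
p.values = map_dbl(small.list, "p.value")
```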
Or, if you want to return a data frame with just the estimate and the p-value:
cor.df = map2_df(row.pairs[,1], row.pairs[,2],
                 function(a, b) {
                   ct = cor.test(dd[a, ], dd[b, ])
                   data.frame(Sample_1 = a,
                              Sample_2 = b,
                              estimate = ct$estimate,
                              p.value = ct$p.value)
                 })
You're running more than 1.2 million tests, so it's probably going to take a while in any case. On my new MacBook Pro the pmap approach took about 2.5 minutes.
Regardless of how this is coded, does it make analytical sense to run so many significance tests? It seems likely that you'd have serious multiple testing issues, a lot of false positives, and a high risk of many Type M and Type S errors.
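If you do proceed, R's built-in p.adjust can at least control the false discovery rate across all the pairwise tests. A minimal sketch on a toy p-value vector (for the data frame above it would be p.adjust(cor.df$p.value, method = "BH")):

```r
# Toy vector of raw p-values from several tests
p.raw = c(0.001, 0.004, 0.02, 0.03, 0.2, 0.6)

# Benjamini-Hochberg correction controls the expected proportion
# of false positives among the tests you call significant
p.bh = p.adjust(p.raw, method = "BH")

# Count how many tests survive at a 5% false discovery rate
sum(p.bh < 0.05)
```

Note that adjustment only addresses the false-positive rate; it doesn't resolve the Type M/Type S concerns about the magnitude and sign of the surviving estimates.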