I have a y variable that contains 160,000 columns that I am going to use in mediation analysis. How can I speed things up? Is it possible to use vectorisation like in NumPy? Where can I use such commands?
R is inherently vectorised.
Has your code been particularly slow?
Your question reads as though you are anticipating problems before encountering them.
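To illustrate what "inherently vectorised" means here, a minimal sketch: arithmetic in R operates elementwise on whole vectors, with no explicit loop.

```r
x <- 1:5
y <- x^2   # elementwise squaring of the whole vector, no for loop
y          # 1 4 9 16 25
```

The same applies to most base functions (`sqrt`, `log`, comparisons, etc.), so many loops people write out of habit are unnecessary.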
I found it confusing that you say you have a variable with 160000 columns. Do you mean that you have a dataframe? Or did you mean values/entries rather than columns?
I have 160,000 values for cortical thickness and cortical area for each person that I wish to use as the dependent variable. I have read the data from FreeSurfer, and it is a dataframe where dim(df$x) gives
[1] 163842   1000
and I have age, sex, and group for the 1000 persons.
And is there some column/variable in the dataframe that is a signifier of some treatment (or absence of treatment) that would take the place of vlbw in the example code you shared?
This is neuroimaging, where I perform 160,000 statistical tests with vlbw = group and IQ. I use a for loop to go through each test. Y holds the values for 160,000 points on the left hemisphere of the brain.
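A sketch of the mass-univariate loop being described, with hypothetical simulated data standing in for the FreeSurfer matrix (the variable names and the extracted `groupvlbw` p-value are assumptions, not the original code):

```r
set.seed(42)
n <- 1000; p <- 200                       # stand-ins for 1000 persons, 163842 vertices
Y <- matrix(rnorm(n * p), n, p)           # simulated vertex-wise thickness values
age <- rnorm(n, mean = 25)
sex <- factor(sample(c("F", "M"), n, replace = TRUE))
group <- factor(sample(c("control", "vlbw"), n, replace = TRUE))

# One lm() call per vertex: this per-column loop is the slow part
pvals <- numeric(p)
for (i in seq_len(p)) {
  fit <- lm(Y[, i] ~ age + sex + group)
  pvals[i] <- summary(fit)$coefficients["groupvlbw", "Pr(>|t|)"]
}
```

Most of the cost is in calling `lm()` and `summary()` 163,842 times, not in the arithmetic itself.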
OK. It's hard for me to know what you know versus what you don't know in R...
What's the most specific question you would like some support with relating to this issue?
What I want is basically the reverse. If I have an x vector with numbers from 1:16000,
I can do y = x**2 in Python with NumPy. How can I apply lm to all of y in a single-instruction-multiple-data style, as is also done in Keras?
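One R-native way to avoid the per-column loop: `lm()` accepts a matrix on the left-hand side of the formula and fits every column in a single call (a multivariate "mlm" fit). A minimal sketch with dummy data (the sizes and variable names are placeholders, not the original dataset):

```r
set.seed(1)
n <- 100; p <- 50                          # small stand-ins for 1000 persons, 163842 vertices
Y <- matrix(rnorm(n * p), n, p)            # dummy response matrix
age <- rnorm(n, mean = 25)
group <- factor(sample(c("control", "vlbw"), n, replace = TRUE))

fit <- lm(Y ~ age + group)   # one call fits all p response columns
coefs <- coef(fit)           # matrix: rows = terms, columns = responses
dim(coefs)                   # 3 x 50: (Intercept), age, groupvlbw
```

The design matrix is factored once and reused for every column, which is typically far faster than 160,000 separate `lm()` calls; per-response p-values can then be obtained via `summary(fit)`, which returns one summary per response column.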