The p-value (0.008154) in the bottom row of the summary table is the p-value for the F-statistic (2.978). The F-statistic compares your regression model to a model with just the intercept and no other variables: it is the ratio of the variance explained by the predictors (per degree of freedom) to the residual variance. The p-value is the probability of obtaining an F-statistic at least as large as the one observed, under the null hypothesis that your regression model fits no better than the intercept-only model.
In this case, the result means that even though no variable in the model is individually statistically significant, the model as a whole provides a statistically significantly better fit to the data than the intercept-only model.
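In symbols, the overall F-statistic can be written as follows (a standard formulation, not taken from your output), where RSS_0 and RSS_1 are the residual sums of squares of the intercept-only and full models, n is the number of observations, and p is the number of predictors:

```latex
F \;=\; \frac{(\mathrm{RSS}_0 - \mathrm{RSS}_1)\,/\,p}{\mathrm{RSS}_1\,/\,(n - p - 1)}
```

The numerator is the improvement in fit per added predictor; the denominator is the residual variance of the full model. Under the null hypothesis, F follows an F distribution with p and n - p - 1 degrees of freedom.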
You can also perform an F-test to compare nested models, to see whether adding one or more variables significantly improves the fit. For example, in your case you could do*:
m1 = lm(V1_IGF2_Result ~ SCREEN_Weight + Sex + ETH, data=Cohort1)  # full model
m2 = lm(V1_IGF2_Result ~ SCREEN_Weight + Sex, data=Cohort1)        # ETH removed
m3 = lm(V1_IGF2_Result ~ 1, data=Cohort1)                          # intercept only
anova(m3, m2, m1, test="F")  # each row tests the variables added at that step
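Since Cohort1 isn't available here, a self-contained sketch of the same nested comparison using the built-in mtcars data (the variable names differ, but the pattern is identical):

```r
# Three nested models on mtcars, from intercept-only up to the full model
fit_full <- lm(mpg ~ hp + wt + factor(cyl), data = mtcars)  # full model
fit_mid  <- lm(mpg ~ hp + wt, data = mtcars)                # cyl removed
fit_null <- lm(mpg ~ 1, data = mtcars)                      # intercept only

# Each row of the anova table tests whether the variables added at that
# step significantly reduce the residual sum of squares
anova(fit_null, fit_mid, fit_full, test = "F")
```

Each row's p-value tells you whether the variables added at that step are worth keeping, given the variables already in the smaller model.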
Below is an example of calculating the F-statistic by hand in R. Note that the value calculated is the same as the value returned by summary(m1). The values 29 and 31 are the residual degrees of freedom (df) for models m1 and m2, respectively: df is the number of observations minus the number of parameters (regression coefficients) estimated by the model (mtcars has 32 rows; m1 estimates 3 coefficients and m2 estimates 1).
m1 = lm(mpg ~ hp + wt, data=mtcars)  # full model
m2 = lm(mpg ~ 1, data=mtcars)        # intercept-only model
summary(m1)
summary(m2)
ssrM1 = sum(resid(m1)^2)  # residual sum of squares, full model
ssrM2 = sum(resid(m2)^2)  # residual sum of squares, intercept-only model
F_statistic = ((ssrM2 - ssrM1)/(31 - 29)) / (ssrM1/29)
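To recover the corresponding p-value, compare the statistic to the F distribution with the same degrees of freedom using pf(); this simply reproduces what summary(m1) reports:

```r
# Recompute the hand-calculated F-statistic for mpg ~ hp + wt on mtcars
m1 <- lm(mpg ~ hp + wt, data = mtcars)
m2 <- lm(mpg ~ 1, data = mtcars)
ssrM1 <- sum(resid(m1)^2)  # residual sum of squares, full model
ssrM2 <- sum(resid(m2)^2)  # residual sum of squares, intercept-only model
F_statistic <- ((ssrM2 - ssrM1)/(31 - 29)) / (ssrM1/29)

# Upper-tail probability from the F distribution on 2 and 29 df;
# matches the p-value printed at the bottom of summary(m1)
p_value <- pf(F_statistic, df1 = 31 - 29, df2 = 29, lower.tail = FALSE)
p_value
```

The same logic applies to any of the nested comparisons above; only the degrees of freedom change.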
* Note that the model formulas include only the variable names, while the data frame is supplied via the data argument. The models should be specified this way, rather than by prefixing each variable with the data frame name (e.g. Cohort1$SCREEN_Weight).