Are you concerned that the eta2 value is small but the p value of the ANOVA is also small? They describe different things. eta2 describes how much of the variance in the dependent variable is accounted for by the independent variable. The p value is the probability of seeing a result at least as extreme as the observed one if the null hypothesis were true. If both values are small, it means that not much of the variation in the dependent variable is explained, but the result is probably not due to noise or "luck".
Here is an example from linear regression where p is small but so is R-squared. Only a little of the variance of y is explained by x (R-squared is small) but the effect is far beyond what you are likely to see if there were no relationship at all (p is small).
DF <- data.frame(x = seq(0,10, 0.01), y = seq(1,2, 0.001) + rnorm(1001, 0, 1))
summary(lm(y ~ x, data = DF))
#> Call:
#> lm(formula = y ~ x, data = DF)
#>
#> Residuals:
#>     Min      1Q  Median      3Q     Max
#> -3.0023 -0.6797 -0.0207  0.7050  3.8203
#>
#> Coefficients:
#>             Estimate Std. Error t value Pr(>|t|)
#> (Intercept)  1.03291    0.06539  15.795  < 2e-16 ***
#> x            0.09132    0.01132   8.064 2.09e-15 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 1.035 on 999 degrees of freedom
#> Multiple R-squared:  0.06112, Adjusted R-squared:  0.06018
#> F-statistic: 65.03 on 1 and 999 DF,  p-value: 2.095e-15
plot(y ~ x, data = DF)
abline(a = 1.03, b = 0.0913)
Created on 2020-05-02 by the reprex package (v0.3.0)
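To tie this back to ANOVA and eta2 directly: here is a sketch along the same lines (invented data of my own, not from your experiment), computing eta2 by hand as SS_between / SS_total. The group means differ only slightly relative to the noise, so eta2 is small, but with a large sample the p value is small too.

```r
set.seed(42)
# Three groups with small mean differences relative to the noise
DF2 <- data.frame(
  g = factor(rep(c("a", "b", "c"), each = 500)),
  y = rnorm(1500, mean = rep(c(0, 0.15, 0.3), each = 500), sd = 1)
)
fit <- aov(y ~ g, data = DF2)
tab <- summary(fit)[[1]]
eta2 <- tab[["Sum Sq"]][1] / sum(tab[["Sum Sq"]])  # SS_between / SS_total
p    <- tab[["Pr(>F)"]][1]
eta2  # only a percent or two of the variance is explained
p     # yet well below 0.05
```

The same logic as the regression example: a weak but real effect, detected reliably because n is large.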