# why does R choose the negative one from multiple available solutions?

here is a matrix

`A = matrix(c(1, 1, 2, 2, 0, 0), nrow = 3, byrow = TRUE)`

This code finds the SVD of A:

```
> svd(A)
$d
[1] 3.162278e+00 1.570092e-16

$u
           [,1]       [,2]
[1,] -0.4472136 -0.8944272
[2,] -0.8944272  0.4472136
[3,]  0.0000000  0.0000000

$v
           [,1]       [,2]
[1,] -0.7071068 -0.7071068
[2,] -0.7071068  0.7071068
```
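For reference, you can verify this output by reconstructing A from the returned factors (a quick check, using the matrix from the question):

```r
# Verify the decomposition returned by svd():
# A should equal u %*% diag(d) %*% t(v) up to floating-point error
A <- matrix(c(1, 1, 2, 2, 0, 0), nrow = 3, byrow = TRUE)
s <- svd(A)
A_rec <- s$u %*% diag(s$d) %*% t(s$v)
all.equal(A_rec, A)  # TRUE
```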

For matrix A, both

```
$u
           [,1]       [,2]
[1,] -0.4472136 -0.8944272
[2,] -0.8944272  0.4472136
[3,]  0.0000000  0.0000000
```

and

```
$u
          [,1]       [,2]
[1,] 0.4472136  0.8944272
[2,] 0.8944272 -0.4472136
[3,] 0.0000000  0.0000000
```

are correct solutions.

The question is: why does R choose the negative one?

For another case, R chooses the positive one:

```
A = matrix(c(5, 5, -1, 7), nrow = 2, byrow = TRUE)
svd(A)

$u
          [,1]       [,2]
[1,] 0.7071068  0.7071068
[2,] 0.7071068 -0.7071068
```

The question is not clear to me.

What do you mean by this? How do you tell whether a matrix or a vector is positive or negative?

This happens because of how the eigenvectors are computed numerically, and that can lead to results with either sign. In any case, multiplying by (-1) has no effect. What matters is the direction of the eigenvector, and that doesn't get altered if scaled by (-1).
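A small sketch of that point in R (assuming the 3×2 matrix from the question): flipping the sign of a left singular vector together with the matching right singular vector leaves the product, and hence the decomposition, unchanged.

```r
A <- matrix(c(1, 1, 2, 2, 0, 0), nrow = 3, byrow = TRUE)
s <- svd(A)

# Flip the sign of the first column of u AND the first column of v;
# their contribution u[,1] %*% t(v[,1]) is unchanged because (-1)*(-1) = 1
u2 <- s$u
v2 <- s$v
u2[, 1] <- -u2[, 1]
v2[, 1] <- -v2[, 1]

all.equal(u2 %*% diag(s$d) %*% t(v2), A)  # still TRUE
```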

You may find these relevant:

1. Singular Value Decomposition (Check this section: Singular Value Decomposition Step-by-Step)

2. Sign of eigenvectors change depending on specification of the symmetric argument for symmetric matrices

Thanks for your reply. I know the procedure of SVD, and as I said:

I just want to know the logic of putting a negative or positive sign on the eigenvectors.
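As far as I know there is no documented sign convention: `svd()` calls LAPACK, and the sign each singular vector comes out with is an internal detail of the numerical algorithm, not a deliberate choice. If you need reproducible signs, you can impose a convention yourself after the fact. A sketch of one common convention (my choice here, not something `svd()` guarantees): make the largest-magnitude entry of each column of `u` positive, flipping the matching column of `v` so the product is preserved.

```r
A <- matrix(c(1, 1, 2, 2, 0, 0), nrow = 3, byrow = TRUE)
s <- svd(A)

# For each left singular vector, take the sign of its largest-magnitude
# entry (falling back to +1 if that entry is exactly zero) ...
signs <- apply(s$u, 2, function(col) {
  sgn <- sign(col[which.max(abs(col))])
  if (sgn == 0) 1 else sgn
})

# ... then flip each column of u AND the matching column of v,
# so u %*% diag(d) %*% t(v) is unchanged
u_fixed <- sweep(s$u, 2, signs, `*`)
v_fixed <- sweep(s$v, 2, signs, `*`)

all.equal(u_fixed %*% diag(s$d) %*% t(v_fixed), A)  # still TRUE
```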

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.