In the power iteration lemma for the positive semidefinite case, what happens when the initial vector \( x \) satisfies \( \langle q_1, x \rangle < 0 \)?
What is the relationship between the matrix \( B \) and the matrix \( A \) in the power iteration lemma for the SVD case?
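For reference in the two questions above, the usual form of the lemma (stated here as an assumption, since the relationship is what the question asks for) takes \( B = A^T A \). Writing \( A = U \Sigma V^T \) makes the eigenstructure of \( B \) explicit:

\[
B = A^T A = V \Sigma^T U^T U \Sigma V^T = V \left( \Sigma^T \Sigma \right) V^T,
\qquad
B^k = V \left( \Sigma^T \Sigma \right)^k V^T,
\]

so \( B \) is positive semidefinite with eigenpairs \( (\sigma_i^2, v_i) \), and powers of \( B \) are governed by the singular values of \( A \).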
If \( \sigma_1 > \sigma_2 > 0 \) are the top two singular values of a matrix \( A \), and \( k \) is a large positive integer, what approximation is used in the power iteration method?
In the power iteration lemma for the SVD case, what is the convergence result for a random vector \( x \)?
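Expanding \( B^k x \) in the eigenbasis \( v_1, \dots, v_n \) of \( B = A^T A \) (the same assumed setup as above) shows both the approximation in question and why a random start works:

\[
B^k x = \sum_i \sigma_i^{2k} \langle v_i, x \rangle \, v_i \approx \sigma_1^{2k} \langle v_1, x \rangle \, v_1,
\]

because each subleading term is damped by a factor \( (\sigma_i / \sigma_1)^{2k} \to 0 \) when \( \sigma_1 > \sigma_2 \); hence \( B^k x / \| B^k x \| \to \pm v_1 \) for any \( x \) with \( \langle v_1, x \rangle \neq 0 \).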
Suppose you apply the power iteration method to a matrix \( A \) and obtain an approximate top right singular vector \( v \). How can you compute the corresponding singular value \( \sigma \) and left singular vector \( u \)?
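A minimal numpy sketch of the recovery step this question asks about (the function name is illustrative, and \( v \) is assumed to be a unit-norm right singular vector of \( A \)):

```python
import numpy as np

def singular_pair_from_right_vector(A, v):
    """Given a unit-norm (approximate) right singular vector v of A,
    recover sigma = ||A v|| and the left singular vector u = A v / sigma."""
    Av = A @ v
    sigma = np.linalg.norm(Av)
    return sigma, Av / sigma
```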
What is required for the initial vector \( x \) in the power iteration method to ensure convergence to the top eigenvector?
What is the probability that a random \( m \)-dimensional spherical Gaussian vector \( X \) with mean \( 0 \) and variance \( 1 \) satisfies \( \langle v_1, X \rangle = 0 \)?
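Tying the last few questions together, a hedged numpy sketch of power iteration with a spherical Gaussian start; the Gaussian initialization matters because \( \langle v_1, X \rangle = 0 \) is a probability-zero event, so the condition \( \langle v_1, x \rangle \neq 0 \) required for convergence holds almost surely:

```python
import numpy as np

def power_iteration(A, k=100, rng=None):
    """Power iteration on B = A^T A. A spherical Gaussian start has
    <v1, x> != 0 with probability 1, so the normalized iterate converges
    to the top right singular vector v1 (with flipped sign if <v1, x> < 0,
    mirroring the positive semidefinite case asked about above)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(A.shape[1])   # spherical Gaussian start
    for _ in range(k):
        x = A.T @ (A @ x)                 # apply B = A^T A without forming it
        x /= np.linalg.norm(x)            # renormalize to avoid overflow
    return x                              # approximately +/- v1
```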
In the orthogonal iteration method for computing multiple singular vectors, what is done after each application of \( B \) to the subspace?
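A sketch of orthogonal iteration under the same assumed setup \( B = A^T A \); the step the question refers to is the re-orthonormalization (here via a QR factorization) performed after each application of \( B \):

```python
import numpy as np

def orthogonal_iteration(A, r, k=100, rng=None):
    """Subspace analogue of power iteration: apply B = A^T A to an n x r
    block and re-orthonormalize its columns after every application, so
    the columns do not all collapse onto the top singular direction."""
    rng = np.random.default_rng() if rng is None else rng
    Q = np.linalg.qr(rng.standard_normal((A.shape[1], r)))[0]
    for _ in range(k):
        Q, _ = np.linalg.qr(A.T @ (A @ Q))   # orthonormalize after applying B
    return Q   # columns approximate the top r right singular vectors
```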
What is the main advantage of projecting data points onto the top right singular vectors before clustering?
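An illustrative sketch of the projection step (the helper name is hypothetical); a commonly cited advantage is that the projection cuts the ambient dimension, and with it much of the noise, while approximately preserving the cluster structure a clustering routine needs:

```python
import numpy as np

def project_onto_top_right_singular_vectors(X, r):
    """Project the rows of the n x d data matrix X onto its top r right
    singular vectors, producing n x r coordinates for clustering."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:r].T
```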
What does the truncated SVD \( Z = U_{(2)} \Sigma_{(2)} V_{(2)}^T \) correspond to?
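For the last question, a numpy sketch of the rank-\( 2 \) truncation; by the Eckart–Young theorem, \( Z \) is a best rank-\( 2 \) approximation to the original matrix in both the spectral and Frobenius norms:

```python
import numpy as np

def truncated_svd(X, r=2):
    """Rank-r truncated SVD Z = U_(r) Sigma_(r) V_(r)^T, a best rank-r
    approximation to X (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]   # scale columns of U, then recombine
```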