Let \( \alpha_1, \dots, \alpha_n \) be data points in \( \mathbb{R}^m \). What is the objective of the best approximating subspace problem?
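For reference, the objective in its standard formulation (stated as a reminder; here \( V \) ranges over all \( k \)-dimensional subspaces of \( \mathbb{R}^m \)) is

\[
\min_{\substack{V \subseteq \mathbb{R}^m \\ \dim V = k}} \ \sum_{i=1}^{n} \operatorname{dist}(\alpha_i, V)^2,
\]

i.e., minimize the sum of squared distances from the data points to the subspace.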
Consider the data points \( \alpha_1 = (-2,2) \) and \( \alpha_2 = (3,-3) \). For \( k=1 \), what is the solution of the best approximating subspace problem?
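A quick numerical check of this example, sketched with NumPy (a tooling choice not made in the source):

```python
import numpy as np

A = np.array([[-2.0,  2.0],   # alpha_1^T
              [ 3.0, -3.0]])  # alpha_2^T

U, s, Vt = np.linalg.svd(A)
v1 = Vt[0]                    # leading right singular vector

# Both points lie on the line y = -x, so the best 1-dimensional
# subspace is span{(1, -1)} and the approximation error is zero.
print(v1)                     # +/- (1, -1)/sqrt(2)
print(s)                      # approximately [sqrt(26), 0]
```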
Let \( A \in \mathbb{R}^{n \times m} \) with rows \( \alpha_i^T \), \( i=1,\dots,n \). A solution to the best approximating subspace problem is obtained by solving:
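A hedged sketch of the standard recipe: stack the points as rows of \( A \), compute the SVD, and take the span of the leading \( k \) right singular vectors. The random data and the residual check below are illustrative assumptions, not part of the source.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 4, 2
A = rng.standard_normal((n, m))       # rows alpha_i^T

_, _, Vt = np.linalg.svd(A, full_matrices=False)
basis = Vt[:k]                        # orthonormal basis of a best k-dim subspace

# The sum of squared distances to the subspace equals the sum of the
# squared trailing singular values sigma_{k+1}^2 + ... + sigma_r^2.
P = basis.T @ basis                   # orthogonal projector onto the subspace
residual = A - A @ P
sigmas = np.linalg.svd(A, compute_uv=False)
print(np.allclose(np.sum(residual**2), np.sum(sigmas[k:]**2)))  # True
```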
What is the significance of the leading right singular vectors in the SVD?
In the singular value decomposition \( A = \sum_{j=1}^r \sigma_j u_j v_j^T \), the vectors \( u_j \) are called:
Which of the following is true about the SVD of a matrix \( A \)?
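The defining relations can be checked numerically; this is an illustrative sketch on random data (NumPy and the example sizes are assumptions):

```python
import numpy as np

A = np.random.default_rng(1).standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

print(np.allclose(A @ Vt.T, U * s))       # A v_j = sigma_j u_j
print(np.allclose(U.T @ U, np.eye(3)))    # left singular vectors are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(3)))  # right singular vectors are orthonormal
print(np.allclose(A, (U * s) @ Vt))       # A = sum_j sigma_j u_j v_j^T
```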
What is the difference between a compact SVD and a full SVD?
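In NumPy the distinction shows up in the `full_matrices` flag of `np.linalg.svd` (a minimal sketch; for this full-rank example the thin SVD returned by NumPy coincides with the compact one):

```python
import numpy as np

A = np.random.default_rng(2).standard_normal((6, 3))

U1, s1, V1t = np.linalg.svd(A, full_matrices=False)  # compact/thin: U1 is 6x3
U, s, Vt = np.linalg.svd(A, full_matrices=True)      # full:         U  is 6x6

print(U1.shape, U.shape)                 # (6, 3) (6, 6)
# Both factorizations reconstruct A; the full SVD pads U (and Sigma)
# with extra orthonormal columns spanning the complement of col(A).
print(np.allclose(A, (U1 * s1) @ V1t))   # True
```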
Let \( A \) have SVD \( A = U \Sigma V^T \). Which of the following is true?
Which of the following is NOT a property of the matrix \( A^TA \) in the context of the SVD?
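As a reminder of the connection, here is an illustrative check on random data (an assumption, not from the source) that \( A^TA \) is symmetric positive semidefinite with eigenvalues \( \sigma_j^2 \):

```python
import numpy as np

A = np.random.default_rng(3).standard_normal((5, 3))
s = np.linalg.svd(A, compute_uv=False)
eigvals = np.linalg.eigvalsh(A.T @ A)        # ascending order

print(np.allclose(np.sort(s**2), eigvals))   # eigenvalues of A^T A are sigma_j^2
print(np.all(eigvals >= -1e-12))             # positive semidefinite
```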
The columns of \( U \) in the compact SVD form an orthonormal basis for:
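A small sanity check, sketched on a deliberately rank-deficient example (the construction and tolerance are assumptions): projecting onto the range of \( U_1 \) leaves \( A \) unchanged, consistent with the columns of \( U \) spanning \( \operatorname{col}(A) \).

```python
import numpy as np

B = np.random.default_rng(5).standard_normal((6, 3))
A = B @ np.ones((3, 3))                  # rank-1 matrix: all columns identical

U1, s1, V1t = np.linalg.svd(A, full_matrices=False)
r = np.sum(s1 > 1e-10)                   # numerical rank (here r = 1)
U1 = U1[:, :r]                           # compact SVD keeps only r columns

print(np.allclose(U1 @ (U1.T @ A), A))   # projector onto range(U1) fixes col(A)
```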
Let \( A \in \mathbb{R}^{n \times m} \) have compact SVD \( A = U_1 \Sigma_1 V_1^T \). To obtain a full SVD, we complete \( U_1 \) to an orthonormal basis \( U = (U_1 \ U_2) \) of \( \mathbb{R}^n \) and \( V_1 \) to an orthonormal basis \( V = (V_1 \ V_2) \) of \( \mathbb{R}^m \). Then the columns of \( U_2 \) form an orthonormal basis of:
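A hedged numerical check on a random rank-deficient example (sizes and tolerance are assumptions): the columns of \( U_2 \) are annihilated by \( A^T \), i.e. they span the null space of \( A^T \), the orthogonal complement of \( \operatorname{col}(A) \).

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # 5x4, rank 2

U, s, Vt = np.linalg.svd(A)          # full SVD: U is 5x5
r = np.sum(s > 1e-10)                # numerical rank (here r = 2)
U1, U2 = U[:, :r], U[:, r:]

print(np.allclose(A.T @ U2, 0))      # columns of U2 lie in null(A^T)
print(np.allclose(U1.T @ U2, 0))     # U2 is orthogonal to range(U1) = col(A)
```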