Sparsity and Dimension Reduction
Damn, I messed up and threw away my first page of notes ( ̄艸 ̄)
Dimension Reduction

So far we've talked about what constrains the number of iterations we have to run for an optimization algorithm to converge. However, when we really talk about running time, we multiply the number of iterations by the time of doing one gradient update step. Take the case of Gradient Descent: one step computes

$$w_{t+1} = w_t - \alpha \nabla f(w_t),$$

which touches every coordinate of $w_t \in \mathbb{R}^d$, so each iteration costs time proportional to the dimension $d$.
This is the same for other algorithms too. Therefore, to speed up computation, we also want to make the dimension $d$ small. So we introduce dimension reduction: we want to transform our original problem in dimension $d$ into one in a smaller dimension $d' < d$.
Usually, dimensionality reduction is done using unsupervised learning (learning without using labels $y$). Some examples are:
Random projection: choose a $d'$-dimensional linear subspace at random and project onto it.
Principal Component Analysis (PCA): find the directions along which the data varies the most (these represent the data best), keep those components, and throw out the rest.
Autoencoders: learn a lower-dimensional representation of the data using a neural network.
LDA, t-SNE, word embeddings...
Random Projection - JL Transform

This is also called the Johnson-Lindenstrauss transform, or just JL transform, because it is a consequence of the Johnson-Lindenstrauss lemma.
For an arbitrary error tolerance $\epsilon \in (0, 1)$, for arbitrary points $x_1, \dots, x_n \in \mathbb{R}^d$, for any target dimension $d' = \Omega(\log(n) / \epsilon^2)$, there exists a linear map / matrix $A \in \mathbb{R}^{d' \times d}$ s.t. for all pairs $i, j$:

$$(1 - \epsilon)\,\|x_i - x_j\|^2 \;\le\; \|A x_i - A x_j\|^2 \;\le\; (1 + \epsilon)\,\|x_i - x_j\|^2.$$
This property is so nice exactly because it doesn't depend on the original dimension $d$, only on the number of points $n$ and the error tolerance $\epsilon$.
Finding A

How can we find this matrix, though? We will use a random matrix $A \in \mathbb{R}^{d' \times d}$ where each entry is drawn i.i.d. from the normal distribution

$$A_{ij} \sim \mathcal{N}\!\left(0, \tfrac{1}{d'}\right),$$

i.e., standard Gaussian entries scaled by $1/\sqrt{d'}$.
For this random matrix $A$, calculate the expected value of two points' squared distance after transformation (note $A$ is the only thing random here; $x_i$ and $x_j$ are just fixed constants). Writing $u = x_i - x_j$,

$$\mathbb{E}\big[\|Ax_i - Ax_j\|^2\big] = \mathbb{E}\big[\|Au\|^2\big] = \sum_{k=1}^{d'} \mathbb{E}\Big[\Big(\sum_{l=1}^{d} A_{kl} u_l\Big)^2\Big] = \sum_{k=1}^{d'} \sum_{l=1}^{d} \frac{u_l^2}{d'} = \|u\|^2 = \|x_i - x_j\|^2,$$

where the cross terms vanish because the entries are independent with mean zero.
From this derivation, we see that as long as we set the variance of each entry to $1/d'$, we are guaranteed to draw an $A$ that in expectation meets the requirement, i.e., $\mathbb{E}[\|Ax_i - Ax_j\|^2] = \|x_i - x_j\|^2$. (Turning this into the high-probability guarantee of the JL lemma additionally requires a concentration argument, which is where the $d' = \Omega(\log(n)/\epsilon^2)$ condition comes in.)
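To make this concrete, here is a minimal numpy sketch (not from the original notes) of the Gaussian random projection just described; `n`, `d`, and `eps` are made-up values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, eps = 1000, 10_000, 0.3            # number of points, original dimension, tolerance (made up)
d_prime = int(8 * np.log(n) / eps**2)    # target dimension on the order of log(n) / eps^2

X = rng.standard_normal((n, d))          # stand-in data, one point per row

# Random projection matrix with i.i.d. N(0, 1/d') entries,
# i.e., standard Gaussians scaled by 1 / sqrt(d').
A = rng.standard_normal((d_prime, d)) / np.sqrt(d_prime)

Y = X @ A.T                              # projected points, now in R^{d'}

# Check distance preservation on one random pair of points.
i, j = rng.choice(n, size=2, replace=False)
ratio = np.sum((Y[i] - Y[j]) ** 2) / np.sum((X[i] - X[j]) ** 2)
print(ratio)                             # should be within roughly (1 - eps, 1 + eps)
```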
A Caveat

While using Gaussian random variables is sufficient, it is not very computationally efficient to generate the dense matrix $A$, communicate it, and multiply by it. There are other constructions (for example, sparse random projections and fast JL transforms) that achieve the same effect more cheaply, so if you want to use random projection at scale, you should consider using those methods.
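As one concrete alternative (named here for illustration, not in the notes): an Achlioptas-style sparse sign projection replaces the dense Gaussian matrix with entries in $\{-1, 0, +1\}$ that are zero two thirds of the time, which makes $A$ cheaper to generate, store, and multiply by. A sketch:

```python
import numpy as np

def sparse_sign_projection(d_prime, d, rng):
    """Achlioptas-style JL matrix: entries are +1 w.p. 1/6, 0 w.p. 2/3, -1 w.p. 1/6,
    scaled by sqrt(3 / d') so that E[||Ax||^2] = ||x||^2."""
    signs = rng.choice([1.0, 0.0, -1.0], size=(d_prime, d), p=[1/6, 2/3, 1/6])
    return np.sqrt(3.0 / d_prime) * signs

rng = np.random.default_rng(0)
A = sparse_sign_projection(200, 5000, rng)    # made-up dimensions
x = rng.standard_normal(5000)
print(np.sum((A @ x) ** 2) / np.sum(x ** 2))  # close to 1 on average
```

In practice you would also store $A$ in a sparse format (see the sparsity section below) so that the multiplication skips the zero entries entirely.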
Autoencoder

We feed our data point $x$ into an encoder to get a latent vector $z$ (of lower dimension than $x$), and feed this latent vector into a decoder that tries to reconstruct the original data point as $\hat{x}$. The training loss is just the mean squared reconstruction error $\|x - \hat{x}\|^2$.
This is a classic example of unsupervised learning, where you don't have a label $y$ associated with each training input.
When we use an autoencoder for dimension reduction, after training we throw away the decoder and use only the encoder.
Unlike random projection, the autoencoder comes with no theoretical guarantee whatsoever about why or how well it works.
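A minimal sketch of such an autoencoder, assuming PyTorch; the 784-dimensional input, 32-dimensional latent space, and training-loop details are all made-up choices for illustration:

```python
import torch
import torch.nn as nn

d, d_latent = 784, 32           # input and latent dimensions (made-up values)

encoder = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, d_latent))
decoder = nn.Sequential(nn.Linear(d_latent, 256), nn.ReLU(), nn.Linear(256, d))

loss_fn = nn.MSELoss()          # mean squared reconstruction error
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

X = torch.randn(512, d)         # stand-in for real training data (no labels anywhere)

for epoch in range(10):
    z = encoder(X)              # latent vectors
    X_hat = decoder(z)          # reconstructions
    loss = loss_fn(X_hat, X)
    opt.zero_grad()
    loss.backward()
    opt.step()

# For dimension reduction, keep only the encoder:
low_dim_X = encoder(X).detach()
```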
Sparsity

Previously, with dimension reduction, we dealt with the high computational cost brought by high dimension by reducing to a lower dimension. With sparsity, we deal with it by working in the same dimension but at a lower cost. Define the density of a vector or matrix as the fraction of its entries that are non-zero; we call it sparse when the density is small, i.e., most entries are zero.
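For instance, a quick numpy check of this density quantity on a small made-up matrix:

```python
import numpy as np

M = np.array([[0.0, 0.0, 5.4, 0.0],
              [0.0, 1.7, 0.0, 0.0]])

density = np.count_nonzero(M) / M.size
print(density)   # 2 non-zeros out of 8 entries -> 0.25
```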
Some common ways to make a model sparse:
- L1 regularization (see the soft-thresholding sketch below)
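One standard way to see why L1 regularization produces exact zeros (not spelled out in these notes) is the soft-thresholding operator, which solves the scalar L1-regularized problem. A small sketch:

```python
import numpy as np

def soft_threshold(v, lam):
    """Solve argmin_w 0.5 * (w - v)^2 + lam * |w|, applied elementwise.
    Any coordinate with |v| <= lam is set exactly to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

w = np.array([0.03, -0.8, 0.002, 1.5, -0.04])
print(soft_threshold(w, lam=0.05))   # small coordinates become exactly 0 -> a sparse vector
```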
Sparse Vector

If we have a sparse vector, we can represent it as a list of (non-zero index, non-zero value) pairs.

On a computer, this is just two arrays:
- indices = [4, 7]
- values = [2.8, 3.7]
In this way, we have turned a vector of 8 floats into one length-2 integer array and one length-2 float array.
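A small sketch of this representation, plus a dot product that touches only the non-zero entries (the dense vector shown is a made-up example consistent with the arrays above; indices are 1-based to match the notes):

```python
import numpy as np

# Dense vector with non-zeros at positions 4 and 7 (1-based).
x_dense = np.array([0.0, 0.0, 0.0, 2.8, 0.0, 0.0, 3.7, 0.0])

# Sparse representation: two parallel arrays.
indices = np.array([4, 7])
values  = np.array([2.8, 3.7])

def sparse_dot(indices, values, y):
    """Dot product of the sparse vector with a dense vector y, touching only the non-zeros."""
    return sum(v * y[i - 1] for i, v in zip(indices, values))

y = np.arange(1.0, 9.0)                 # some dense vector [1, 2, ..., 8]
print(sparse_dot(indices, values, y))   # 2.8*4 + 3.7*7 = 37.1
print(x_dense @ y)                      # same result from the dense representation
```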
Sparse Matrix - COO

To store a sparse matrix, we use a Coordinate List (COO), where compared to the sparse vector representation we just add another indices array for the rows, so each non-zero entry is stored as a (row, column, value) triple.

On a computer, this is just three arrays:
- row = [1, 1, 2]
- column = [3, 6, 2]
- values = [5.4, 3.6, 1.7]
Here each non-zero number in the matrix is represented by 3 numbers in the sparse representation. So we need density below 1/3 in the original matrix to get any memory benefit. In fact, with all the other memory overheads, we usually need density below about 1/4 to get a memory benefit.

However, this representation is hard to compute with: notice that the entries are in no particular order, so the only way to locate a value is to scan through the whole sparse representation. Counting this computational overhead, we typically need density below about 2% to get a real benefit.
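Here is a sketch of that COO matrix in scipy; note that scipy uses 0-based indices, so the 1-based arrays from the notes are shifted down by one, and the 2 x 6 shape is just an assumption for illustration:

```python
import numpy as np
from scipy.sparse import coo_matrix

# 1-based (row, column, value) triples from the notes.
row    = np.array([1, 1, 2])
column = np.array([3, 6, 2])
values = np.array([5.4, 3.6, 1.7])

# scipy is 0-based, so shift the indices; assume a 2 x 6 matrix for illustration.
M = coo_matrix((values, (row - 1, column - 1)), shape=(2, 6))

print(M.toarray())
# row 1: 5.4 in column 3 and 3.6 in column 6; row 2: 1.7 in column 2
```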
Sparse Matrix Improved - CSR

So we have an improved format that goes by Compressed Sparse Row (CSR). It basically stores each row as a sparse vector and concatenates them together, keeping track of where each row starts in a row_offsets array for quicker lookup.
- Store rows as sparse vectors:
- indices: [3, 6] [2]
- values: [5.4, 3.6] [1.7]
- Concatenate these row vectors, recording the row offsets:
- row_offsets = [1, 3]
- (column) indices = [3, 6, 2]
- values = [5.4, 3.6, 1.7]
For consistency with the matrix indices, assume these arrays are 1-indexed. Then s = row_offsets[i] means that the i-th row starts at the s-th element of the indices/values arrays, and ends just before the row_offsets[i+1]-th element (which is where the next row starts). In practice one usually appends one extra offset, here 4, to mark the end of the last row.

For example, if we want to know what's in row 1, we look up row_offsets[1] = 1 and row_offsets[1+1] = 3: row 1 starts at the 1st element of the indices/values arrays, and the 3rd element is the first element of the second row, so row 1 consists of just the 1st and 2nd elements. Looking those up, we find (3, 5.4) and (6, 3.6). That is, in the 1st row, the 3rd column holds the non-zero value 5.4 and the 6th column holds the non-zero value 3.6.
This representation format allows faster lookup and computation.
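To make the lookup concrete, here is a small sketch of CSR row access and matrix-vector multiplication using the arrays above, keeping the notes' 1-based convention and appending one extra offset (4) to mark the end of the last row; the 6-column shape is assumed:

```python
import numpy as np

# CSR arrays from the notes, kept 1-based; the trailing 4 marks one past the last element.
row_offsets = [1, 3, 4]
col_indices = [3, 6, 2]
values      = [5.4, 3.6, 1.7]

def csr_row(i):
    """Return the (column, value) pairs of row i (1-based)."""
    start, end = row_offsets[i - 1], row_offsets[i]   # end is exclusive
    return [(col_indices[k - 1], values[k - 1]) for k in range(start, end)]

print(csr_row(1))   # [(3, 5.4), (6, 3.6)]
print(csr_row(2))   # [(2, 1.7)]

def csr_matvec(x):
    """Multiply the sparse matrix by a dense vector x, touching only the non-zeros."""
    n_rows = len(row_offsets) - 1
    y = np.zeros(n_rows)
    for i in range(1, n_rows + 1):
        y[i - 1] = sum(v * x[c - 1] for c, v in csr_row(i))
    return y

x = np.arange(1.0, 7.0)      # dense vector [1, 2, ..., 6], assuming 6 columns
print(csr_matvec(x))         # [5.4*3 + 3.6*6, 1.7*2] = [37.8, 3.4]
```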