YAN's BLOG

Bayesian Methods EM

2018-12-31

Expectation-maximization algorithm

In this assignment, we will derive and implement formulas for Gaussian Mixture Model — one of the most commonly used methods for performing soft clustering of the data.

Installation

We will need the ```numpy```, ```scipy```, ```scikit-learn``` and ```matplotlib``` libraries for this assignment.

```python
import numpy as np
from numpy.linalg import slogdet, det, solve
import matplotlib.pyplot as plt
import time
from sklearn.datasets import load_digits
from grader import Grader
%matplotlib inline
```

Grading

We will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside the grader and will be uploaded to the platform only after running the submit function in the last part of this assignment. If you want to make a partial submission, you can run that cell anytime you want.

```python
grader = Grader()
```

Implementing EM for GMM

For debugging, we will use samples from a Gaussian mixture model with unknown means, variances and priors. We also added initial values of the parameters for grading purposes.

```python
samples = np.load('samples.npz')
X = samples['data']
pi0 = samples['pi0']
mu0 = samples['mu0']
sigma0 = samples['sigma0']
plt.scatter(X[:, 0], X[:, 1], c='grey', s=30)
plt.axis('equal')
plt.show()
print(pi0)
print(mu0)
print(sigma0)
```

(Figure: scatter plot of the data points.)

```
[0.3451814  0.6066179  0.04820071]
[[-0.71336192  0.90635089]
 [ 0.76623673  0.82605407]
 [-1.32368279 -1.75244452]]
[[[ 1.00490413  1.89980228]
  [ 1.89980228  4.18354574]]

 [[ 1.96867815  0.78415336]
  [ 0.78415336  1.83319942]]

 [[ 0.19316335 -0.11648642]
  [-0.11648642  1.98395967]]]
```

Reminder

Remember that the EM algorithm is a coordinate ascent optimization of the variational lower bound $\mathcal{L}(\theta, q) = \int q(T) \log\frac{P(X, T|\theta)}{q(T)}dT\to \max$.

E-step:

$\mathcal{L}(\theta, q) \to \max\limits_{q} \Leftrightarrow \mathcal{KL} [q(T) \,|\, p(T|X, \theta)] \to \min \limits_{q\in Q} \Rightarrow q(T) = p(T|X, \theta)$

M-step:

$\mathcal{L}(\theta, q) \to \max\limits_{\theta} \Leftrightarrow \mathbb{E}_{q(T)}\log p(X,T | \theta) \to \max\limits_{\theta}$

For GMM, $\theta$ is a set of parameters that consists of mean vectors $\mu_c$, covariance matrices $\Sigma_c$ and priors $\pi_c$ for each component.

Latent variables $T$ are indices of the components to which each data point is assigned. $T_i$ (cluster index for object $i$) is a binary vector with only one active bit, in the position corresponding to the true component. For example, if we have $C=3$ components and object $i$ lies in the first component, then $T_i = [1, 0, 0]$.

The joint distribution can be written as follows: $p(T, X \mid \theta) = \prod\limits_{i=1}^N p(T_i, X_i \mid \theta) = \prod\limits_{i=1}^N \prod\limits_{c=1}^C [\pi_c \mathcal{N}(X_i \mid \mu_c, \Sigma_c)]^{T_{ic}}$.

E-step

In this step we need to estimate the posterior distribution over the latent variables with fixed values of the parameters: $q(T) = p(T|X, \theta)$. We will assume that $T_i$ (cluster index for object $i$) is a binary vector with only one '1', in the position corresponding to the true component. To do so, we need to compute $\gamma_{ic} = P(T_{ic} = 1 \mid X, \theta)$. Note that $\sum\limits_{c=1}^C\gamma_{ic}=1$.
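With the parameters fixed, Bayes' rule gives these responsibilities in closed form:

$$\gamma_{ic} = \frac{\pi_c \, \mathcal{N}(X_i \mid \mu_c, \Sigma_c)}{\sum\limits_{k=1}^{C} \pi_k \, \mathcal{N}(X_i \mid \mu_k, \Sigma_k)}$$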

Important trick 1: It is important to avoid numerical errors. At some point you will have to compute a formula of the following form: $\frac{e^{x_i}}{\sum_j e^{x_j}}$. When you compute exponents of large numbers, you get huge numerical errors (some numbers will simply become infinity). You can avoid this by dividing the numerator and denominator by $e^{\max(x)}$: $\frac{e^{x_i-\max(x)}}{\sum_j e^{x_j - \max(x)}}$. After this transformation the maximum value in the denominator will be equal to one, and all other terms will contribute smaller values. This trick is called log-sum-exp. So, to compute the desired formula, you first subtract the maximum value from each component of the vector $x$ and then compute everything else as before.
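As an aside, here is a minimal NumPy sketch of this stabilization; the helper name ```softmax_rows``` is purely illustrative and is not part of the assignment template:

```python
import numpy as np

def softmax_rows(logits):
    """Stable row-wise e^{x_i} / sum_j e^{x_j} (illustrative helper, not part of the template)."""
    shifted = logits - np.max(logits, axis=1, keepdims=True)  # subtract the row maximum
    exp = np.exp(shifted)                                     # largest exponent is now exp(0) = 1
    return exp / np.sum(exp, axis=1, keepdims=True)

# Values this large would overflow np.exp if used directly
print(softmax_rows(np.array([[1000.0, 1001.0, 999.0]])))  # finite probabilities that sum to 1
```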

Important trick 2: You will probably need to compute a formula of the form $A^{-1}x$ at some point. You would normally invert $A$ and then multiply it by $x$. A faster and more numerically accurate way is to solve the equation $Ay = x$. Its solution is $y=A^{-1}x$, but the equation $Ay = x$ can be solved by the Gaussian elimination procedure. You can use ```np.linalg.solve``` for this.
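A quick illustrative comparison of the two approaches (not part of the template):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

y_inv = np.linalg.inv(A) @ x     # explicit inverse: slower and less accurate
y_solve = np.linalg.solve(A, x)  # solves A y = x by Gaussian elimination
print(np.allclose(y_inv, y_solve))  # True
```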
Other useful functions: [```slogdet```](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.slogdet.html) and [```det```](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.det.html#numpy.linalg.det)
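For instance, a Gaussian log-density can be assembled from ```slogdet``` and ```solve``` directly. The helper ```gaussian_logpdf``` below is a hypothetical sketch (not part of the template) and assumes the covariance matrix is positive definite:

```python
import numpy as np
from numpy.linalg import slogdet, solve

def gaussian_logpdf(X, mu, Sigma):
    """Log-density of N(mu, Sigma) at each row of X (illustrative sketch, not part of the template)."""
    d = X.shape[1]
    sign, logdet = slogdet(Sigma)  # sign is +1 for a positive definite Sigma
    diff = X - mu  # (N x d)
    maha = np.sum(diff * solve(Sigma, diff.T).T, axis=1)  # (x - mu)^T Sigma^{-1} (x - mu) per row
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)
```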

Task 1: Implement the E-step for GMM using the template below.


```python
from scipy.stats import multivariate_normal

def E_step(X, pi, mu, sigma):
    """
    Performs E-step on GMM model
    Each input is numpy array:
        X: (N x d), data points
        pi: (C), mixture component weights
        mu: (C x d), mixture component means
        sigma: (C x d x d), mixture component covariance matrices

    Returns:
        gamma: (N x C), probabilities of clusters for objects
    """
    N = X.shape[0]  # number of objects
    C = pi.shape[0]  # number of clusters
    d = mu.shape[1]  # dimension of each object
    gamma = np.zeros((N, C))  # distribution q(T)

    ### YOUR CODE HERE
    # Unnormalized responsibilities: p(x_i | t_i = c) for every object and component
    pX_given_t = np.zeros((N, C))
    for c in range(C):
        model = multivariate_normal(mean=mu[c, :], cov=sigma[c, :])
        pX_given_t[:, c] = model.pdf(X)

    # Multiply by the priors and normalize each row to obtain gamma_ic
    pX_given_t *= pi
    gamma = pX_given_t / np.sum(pX_given_t, axis=1, keepdims=True)

    return gamma
```

```python
gamma = E_step(X, pi0, mu0, sigma0)
print(gamma.shape)
grader.submit_e_step(gamma)
```
```
(280, 3)
Current answer for task Task 1 (E-step) is: 0.5337178741081263
```

M-step

In the M-step we need to maximize $\mathbb{E}_{q(T)}\log p(X,T \mid \theta)$ with respect to $\theta$. In our model this means that we need to find the optimal values of $\pi$, $\mu$, $\Sigma$. To do so, you need to compute the derivatives and set them to zero. You should start by deriving the formulas for $\mu$ as it is the easiest part. Then move on to $\Sigma$. Here it is crucial to optimize the function w.r.t. $\Lambda = \Sigma^{-1}$ and then invert the obtained result. Finally, to compute $\pi$, you will need the Lagrange multipliers technique to satisfy the constraint $\sum\limits_{c=1}^{C}\pi_c = 1$.



Important note: You will need to compute derivatives of scalars with respect to matrices. To refresh this technique from previous courses, see the Wikipedia article on matrix calculus. The main formulas for matrix derivatives can be found in Chapter 2 of The Matrix Cookbook. For example, there you may find that $\frac{\partial}{\partial A}\log |A| = A^{-T}$.
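For reference, setting these derivatives to zero (and handling the constraint on $\pi$ with a Lagrange multiplier) yields the standard updates, where $N_c = \sum_{i=1}^{N}\gamma_{ic}$:

$$\mu_c = \frac{1}{N_c}\sum_{i=1}^{N}\gamma_{ic} X_i, \qquad \Sigma_c = \frac{1}{N_c}\sum_{i=1}^{N}\gamma_{ic}(X_i - \mu_c)(X_i - \mu_c)^T, \qquad \pi_c = \frac{N_c}{N}$$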

Task 2: Implement the M-step for GMM using the template below.

```python
def M_step(X, gamma):
    """
    Performs M-step on GMM model
    Each input is numpy array:
        X: (N x d), data points
        gamma: (N x C), distribution q(T)

    Returns:
        pi: (C)
        mu: (C x d)
        sigma: (C x d x d)
    """
    N = X.shape[0]  # number of objects
    C = gamma.shape[1]  # number of clusters
    d = X.shape[1]  # dimension of each object

    ### YOUR CODE HERE
    pi = np.zeros(C)
    mu = np.zeros((C, d))
    sigma = np.zeros((C, d, d))
    for c in range(C):
        # Effective number of points assigned to component c
        p_posterior_t = np.sum(gamma[:, c])
        # Responsibility-weighted mean
        mu[c, :] = np.sum(gamma[:, c].reshape(N, 1) * X, axis=0) / p_posterior_t
        # Responsibility-weighted covariance around the new mean
        sigma[c, :] = np.sum([gamma[i, c] * np.outer(X[i, :] - mu[c, :], X[i, :] - mu[c, :]) for i in range(N)], axis=0) / p_posterior_t
        pi[c] = p_posterior_t / N
    return pi, mu, sigma
```
```python
gamma = E_step(X, pi0, mu0, sigma0)
pi, mu, sigma = M_step(X, gamma)
grader.submit_m_step(pi, mu, sigma)
```
```
Current answer for task Task 2 (M-step: mu) is: 2.899391882050384
Current answer for task Task 2 (M-step: sigma) is: 5.9771052168975265
Current answer for task Task 2 (M-step: pi) is: 0.5507624459218775
```

Loss function

Finally, we need some function to track convergence. We will use the variational lower bound $\mathcal{L}$ for this purpose. We will stop our EM iterations when $\mathcal{L}$ saturates. Usually, you will need only about 10-20 iterations to converge. It is also useful to check that this function never decreases during training. If it does, you have a bug in your code.

Task 3: Implement a function that will compute $\mathcal{L}$ using the template below.

$$\mathcal{L} = \sum_{n=1}^{N} \sum_{k=1}^{K} \mathbb{E}[z_{n, k}] (\log \pi_k + \log \mathcal{N}(x_n | \mu_k, \sigma_k)) - \sum_{n=1}^{N} \sum_{k=1}^{K} \mathbb{E}[z_{n, k}] \log \mathbb{E}[z_{n, k}]$$

```python
def compute_vlb(X, pi, mu, sigma, gamma):
    """
    Each input is numpy array:
        X: (N x d), data points
        gamma: (N x C), distribution q(T)
        pi: (C)
        mu: (C x d)
        sigma: (C x d x d)

    Returns value of variational lower bound
    """
    N = X.shape[0]  # number of objects
    C = gamma.shape[1]  # number of clusters
    d = X.shape[1]  # dimension of each object

    ### YOUR CODE HERE
    loss = 0
    for c in range(C):
        model = multivariate_normal(mu[c], sigma[c], allow_singular=True)
        for n in range(N):
            # E[z_nc] * (log pi_c + log N(x_n | mu_c, sigma_c) - log E[z_nc])
            loss += gamma[n, c] * (np.log(pi[c]) + model.logpdf(X[n, :]) - np.log(gamma[n, c]))

    return loss
```
```python
pi, mu, sigma = pi0, mu0, sigma0
gamma = E_step(X, pi, mu, sigma)
pi, mu, sigma = M_step(X, gamma)
loss = compute_vlb(X, pi, mu, sigma, gamma)
grader.submit_VLB(loss)
```
```
Current answer for task Task 3 (VLB) is: -1213.973464306017
```

Bringing it all together

Now that we have the E-step, the M-step and the VLB, we can implement the training loop. We will start at random values of $\pi$, $\mu$ and $\Sigma$, train until $\mathcal{L}$ stops changing, and return the resulting points. We also know that the EM algorithm sometimes stops at local optima. To avoid this we should restart the algorithm multiple times from different starting positions. Each training trial should stop either when the maximum number of iterations is reached or when the relative improvement is smaller than a given tolerance ($\left|\frac{\mathcal{L}_i-\mathcal{L}_{i-1}}{\mathcal{L}_{i-1}}\right| \le \text{rtol}$).

Remember that the values of $\pi$ that you generate must be non-negative and sum up to 1. Also, the $\Sigma$ matrices must be symmetric and positive semi-definite. If you don't know how to generate those matrices, you can use $\Sigma=I$ as the initialization.
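One possible way to draw valid random parameters is sketched below; the helper ```random_init``` is hypothetical, and the construction $AA^T + I$ simply guarantees a symmetric positive definite covariance:

```python
import numpy as np

def random_init(C, d, rng=np.random):
    """Draw one valid random GMM initialization (hypothetical helper, not part of the template)."""
    pi = rng.rand(C)
    pi = pi / pi.sum()                            # non-negative and sums to 1
    mu = rng.randn(C, d)                          # arbitrary starting means
    A = rng.randn(C, d, d)
    sigma = A @ A.transpose(0, 2, 1) + np.eye(d)  # symmetric positive definite by construction
    return pi, mu, sigma
```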

You will also sometimes get numerical errors because of component collapsing. The easiest way to deal with this problem is to simply restart the procedure.

Task 4: Implement the training procedure.

```python
import math

def train_EM(X, C, rtol=1e-3, max_iter=100, restarts=10):
    '''
    Starts with random initialization *restarts* times
    Runs optimization until saturation with *rtol* reached
    or *max_iter* iterations were made.

    X: (N, d), data points
    C: int, number of clusters
    '''
    N = X.shape[0]  # number of objects
    d = X.shape[1]  # dimension of each object
    best_loss = -np.inf
    best_pi = None
    best_mu = None
    best_sigma = None

    mu = np.zeros((C, d))
    for _ in range(restarts):
        try:
            ### YOUR CODE HERE
            pi = np.array([1.0 / C] * C, dtype=np.float32)
            # pi = np.array([0.35, 0.35, 0.3])
            # mu = np.random.rand(C, d)
            mu[0, :] = np.array([1, 1])
            mu[1, :] = np.array([1, 6])
            mu[2, :] = np.array([7, 4])
            # sigma_ = np.random.rand(C, d, d)
            # sigma = np.array([np.dot(A, A.T) for A in sigma_])
            sigma = np.array([np.identity(d)] * C)
            prev_loss = None
            for i in range(max_iter):
                gamma = E_step(X, pi, mu, sigma)
                pi, mu, sigma = M_step(X, gamma)
                pi = pi / np.sum(pi)
                loss = compute_vlb(X, pi, mu, sigma, gamma)
                if not math.isnan(loss) and loss > best_loss:
                    best_loss = loss
                    best_mu = mu
                    best_pi = pi
                    best_sigma = sigma
                # print("Iteration {}, loss: {}".format(i, loss))

                # Stop when the relative improvement of the lower bound falls below rtol
                if prev_loss is not None:
                    diff = np.abs((loss - prev_loss) / prev_loss)
                    if diff < rtol:
                        break
                prev_loss = loss
        except np.linalg.LinAlgError:
            print("Singular matrix: components collapsed")
            continue

    return best_loss, best_pi, best_mu, best_sigma
```
```python
best_loss, best_pi, best_mu, best_sigma = train_EM(X, 3)
grader.submit_EM(best_loss)
```
```
Current answer for task Task 4 (EM) is: -1063.811767605055
```

If you implemented all the steps correctly, your algorithm should converge in about 20 iterations. Let's plot the clusters to see the result. We will assign each point the label of its most probable cluster, which can be read off the matrix $\gamma$ computed on the last E-step.

```python
gamma = E_step(X, best_pi, best_mu, best_sigma)
labels = gamma.argmax(1)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=30)
plt.axis('equal')
plt.show()
```

(Figure: scatter plot of the data, colored by the most probable cluster.)

Authorization & Submission

To submit assignment parts to the Coursera platform, please enter your e-mail and your token into the variables below. You can generate the token on this programming assignment page. Note: the token expires 30 minutes after generation.

```python
STUDENT_EMAIL = ''
STUDENT_TOKEN = ''
grader.status()
```
```
You want to submit these numbers:
Task Task 1 (E-step): 0.5337178741081263
Task Task 2 (M-step: mu): 2.899391882050384
Task Task 2 (M-step: sigma): 5.9771052168975265
Task Task 2 (M-step: pi): 0.5507624459218775
Task Task 3 (VLB): -1213.973464306017
Task Task 4 (EM): -1063.811767605055
```

If you want to submit these answers, run the cell below.

```python
grader.submit(STUDENT_EMAIL, STUDENT_TOKEN)
```
```
You used an invalid email or your token may have expired. Please make sure you have entered all fields correctly. Try generating a new token if the issue still persists.
```