- Introduction
- Review: Gaussian Mixture Model
- Solving the Model: Maximum Likelihood Estimation
  - Scenario Description
  - Organizing the Model Parameters
  - Solution Process
The previous section introduced the Gaussian Mixture Model (GMM). This section solves for the model's parameters.
Review: Gaussian Mixture Model
Viewed from the perspective of probabilistic generative models, the generative process of the probability model $P(\mathcal X)$ is as follows:
- Introduce a latent variable $\mathcal Z$, a parameterized discrete distribution over $\mathcal K$ values. Its labels and the corresponding probability distribution $P(\mathcal Z)$ are:

  | $\mathcal Z$ | $z_1$ | $z_2$ | $\cdots$ | $z_{\mathcal K}$ |
  | --- | --- | --- | --- | --- |
  | $P(\mathcal Z)$ | $p_1$ | $p_2$ | $\cdots$ | $p_{\mathcal K}$ |

  which satisfies $\sum_{k=1}^{\mathcal K} p_{k} = 1$.
- Each $z_j \in \mathcal Z$ corresponds to exactly one Gaussian distribution $\mathcal N(\mu_j,\Sigma_j)$, so there are $\mathcal K$ Gaussian distributions in total:

  | $\mathcal Z$ | $z_1$ | $z_2$ | $\cdots$ | $z_{\mathcal K}$ |
  | --- | --- | --- | --- | --- |
  | $P(x \mid \mathcal Z = z_k)$ | $\mathcal N(\mu_1,\Sigma_1)$ | $\mathcal N(\mu_2,\Sigma_2)$ | $\cdots$ | $\mathcal N(\mu_{\mathcal K},\Sigma_{\mathcal K})$ |

  In mathematical notation: $P(x \mid z_k) \sim \mathcal N(\mu_k,\Sigma_k) \quad (k=1,2,\cdots,\mathcal K;\; x \in \mathcal X)$
- For any sample $x \in \mathcal X$, the generative process is then: $\mathcal N(x \mid \mu_k,\Sigma_k)$ denotes a sample point $x$ randomly generated from the Gaussian distribution $\mathcal N(\mu_k,\Sigma_k)$, so that
  $$P(x) = \sum_{\mathcal Z}P(x \mid \mathcal Z)P(\mathcal Z) = \sum_{k=1}^{\mathcal K}\mathcal N(x \mid \mu_k,\Sigma_k)\cdot p_k$$
- The generative process of the whole sample set $\mathcal X$ (i.e., the probability model) follows, where $\mathcal N(\mathcal X \mid \mu_k,\Sigma_k)$ denotes samples $\mathcal X$ generated from the Gaussian distribution $\mathcal N(\mu_k,\Sigma_k)$:
  $$\begin{aligned} P(\mathcal X) & = \sum_{\mathcal Z} P(\mathcal X \mid \mathcal Z)P(\mathcal Z) \\ & = \sum_{k=1}^{\mathcal K} P(\mathcal X \mid \mathcal Z=z_k)P(\mathcal Z = z_k) \\ & = \sum_{k=1}^{\mathcal K}\mathcal N(\mathcal X \mid \mu_k,\Sigma_k)\cdot p_k \quad \left(\sum_{k=1}^{\mathcal K}p_k = 1 \right) \end{aligned}$$
  A minimal sampler implementing this two-stage process appears right after this list.
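To make the two-stage process concrete, here is a minimal sampler sketch. The component count, dimensionality, and all parameter values are illustrative assumptions, not part of the derivation above:

```python
# Minimal sketch of the GMM generative process: first draw a label z ~ P(Z),
# then draw x from the Gaussian N(mu_z, Sigma_z) that z corresponds to.
# K = 2 components in d = 2 dimensions; all parameter values are made up.
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.4, 0.6])                         # mixing weights, sum to 1
mus = np.array([[0.0, 0.0], [3.0, 3.0]])         # mu_1, mu_2
Sigmas = np.stack([np.eye(2), 0.5 * np.eye(2)])  # Sigma_1, Sigma_2

def sample_gmm(n):
    """Draw n samples; also return the latent labels z^(i)."""
    z = rng.choice(len(p), size=n, p=p)  # stage 1: pick a label
    x = np.stack([rng.multivariate_normal(mus[k], Sigmas[k]) for k in z])
    return x, z  # the pair (X, Z) is the "complete data" defined below

X, Z = sample_gmm(500)
```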
Solving the Model: Maximum Likelihood Estimation

Scenario Description
Unlike Gaussian Discriminant Analysis (GDA), a supervised classification model, the Gaussian Mixture Model handles samples that carry no label information. From the dataset's point of view, it therefore contains only the samples themselves. Assume the dataset contains $N$ samples and that any two samples are independent and identically distributed (i.i.d.):
$$\mathcal X = \{x^{(1)},x^{(2)},\cdots,x^{(N)}\} \qquad x^{(i)} \overset{\text{i.i.d.}}{\sim} x^{(j)} \quad (x^{(i)},x^{(j)} \in \mathcal X)$$
Based on the generative process of the sample set, every sample $x^{(i)}$ corresponds to a latent variable $z^{(i)}$. In terms of the number of latent variables, $\mathcal Z$ is expressed as:
$$\mathcal Z = \{z^{(1)},z^{(2)},\cdots,z^{(N)}\}$$
In terms of the latent variable's dimension, each $z^{(i)}\;(i=1,2,\cdots,N)$ has $\mathcal K$ possible choices:
$$z^{(i)} \in \{z_1,z_2,\cdots,z_{\mathcal K}\} \quad (i=1,2,\cdots,N)$$
The pair $(\mathcal X,\mathcal Z)$ is called the complete data.
Organizing the Model Parameters
Solving the Gaussian Mixture Model is, at its core, still solving for the parameters $\theta$ of the probability model $P(\mathcal X \mid \theta)$. The first method that comes to mind is naturally Maximum Likelihood Estimation (MLE).
First, what exactly are the model parameters $\theta$? Looking at the generative process, each sample is generated through two random steps:
- randomly select a discrete-distribution label from the latent variable $\mathcal Z$;
- conditioned on that label, randomly generate a sample from the Gaussian distribution the label corresponds to.
This process involves two kinds of parameters:
- the probabilities of selecting each discrete-distribution label: $p_1,p_2,\cdots,p_{\mathcal K} \quad \left(\sum_{k=1}^{\mathcal K} p_k = 1\right)$
- the parameters of the Gaussian distribution corresponding to each label: $(\mu_1,\Sigma_1),(\mu_2,\Sigma_2),\cdots,(\mu_{\mathcal K},\Sigma_{\mathcal K})$
Therefore, the model parameters $\theta$ can be written as a parameter set (one possible container for it is sketched below): $$\theta = \{p_1,p_2,\cdots,p_{\mathcal K},\mu_1,\mu_2,\cdots,\mu_{\mathcal K},\Sigma_1,\Sigma_2,\cdots,\Sigma_{\mathcal K}\}$$
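As a bookkeeping aid, the whole parameter set can be bundled into one object. A minimal sketch, assuming numpy arrays of the stated shapes; the container layout itself is an illustrative choice, not a fixed convention:

```python
# One possible container for theta = {p_1..p_K, mu_1..mu_K, Sigma_1..Sigma_K}.
from dataclasses import dataclass
import numpy as np

@dataclass
class GMMParams:
    p: np.ndarray       # (K,)      label probabilities, sum to 1
    mus: np.ndarray     # (K, d)    component means
    Sigmas: np.ndarray  # (K, d, d) component covariances

theta = GMMParams(p=np.array([0.4, 0.6]),
                  mus=np.array([[0.0, 0.0], [3.0, 3.0]]),
                  Sigmas=np.stack([np.eye(2), 0.5 * np.eye(2)]))
```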
Solution Process
The maximum likelihood estimate is written as follows; using the i.i.d. assumption between samples, it expands as:
$$\begin{aligned} \hat \theta_{MLE} & = \mathop{\arg\max}\limits_{\theta} \log P(\mathcal X \mid \theta) \\ & = \mathop{\arg\max}\limits_{\theta} \log \left[\prod_{i=1}^{N} P(x^{(i)} \mid \theta)\right] \\ & = \mathop{\arg\max}\limits_{\theta} \sum_{i=1}^N \log P(x^{(i)} \mid \theta) \end{aligned}$$
Treating $P(x^{(i)})$ as the generative process of sample $x^{(i)}$ under the Gaussian Mixture Model, the expression above becomes: $$\hat \theta_{MLE} = \mathop{\arg\max}\limits_{\theta} \sum_{i=1}^N \log \left[\sum_{k=1}^{\mathcal K} \mathcal N(x^{(i)}\mid \mu_k,\Sigma_k) \cdot p_k\right]$$
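Although (as shown next) this objective admits no closed-form maximizer, it is straightforward to evaluate numerically for a given $\theta$. A sketch using scipy, with `X`, `p`, `mus`, `Sigmas` as in the sampler above; `logsumexp` keeps the inner sum stable in log-space:

```python
# Evaluate L(theta) = sum_i log sum_k N(x^(i) | mu_k, Sigma_k) * p_k.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def log_likelihood(X, p, mus, Sigmas):
    # log_joint[i, k] = log p_k + log N(x^(i) | mu_k, Sigma_k)
    log_joint = np.stack(
        [np.log(p[k]) + multivariate_normal.logpdf(X, mus[k], Sigmas[k])
         for k in range(len(p))], axis=1)
    return logsumexp(log_joint, axis=1).sum()  # sum over samples i
```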
This form, with a sum inside the $\log$, cannot be simplified; in fact, if we press on and solve for $\theta$, we find that no analytical solution exists. As an example, we attempt to solve analytically for the first element of the parameter set $\theta$: $p_1$.
- Let the likelihood function be $\mathcal L(\theta)$, defined as: $$\mathcal L(\theta) = \sum_{i=1}^N \log \left[\sum_{k=1}^{\mathcal K} \mathcal N(x^{(i)}\mid \mu_k,\Sigma_k) \cdot p_k\right] \quad (\theta = \{p_1,p_2,\cdots,p_{\mathcal K},\mu_1,\mu_2,\cdots,\mu_{\mathcal K},\Sigma_1,\Sigma_2,\cdots,\Sigma_{\mathcal K}\})$$
- Expanding this expression gives: $$\begin{aligned} \mathcal L(p_1,\cdots,p_{\mathcal K},\mu_1,\cdots,\mu_{\mathcal K},\Sigma_1,\cdots,\Sigma_{\mathcal K}) & = \sum_{i=1}^{N} \log \left[\mathcal N(x^{(i)} \mid \mu_1,\Sigma_1) \cdot p_1 + \cdots + \mathcal N(x^{(i)} \mid \mu_{\mathcal K},\Sigma_{\mathcal K}) \cdot p_{\mathcal K}\right] \\ & = \log \left[\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1) \cdot p_1 + \cdots + \mathcal N(x^{(1)} \mid \mu_{\mathcal K},\Sigma_{\mathcal K}) \cdot p_{\mathcal K}\right] + \cdots + \log \left[\mathcal N(x^{(N)} \mid \mu_1,\Sigma_1) \cdot p_1 + \cdots + \mathcal N(x^{(N)} \mid \mu_{\mathcal K},\Sigma_{\mathcal K}) \cdot p_{\mathcal K}\right] \end{aligned}$$
- The expression above contains $N$ summands, each of which involves $p_1$. Take the partial derivative of $\mathcal L(\theta)$ with respect to $p_1$, treating $p_2,\cdots,p_{\mathcal K},\mu_1,\cdots,\mu_{\mathcal K},\Sigma_1,\cdots,\Sigma_{\mathcal K}$ as constants. To simplify the steps, consider the first summand:
  $$\begin{aligned} & \frac{\partial}{\partial p_1}\log \left[\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1) \cdot p_1 + \cdots + \mathcal N(x^{(1)} \mid \mu_{\mathcal K},\Sigma_{\mathcal K}) \cdot p_{\mathcal K}\right] \\ & = \frac{1}{\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1) \cdot p_1 + \cdots + \mathcal N(x^{(1)} \mid \mu_{\mathcal K},\Sigma_{\mathcal K}) \cdot p_{\mathcal K}}\cdot \frac{\partial}{\partial p_1}\left[\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1) \cdot p_1 + \cdots + \mathcal N(x^{(1)} \mid \mu_{\mathcal K},\Sigma_{\mathcal K}) \cdot p_{\mathcal K}\right] \end{aligned}$$
  Of the bracketed terms, only the first contains $p_1$; the rest do not. Therefore:
  $$\frac{\partial}{\partial p_1}\left[\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1) \cdot p_1 + \cdots + \mathcal N(x^{(1)} \mid \mu_{\mathcal K},\Sigma_{\mathcal K}) \cdot p_{\mathcal K}\right] = \frac{\partial}{\partial p_1} \left[\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1) \cdot p_1\right] = \mathcal N(x^{(1)} \mid \mu_1,\Sigma_1)$$
  So the derivative of the first summand of $\mathcal L(\theta)$ with respect to $p_1$ is:
  $$\frac{\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1)}{\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1) \cdot p_1 + \cdots + \mathcal N(x^{(1)} \mid \mu_{\mathcal K},\Sigma_{\mathcal K}) \cdot p_{\mathcal K}}$$
  Likewise, summing over all $N$ summands of $\mathcal L(\theta)$:
  $$\frac{\partial \mathcal L(\theta)}{\partial p_1} = \sum_{i=1}^N \frac{\mathcal N(x^{(i)} \mid \mu_1,\Sigma_1)}{\sum_{k=1}^{\mathcal K} \mathcal N(x^{(i)} \mid \mu_k,\Sigma_k)\cdot p_k}$$
  Setting $\frac{\partial \mathcal L(\theta)}{\partial p_1} \triangleq 0$ would require $\mathcal N(x^{(1)} \mid \mu_1,\Sigma_1) = \mathcal N(x^{(2)} \mid \mu_1,\Sigma_1) = \cdots = \mathcal N(x^{(N)} \mid \mu_1,\Sigma_1) = 0$, but a Gaussian density is strictly positive everywhere, so every summand is positive and the equation has no solution; we therefore obtain no analytical solution for $p_1$ (see the numeric check after this list). Likewise, no analytical solution can be obtained for the other model parameters.
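This failure can be checked numerically: every summand of $\frac{\partial \mathcal L(\theta)}{\partial p_1}$ is a ratio of positive quantities, so the unconstrained derivative is always positive and never crosses zero. A sketch reusing `X`, `p`, `mus`, `Sigmas` from the earlier snippets:

```python
# dL/dp_1 = sum_i N(x^(i)|mu_1,Sigma_1) / sum_k N(x^(i)|mu_k,Sigma_k) p_k.
# Each summand is > 0 (Gaussian densities are positive everywhere),
# so the derivative cannot vanish and p_1 has no stationary point.
import numpy as np
from scipy.stats import multivariate_normal

num = multivariate_normal.pdf(X, mus[0], Sigmas[0])        # N(x|mu_1,Sigma_1)
den = sum(p[k] * multivariate_normal.pdf(X, mus[k], Sigmas[k])
          for k in range(len(p)))                          # mixture density
dL_dp1 = np.sum(num / den)
print(dL_dp1 > 0)  # True for any theta and any data
```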
Using maximum likelihood estimation to solve for the Gaussian Mixture Model's parameters $\theta$ thus fails.
The next section will introduce solving for the Gaussian Mixture Model's parameters with the EM algorithm.
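For a quick practical preview (not part of this derivation): scikit-learn's `GaussianMixture` fits exactly this model with the EM algorithm. Assuming `X` from the sampler sketch above:

```python
# Fit a 2-component GMM by EM and inspect the recovered parameters.
from sklearn.mixture import GaussianMixture

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.weights_)      # estimates of p_k
print(gm.means_)        # estimates of mu_k
print(gm.covariances_)  # estimates of Sigma_k
```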
Reference: 机器学习-高斯混合模型(2)-极大似然