I am puzzled by how the Gibbs sampler in Section 6 of Escobar & West (1995) works. In simple terms, the aim is to sample $\alpha$. The defined terms are $$\eta\sim \texttt{Beta}(a,b)$$ and $$\alpha \sim \pi\, \texttt{Gamma}(\theta,f(\eta))+(1-\pi)\,\texttt{Gamma}(\theta-1,f(\eta)).$$ The paper says (with a bit of simplification):

It is now clear how $\alpha$ can be sampled at each stage of the simulation. At each Gibbs iteration, we first sample $\eta$ from the defined Beta distribution, and then use the sampled $\eta$ and the fixed $\theta$ to sample $\alpha$ from the mixture of Gamma distributions.
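To make sure I follow the sampling step, here is a minimal sketch of this two-step update in the simplified notation above; the values of $a$, $b$, $\theta$, $\pi$ and the rate function $f(\eta)$ are placeholders I made up (in the paper they depend on $n$, $k$, and the Gamma prior on $\alpha$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder values for the simplified notation above; in the paper
# these quantities depend on n, k, and the Gamma prior on alpha.
a, b = 2.0, 4.0    # Beta parameters for eta
theta = 3.0        # Gamma shape
pi_w = 0.5         # mixture weight (eta-dependent in the paper)

def f(eta):
    # Hypothetical rate function f(eta); positive for eta in (0, 1).
    return 1.0 - np.log(eta)

def gibbs_step():
    """One Gibbs update for alpha: draw eta, then alpha given eta."""
    eta = rng.beta(a, b)
    rate = f(eta)
    # Pick a mixture component, then draw from the corresponding Gamma.
    shape = theta if rng.random() < pi_w else theta - 1.0
    # numpy parameterizes the Gamma by shape and scale = 1/rate.
    return eta, rng.gamma(shape, 1.0 / rate)

draws = [gibbs_step() for _ in range(5000)]
etas, alphas = map(np.array, zip(*draws))
```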

The confusing bit is this:

On completion of the simulation, $p(\alpha|\texttt{Data})$ will be estimated by the usual Monte Carlo average $p(\alpha|\texttt{Data})\approx \frac{1}{N}\sum_{s=1}^{N}p(\alpha|\theta,\eta_s)$, where $\eta_1,\dots,\eta_N$ are the sampled values of $\eta$.

Knowing that the aim here was to sample $\alpha$, why do we need to estimate $p(\alpha|\texttt{Data})$ at all? We already have a sample of $\alpha$ values, so what is the need to estimate their density? Also, I am not sure why we can plug in all the sampled values of $\eta$ into this estimate: shouldn't one use only the sampled $\eta$ from which the corresponding $\alpha$ was drawn?

My only explanation: given all the sampled $\alpha$ values (collect them in a set $S$), for each sampled $\alpha$ we compute its posterior density $p(\alpha|\texttt{Data})$ by averaging $p(\alpha|\theta,\eta_s)$ over the sampled $\eta_s$ from all the Gibbs iterations. This way each sampled $\alpha$ in $S$ gets a Monte Carlo averaged posterior density estimate, and these accumulated estimates over $S$ would then be what is used to characterize $\alpha$. Is this the correct explanation?
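If it helps pin down my question, here is what I believe the averaging formula computes, continuing the sketch above (same placeholder parameters): since the conditional density $p(\alpha|\theta,\eta_s)$ is a known mixture of two Gammas, it can be evaluated on a grid of $\alpha$ values and averaged over all sampled $\eta_s$, giving a smooth (Rao-Blackwellized) density estimate rather than a histogram of the $\alpha$ draws:

```python
from scipy.stats import gamma

# Grid of alpha values on which to evaluate the density estimate.
alpha_grid = np.linspace(0.01, 10.0, 500)

def cond_density(alpha_vals, eta):
    """The mixture-of-Gammas density p(alpha | theta, eta)."""
    scale = 1.0 / f(eta)
    return (pi_w * gamma.pdf(alpha_vals, theta, scale=scale)
            + (1.0 - pi_w) * gamma.pdf(alpha_vals, theta - 1.0, scale=scale))

# Average the conditional density over ALL sampled eta_s; this is the
# Monte Carlo average in the quoted formula, not a histogram of alphas.
post_hat = np.mean([cond_density(alpha_grid, e) for e in etas], axis=0)
```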

Escobar, M. D., & West, M. (1995). Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90(430), 577–588.