
The "focal" association between category $i$ of one nominal variable and category $j$ of the other is expressed by the frequency residual in cell $ij$, as we know. If the residual is 0, the frequency is exactly what is expected when the two nominal variables are not associated. The larger the positive residual, the stronger the association due to the overrepresentation of combination $ij$ in the sample; a large negative residual equivalently signals an underrepresented combination. So the frequency residual is what you want.

Raw residuals are not suitable, though, because they depend on the marginal totals, the overall total, and the table size: the value is not standardized in any way. But SPSS can also display standardized residuals, also called Pearson residuals. The standardized residual is the raw residual divided by an estimate of its standard deviation (equal to the square root of the expected frequency). Standardized residuals of a table have mean 0 and standard deviation 1; a standardized residual therefore serves as a z-value, like a z-value in the distribution of a quantitative variable (actually, it is z in the Poisson distribution). Standardized residuals are comparable between different tables of the same size and the same total $N$. The chi-square statistic of a contingency table is the sum of its squared standardized residuals. Comparing standardized residuals within a table, and across tables of the same size and total, helps identify the particular cells that contribute most to the chi-square statistic.
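As a quick numeric illustration (made-up counts, NumPy/SciPy rather than SPSS), here is a sketch of the quantities just described: expected frequencies under independence, raw and standardized residuals, and the check that the squared standardized residuals sum to the chi-square statistic.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3x3 contingency table (illustrative numbers only)
obs = np.array([[20., 30., 10.],
                [15., 25., 20.],
                [ 5., 10., 15.]])

# Expected counts under independence: row total * column total / grand total
N = obs.sum()
expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / N

raw_resid = obs - expected                  # raw frequency residuals
std_resid = raw_resid / np.sqrt(expected)   # standardized (Pearson) residuals

# Chi-square statistic is the sum of the squared standardized residuals
chi2 = (std_resid ** 2).sum()
chi2_check, p, dof, exp_check = chi2_contingency(obs, correction=False)

print(np.round(std_resid, 3))
print(chi2, chi2_check)  # the two values agree
```

The cell with the largest absolute standardized residual is the one contributing most to chi-square.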

SPSS also displays adjusted residuals (= adjusted standardized residuals). The adjusted residual is the raw residual divided by an estimate of its standard error. Interestingly, the adjusted residual is exactly $\sqrt{N}r_{ij}$, where $N$ is the grand total and $r_{ij}$ is the Pearson correlation (alias phi correlation) between the dummy variables corresponding to categories $i$ and $j$ of the two nominal variables. This $r$ is exactly what you say you want to compute; the adjusted residual is directly related to it.

Unlike the standardized residual, the adjusted residual is also standardized with respect to the shape of the marginal distributions in the table (it takes into account the expected frequencies not only in that cell but also in the cells outside its row and its column), so you can directly see the strength of the tie between categories $i$ and $j$ without worrying about whether their marginal totals are big or small relative to the other categories'. The adjusted residual is also like a z-score, but now a z of the normal (not Poisson) distribution. If the adjusted residual is above 2 or below -2, you may conclude it is significant at the p<0.05 level$^1$. Adjusted residuals are still affected by $N$; $r$'s are not, but you can obtain all the $r$s from the adjusted residuals by the above formula, without spending time producing dummy variables.$^2$
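A sketch with made-up counts, verifying the identity mentioned above: the adjusted residual in each cell equals $\sqrt{N}$ times the phi correlation between the corresponding row and column dummy variables, so all the $r_{ij}$ can be recovered as $\text{adj}_{ij}/\sqrt{N}$.

```python
import numpy as np

# Hypothetical 2x3 table (illustrative numbers only)
obs = np.array([[30., 20., 10.],
                [10., 25., 25.]])

N = obs.sum()
row = obs.sum(axis=1)          # row totals R_i
col = obs.sum(axis=0)          # column totals C_j
expected = np.outer(row, col) / N

# Adjusted residual: raw residual divided by its standard error
adj = (obs - expected) / np.sqrt(
    expected * np.outer(1 - row / N, 1 - col / N))

# Phi (Pearson) correlation between the dummy "is in row i" and "is in col j"
def phi(i, j):
    O, R, C = obs[i, j], row[i], col[j]
    return (N * O - R * C) / np.sqrt(R * (N - R) * C * (N - C))

# Check: adjusted residual equals sqrt(N) * r_ij in every cell
for i in range(obs.shape[0]):
    for j in range(obs.shape[1]):
        assert np.isclose(adj[i, j], np.sqrt(N) * phi(i, j))

print(np.round(adj / np.sqrt(N), 3))  # the table of r_ij values
```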

In regard to your second question, about 3-way category ties: this is possible as part of general loglinear analysis, which also displays residuals. However, the practical use of 3-way cell residuals is modest: 3(+)-way association measures are not easily standardized and are not easily interpretable.

$^1$ In the standard normal curve, $1.96 \approx 2$ is the cut-point of the 2.5% tail, so 5% if you consider both tails, as with a 2-sided alternative hypothesis.

$^2$ It follows that the significance of the adjusted residual in cell $ij$ equals the significance of $r_{ij}$. Besides, if there are only 2 columns in the table and you perform a z-test of proportions between $\text {Pr}(i,1)$ and $\text {Pr}(i,2)$, the column proportions for row $i$, the p-value of that test equals the significance of both (either) adjusted residuals in row $i$ of the 2-column table.
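The last claim of the footnote can be checked numerically. The sketch below (made-up counts, pooled two-sample z-test computed by hand rather than via any SPSS procedure) shows that for a 2-column table the p-value of the z-test comparing row $i$'s two column proportions coincides with the two-sided normal significance of the adjusted residuals in that row.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 4x2 table (illustrative numbers only)
obs = np.array([[12., 20.],
                [18., 10.],
                [ 9., 15.],
                [11.,  5.]])

N = obs.sum()
row = obs.sum(axis=1)
col = obs.sum(axis=0)
expected = np.outer(row, col) / N
adj = (obs - expected) / np.sqrt(
    expected * np.outer(1 - row / N, 1 - col / N))

i = 0  # take row 0: compare Pr(i,1) vs Pr(i,2)
p1, p2 = obs[i, 0] / col[0], obs[i, 1] / col[1]
p_pool = row[i] / N  # pooled proportion for the z-test
z = (p1 - p2) / np.sqrt(p_pool * (1 - p_pool) * (1 / col[0] + 1 / col[1]))

p_ztest = 2 * norm.sf(abs(z))        # two-sided p of the proportions z-test
p_adj = 2 * norm.sf(abs(adj[i, 0]))  # significance of the adjusted residual

assert np.isclose(p_ztest, p_adj)  # the two p-values coincide
```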