I'm not much good with math either, especially the abstract parts, but let me try to explain what I meant. Then you (and anyone else coming across this) can judge whether this approach is correct and let me know, since I have been out of touch with statistics for some time.
Suppose the distribution under consideration is MN(n, p_1, p_2, p_3), where p_1 + p_2 + p_3 = 1, with the following PMF, where x_1 + x_2 + x_3 = n:
f(x_1, x_2, x_3; p_1, p_2, p_3) = \text{constant} \times p_1^{x_1} \times p_2^{x_2} \times p_3^{x_3}
For the unconstrained optimisation, the MLEs are \hat{p}_1 = \frac{x_1}{n}, \hat{p}_2 = \frac{x_2}{n}, \hat{p}_3 = \frac{x_3}{n}, and plugging these into the expression above gives the supremum in the denominator of the likelihood ratio.
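As a minimal sketch of this step (the counts x = [5, 3, 2] are hypothetical, just for illustration), the unconstrained MLEs and the resulting log-supremum can be computed with nothing but the standard library:

```python
import math

# Hypothetical example counts; x_1 + x_2 + x_3 = n
x = [5, 3, 2]
n = sum(x)

# Unconstrained MLEs: p_hat_i = x_i / n
p_hat = [xi / n for xi in x]

def multinomial_loglik(x, p):
    """log f(x; p) = log(n! / prod(x_i!)) + sum_i x_i * log(p_i)."""
    n = sum(x)
    log_coef = math.lgamma(n + 1) - sum(math.lgamma(xi + 1) for xi in x)
    # Skip zero counts so log(0) is never evaluated
    return log_coef + sum(xi * math.log(pi) for xi, pi in zip(x, p) if xi > 0)

# Supremum of the log-likelihood for the denominator
log_sup_denom = multinomial_loglik(x, p_hat)
print(p_hat, log_sup_denom)
```

Working on the log scale avoids underflow for large n; the constant (the multinomial coefficient) is included here, but it cancels in the likelihood ratio anyway.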
For the constrained optimisation, let's assume your null hypothesis is p_1 = p_2. In that case you can rewrite the PMF as follows:
g(x_1, x_2, x_3; q_1, q_2) = \text{another constant} \times q_1^{x_1 + x_2} \times q_2^{x_3}
where q_1 = p_1 = p_2 and q_2 = p_3, subject to 2q_1 + q_2 = 1 (q_1 counts twice in the normalisation). The MLEs are then \hat{q}_1 = \frac{x_1 + x_2}{2n} and \hat{q}_2 = \frac{x_3}{n}. Using these, you can find the supremum in the numerator.
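Putting the two suprema together gives the likelihood ratio statistic -2 log Λ, which under the null is approximately chi-square with 1 degree of freedom (one constraint imposed). A sketch with the same hypothetical counts as before; note that since q_1 appears twice in the normalisation, the constrained maximiser is \hat{q}_1 = (x_1 + x_2)/(2n):

```python
import math

def multinomial_loglik(x, p):
    """log f(x; p) = log(n! / prod(x_i!)) + sum_i x_i * log(p_i)."""
    n = sum(x)
    log_coef = math.lgamma(n + 1) - sum(math.lgamma(xi + 1) for xi in x)
    return log_coef + sum(xi * math.log(pi) for xi, pi in zip(x, p) if xi > 0)

x1, x2, x3 = 5, 3, 2  # hypothetical counts
x = [x1, x2, x3]
n = sum(x)

# Denominator: unconstrained MLEs p_hat_i = x_i / n
ll_unconstrained = multinomial_loglik(x, [xi / n for xi in x])

# Numerator: constrained MLEs under H0: p_1 = p_2 = q_1,
# with 2*q_1 + q_2 = 1, so q1_hat = (x1 + x2) / (2n)
q1_hat = (x1 + x2) / (2 * n)
q2_hat = x3 / n
ll_constrained = multinomial_loglik(x, [q1_hat, q1_hat, q2_hat])

# -2 log Lambda = 2 * (log sup_denominator - log sup_numerator)
lrt_stat = 2 * (ll_unconstrained - ll_constrained)

# Chi-square(1) survival function via the complementary error function
p_value = math.erfc(math.sqrt(lrt_stat / 2))
print(lrt_stat, p_value)
```

A large statistic (small p-value) would lead you to reject p_1 = p_2; with these made-up counts the evidence against the null is weak.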
Does any of this make sense?