It's quite possible that I misunderstand you somewhere, so I'll try to give an answer that touches on several points that relate to your question.

So, from what I understand, by subjective ethics you mean that personal taste, emotional state, and situational context can lead one person to a different moral conclusion than another, whereas objective ethics refers to a fact-based, measurable, reason-driven way to determine the one right solution to any given moral problem.

There's no fixed terminology for that. Generally, though, we can distinguish between our ethical beliefs and the ethical truths those beliefs aim at. Depending on our metaethical views, there might be "objective" (or at least universal) ethical truths, relative or subjectivist ethical truths, or simply no ethical truths. Relative or subjectivist here means that the truth value of ethical statements is relative to something, such as a culture or an individual.

Under some kinds of Moral Relativism, our opinions can influence the ethical truths that are relative to us; something like emotional state could then change the moral truths themselves. But if it does, there can't be one right solution, because the solution will depend on something: our culture, for example, or some base values we hold that aren't universal.

Currently, the view with a slim majority among philosophers is Moral Realism, which holds that there are universal ethical truths (in some sense). (Obviously that doesn't mean it's correct; it's just to give an idea of the state of the field.)

Emotional state could also play a role at the level of normative ethics. The context of a situation can matter under some normative views even if morality happens to be universal: either because the principle we use incorporates it, or because we subscribe to Moral Particularism and hold that moral truths can't be captured in general principles.

Your idea, as I understand it, is basically this: there's a giant table that takes in all the relevant facts about an individual, plus a few general rules about ethics, and from this finite collection of information outputs the correct moral response to that situation. Any pragmatic subjective or objective framework we come up with down here is then simply an approximation of this ideal one, similar to how a supervised machine learning algorithm describes a procedure for finding a function that approximates some ideal, "correct" function for a task.
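(As a side note, that supervised-learning analogy can be made concrete. Here's a minimal sketch, with a made-up target function standing in for the "ideal" one: a polynomial fit approximates an unknown "correct" function from finitely many examples, which is the role your giant table would play.)

```python
import numpy as np

# Stand-in for the ideal, "correct" function (unknown in practice).
def ideal(x: np.ndarray) -> np.ndarray:
    return np.sin(x)

# Finitely many observed (situation, correct response) pairs.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, size=50)
y_train = ideal(x_train)

# Fit a polynomial as our practical, approximating "framework".
coeffs = np.polyfit(x_train, y_train, deg=7)
framework = np.poly1d(coeffs)

# The approximation tracks the ideal on the sampled domain, but it isn't the ideal.
x_test = np.linspace(0, 2 * np.pi, 5)
print(np.round(framework(x_test), 3))
print(np.round(ideal(x_test), 3))
```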

If I understand correctly what you mean, then this wouldn't work: two senses of "correct response" clash. I'll go through two options, one against the background of Moral Realism, the other against the background of an unspecified Moral Relativism (which could be merely methodological).

If we want to establish a SINGLE correct response, then Moral Realism must be true. But then facts about the individual could only matter insofar as they provide context. We don't construct the right response by balancing moral beliefs; instead there's simply one right belief. Depending on which normative ethical theory is correct, wrong beliefs could still matter as context. Two examples:

If a kind of utilitarianism is correct, then facts about the individual would tell us how that individual would react to certain acts. For example, if person X thinks that lying is wrong, we lie to X, and they find out, this might change the result compared to person Y, who thinks lying can sometimes be okay, being lied to and finding out. Under an act utilitarianism, this might change which act is best depending on the individual it's directed towards. If, say, Kantian deontology is correct, then facts about the individual don't matter at all, because acts are considered apart from context under the Categorical Imperative.

If we want to establish a correct moral response out of two different moral beliefs, without discarding one as wrong by argument, then there can't be a single correct response. Any algorithm for combining them would need parameters that favour one view over the other, simply because the person setting the parameters holds some moral view that influences them; otherwise the parameters would be arbitrary. There's no independent standard by which to judge parameters "correct", so such an algorithm couldn't settle anything. Rather, in moral discourse, we sift out beliefs that contradict themselves, try to convince others of our view, or change our own.
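To make the parameter-dependence concrete, here's a toy sketch (the views, verdicts, and weights are all made up for illustration): a weighted aggregation of two opposing moral verdicts, where the "correct" output is entirely an artifact of the chosen weights.

```python
# Toy illustration: aggregating two opposing moral verdicts with weights.
# The weights are the "parameters" in question; with no independent
# standard for choosing them, the verdict is an artifact of that choice.

def aggregate(verdicts: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of per-view verdicts (+1 = permissible, -1 = impermissible)."""
    return sum(weights[view] * verdicts[view] for view in verdicts)

# Two views disagree about the same act.
verdicts = {"view_A": +1.0, "view_B": -1.0}

# Parameter choice 1: slightly favour view_A -> the act comes out permissible.
print(aggregate(verdicts, {"view_A": 0.6, "view_B": 0.4}))   # prints  0.2

# Parameter choice 2: slightly favour view_B -> the act comes out impermissible.
print(aggregate(verdicts, {"view_A": 0.4, "view_B": 0.6}))   # prints -0.2
```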

But the basis of your idea leads to a subfield you might find interesting: "decision under moral uncertainty". The question is this: if we can't settle by argument which moral belief is correct, how are we supposed to act?
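One prominent proposal in that literature is to maximize expected choiceworthiness (associated with MacAskill, Bykvist, and Ord). A minimal sketch, with made-up credences and scores, and noting that the approach assumes choiceworthiness is comparable across theories, which is itself contested:

```python
# Sketch of "maximize expected choiceworthiness" (MEC) under moral uncertainty.
# Credences and choiceworthiness scores below are illustrative inventions.

credences = {"utilitarianism": 0.6, "deontology": 0.4}

# choiceworthiness[theory][option]: how good each option is by each theory's lights.
choiceworthiness = {
    "utilitarianism": {"lie": 5.0, "tell_truth": 2.0},
    "deontology":     {"lie": -10.0, "tell_truth": 3.0},
}

def expected_choiceworthiness(option: str) -> float:
    """Credence-weighted average of the option's score across theories."""
    return sum(credences[t] * choiceworthiness[t][option] for t in credences)

options = ["lie", "tell_truth"]
for o in options:
    print(o, round(expected_choiceworthiness(o), 2))
print("MEC recommends:", max(options, key=expected_choiceworthiness))
# -> tell_truth under these made-up numbers (2.4 vs -1.0)
```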

We can't directly translate this issue to deciding on collective action when multiple people hold opposing beliefs. That becomes more a problem of political philosophy: how should a pluralistic society be politically arranged given opposing views on morality?