
Randomized polynomial-time algorithms with a boolean (decision) result are in the RP computational complexity class, which is a subset of NP, where non-deterministic polynomial-time decision algorithms reside, and a superset of P, where deterministic polynomial-time decision algorithms reside.

Containment between complexity classes is about reducing problems in one class to problems in another. Thus RP ⊆ NP does not exclude the possibility of randomized algorithms which are also non-deterministic, because by definition a superset contains the subset. Subset means every RP algorithm (or any RP-complete algorithm) can be reduced to some NP algorithm (or any NP-complete algorithm). P is a subset of RP because every problem in P can be reduced to a problem in RP where the amount of uncontrolled entropy is 0.
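To make the RP notion concrete, here is a minimal sketch (my own illustration; the function names are mine) in the style of polynomial identity testing, the classic RP-flavored problem. The decision question is "do these two polynomial black boxes differ?": a "yes" is only ever reported with a witness in hand, while a false "no" survives all trials with geometrically small probability.

```python
import random

def probably_different(p, q, degree_bound, trials=20):
    """RP-style one-sided-error test of whether two polynomial black
    boxes p and q differ somewhere.
    Distinct polynomials of degree <= d agree on at most d points, so
    each trial over a range of 10*d + 1 points exposes a difference
    with probability >= 9/10. Answers True only with a witness in
    hand (never a false "yes"); a false "no" survives all trials
    with probability <= (1/10)**trials."""
    range_size = 10 * degree_bound + 1
    for _ in range(trials):
        x = random.randrange(range_size)
        if p(x) != q(x):
            return True    # witness found: definitely different
    return False           # probably identical

# (x + 1)^2 vs x^2 + 2x + 1 are identical, so the answer is always False.
assert not probably_different(lambda x: (x + 1) ** 2,
                              lambda x: x * x + 2 * x + 1,
                              degree_bound=2)
```

Note the one-sidedness that defines RP: on identical inputs the algorithm never errs, and only the "probably identical" answer carries a (quantifiable) error probability.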

Tangentially, this is analogous to how every problem in NC (parallel computation) can be reduced to a problem in P by simulating the parallel computation serially. But the converse is not yet proven, i.e. it is not known that every problem in P is reducible to a problem in NC, nor is it disproven, which would require the (implausible) proof that some P-complete problem is not reducible to any problem in NC. It may be that some problems are inherently serial and can't be computed in parallel, but proving P ≠ NC seems implausible (for reasons too tangential to discuss in this answer).

More generally (i.e. not limited to boolean result types), randomized algorithms are distinguished from deterministic algorithms in that some of the entropy is externally sourced. Randomized algorithms are distinguished from non-deterministic algorithms because the entropy is bounded, and thus randomized (and not non-deterministic) algorithms can be proven to always terminate.

The unpredictability of nondeterministic algorithms is due to the inability to enumerate all the possible permutations of the input entropy (which results in unpredictability of termination). The unpredictability of a randomized algorithm is due to the inability to control all of the input entropy (which results in an unpredictable, i.e. indeterminate, result, although the rate of unpredictability can be quantified). Neither of these is a statement about unpredictability of the correct answer to the problem; rather, the unpredictability manifests in the side-channels of termination and of an indeterminate result, respectively. It seems many readers are conflating unpredictability in one area with unpredictability of the correct result, which is a conflation I never wrote (review the edit history).
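For instance, Freivalds' algorithm for verifying a matrix product (a standard example; this sketch is my own) uses a bounded number of coin flips, so it always terminates, and its rate of unpredictability is known in advance: a correct product is never rejected, and a wrong product survives each trial with probability at most 1/2, giving a failure rate of at most 2^-trials.

```python
import random

def freivalds(A, B, C, trials=30):
    """Check that A @ B == C for n x n matrices (lists of lists).
    Uses exactly n * trials random bits, so it always terminates.
    A correct C is never rejected; a wrong C slips through each
    trial with probability <= 1/2, so the overall error rate is
    bounded by 2**-trials and is fully predictable."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A @ (B @ r) and C @ r in O(n^2) each, instead of
        # the O(n^3) full product.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # certificate that A @ B != C
    return True            # probably correct

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert freivalds(A, B, [[19, 22], [43, 50]])   # the true product
```

The indeterminacy lives only in the "probably correct" branch, and even there its rate is predicted exactly, which is the point of the paragraph above.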

It is key to understand that non-determinism is always (in any science or usage of the term) the inability to enumerate universal (i.e. unbounded) entropy. Whereas randomization refers to accessing another source of entropy (in programs, entropy other than, and thus not under the control of, the input variables), which may or may not be unbounded.
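To illustrate that distinction in code (a minimal sketch of my own; the helper name is mine), compare a value drawn from entropy outside the program's defined input, i.e. the OS entropy pool behind /dev/random and /dev/urandom, with a PRNG stream that is a deterministic expansion of a seed supplied as input:

```python
import os
import random

# Entropy external to the program's defined input: the OS entropy
# pool (backed by /dev/urandom on Linux). The caller cannot control
# or reproduce these bytes from the input variables alone.
external_bytes = os.urandom(8)

def seeded_stream(seed, n):
    """Entropy coupled to the input: a PRNG seeded by an input value,
    so the whole stream is a deterministic expansion of that seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Same seed, same stream: this "randomness" is bounded and fully
# determined by the input, hence reproducible.
assert seeded_stream(42, 5) == seeded_stream(42, 5)
```

The first value varies from run to run; the second never does, which is the sense in which a PRNG merely simulates randomness.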

I added the following comment below the currently most popular answer to the other thread that asks a similar question.

All sciences use the same definition of nondeterminism unified on the concept of unbounded entropy. Unpredictable outcomes in all sciences are due to the inability to enumerate a priori all possible outputs of an algorithm (or system) because it accepts unbounded states, i.e. NP complexity class. Specifying a particular input to observe whether it halts and noting that the result is idempotent is equivalent in other sciences to holding the rest of the entropy of the universe constant while repeating the same state change. Computing allows this entropy isolation, while natural sciences don't.

Adding some of the best comments to clarify my point about the only salient distinction between randomized and nondeterministic.

It is really quite elegant and easy to see the distinction, once you all stop muddling it by trying to describe it from an operational point-of-view instead of from the salient entropy point-of-view.

@reinierpost everyone is conflating randomized with nondeterministic. This causes your comment to be muddled. The algorithm responds to the interaction of the input (variable) entropy and its source code's (invariant) internal entropy. Nondeterminism is unbounded entropy. Invariant entropy can even be internally unbounded, such as expanding the digits of π. Randomized means some of the entropy is not coupled to the input as defined (i.e. it may come from a system call to /dev/random, or from simulated randomness, e.g. an NFA or a PRNG).

.

@Raphael the formal definition of a non-deterministic finite automaton (NFA) is finite input entropy (data: the 5-tuple). Thus every NFA can run on a deterministic Turing machine, i.e. it doesn't require a nondeterministic Turing-complete machine. Thus NFAs are not in the class of nondeterministic problems. The notion of "nondeterminism" in an NFA is that its determinism (clearly present, since every NFA can be converted to a DFA) is not explicitly expanded, which is not the same as nondeterminism of computation.

.

@Raphael the claimed "non-determinism" in NFAs is really randomness in the sense of my definition of the distinction between randomness and nondeterminism. My definition is that randomness is where some of the entropy is not under the control or knowledge (or, in the case of an NFA, the desired non-explicit expansion) of the input to the program or function. Whereas true nondeterminism is the inability to know the entropy in any case, because it is unbounded. This is precisely what distinguishes randomized from nondeterministic. So an NFA should be an example of the former, not the latter as you claimed.

.

@Raphael as I explained already, the notion of non-determinism in NFAs couples the non-determinism with finite entropy. Thus the non-determinism is a local concept of not expanding the determinism, as a form of compression or convenience; thus we don't say NFAs are non-deterministic, rather they possess the appearance of randomness to an oracle unwilling to compute the deterministic expansion. But it is all a mirage, because it can all be expanded deterministically since the entropy is not unbounded, i.e. finite.
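To back up the claim that every NFA runs on a deterministic machine, here is a minimal sketch (my own illustration, with made-up state names) of the on-the-fly subset construction: a perfectly deterministic program that tracks the set of states the NFA could occupy, i.e. it performs the "deterministic expansion" explicitly.

```python
def nfa_accepts(delta, start, accepting, word):
    """Deterministically simulate an NFA by tracking the *set* of
    states it could be in (on-the-fly subset construction).
    delta maps (state, symbol) to a set of successor states."""
    current = {start}
    for symbol in word:
        # Union of successors over every state we might be in; no
        # guessing is needed because the entropy is finite.
        current = set().union(*(delta.get((q, symbol), set())
                                for q in current))
    return bool(current & accepting)

# Example NFA over {0, 1} accepting every word that ends in "01".
delta = {('s0', '0'): {'s0', 's1'},
         ('s0', '1'): {'s0'},
         ('s1', '1'): {'s2'}}
assert nfa_accepts(delta, 's0', {'s2'}, '001')
assert not nfa_accepts(delta, 's0', {'s2'}, '010')
```

The simulation is a plain deterministic function of its input, which is exactly why the "non-determinism" of an NFA is only a notational convenience.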

Dictionaries are tools. Learn to use them.

random (adjective, Statistics): of or characterizing a process of selection in which each item of a set has an equal probability of being chosen; being or relating to a set, or to an element of a set, each of whose elements has equal probability of occurrence.

Thus randomization only requires that some of the input entropy be equiprobable, which is congruent with my definition that some of the input entropy is not controlled by the caller of the function. Notice that randomization does not require that the input entropy be undecidable w.r.t. termination.

In computer science, a deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. Formally, a deterministic algorithm computes a mathematical function; a function has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output. Deterministic algorithms can be defined in terms of a state machine: a state describes what a machine is doing at a particular instant in time. State machines pass in a discrete manner from one state to another. Just after we enter the input, the machine is in its initial state or start state. If the machine is deterministic, this means that from this point onwards, its current state determines what its next state will be; its course through the set of states is predetermined. Note that a machine can be deterministic and still never stop or finish, and therefore fail to deliver a result.

So this is telling us that deterministic algorithms must be completely determined by the input state of the function, i.e. we must be able to prove that the function will terminate (or not terminate), and that can't be undecidable. In spite of Wikipedia's muddled attempt to describe nondeterminism, the only antithesis of deterministic as defined above by Wikipedia is algorithms whose input state (entropy) is ill-defined. And the only way the input state can be ill-defined is when it is unbounded (thus it can't be deterministically preanalyzed). This is precisely what distinguishes a nondeterministic Turing machine (and many real-world programs written in common Turing-complete languages such as C, Java, JavaScript, ML, etc.) from deterministic TMs and from programming languages such as HTML, spreadsheet formulas, Coq, Epigram, etc. Wikipedia sort of alludes to this:

In computational complexity theory, nondeterministic algorithms are ones that, at every possible step, can allow for multiple continuations (imagine a man walking down a path in a forest and, every time he steps further, he must pick which fork in the road he wishes to take). These algorithms do not arrive at a solution for every possible computational path; however, they are guaranteed to arrive at a correct solution for some path (i.e., the man walking through the forest may only find his cabin if he picks some combination of "correct" paths). The choices can be interpreted as guesses in a search process.

Wikipedia and others try to conflate randomization with nondeterminism, but what is the point of having the two concepts if you are not going to distinguish them?

Clearly determinism is about the ability to determine. Clearly randomization is about making some of the entropy equiprobable.

Including random entropy in the state of an algorithm doesn't necessarily make it indeterminable. For example, a PRNG can have the required equiprobable statistical distribution yet also be entirely deterministic.
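A minimal demonstration (my own sketch; the helper name is mine) that a seeded PRNG is simultaneously deterministic and statistically equiprobable:

```python
import random
from collections import Counter

def roll_stream(seed, n):
    """Roll a six-sided die n times from a PRNG seeded by the input."""
    rng = random.Random(seed)
    return [rng.randrange(6) for _ in range(n)]

# Entirely deterministic: the same seed reproduces the same stream.
assert roll_stream(1234, 60000) == roll_stream(1234, 60000)

# Yet the six outcomes occur with (statistically) equal probability:
# each count is close to the expected 10000.
counts = Counter(roll_stream(1234, 60000))
assert all(abs(counts[face] - 10000) < 1000 for face in range(6))
```

The stream satisfies the dictionary's "equal probability" criterion while being a pure function of its seed, which is exactly the separation of equiprobability from indeterminacy argued above.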

Conflating orthogonal concepts is sloppy thinking. I expect better than that from this community!