
In a pure mathematical sense, you could in principle create models of computation using any sort of recursively composable structure, so long as you can describe how it represents a transformation of suitably represented input data to output data. But in an applied mathematical sense — or more accurately, in an actual scientific sense — there is a question of whether such models of computation correspond to (i.e. model well) anything which is observed in practice (e.g. perhaps because we observe it in machines constructed to do the computations). We're confident that permutation matrices and stochastic matrices, composed by products on local systems, represent a feasible model of computation for transforming probability distributions. It is also accepted in principle that unitary transformations on unit-2-norm wave functions (composed in a similar way) are not unreasonable as a model of computation; showing that this is actually feasible is widely accepted as a (very challenging!) engineering problem.
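To make the parallel concrete, here is a minimal sketch (in Python with NumPy; the specific matrices are illustrative choices, not anything canonical): a stochastic matrix preserves the 1-norm of a probability vector, while a unitary preserves the 2-norm of an amplitude vector.

```python
import numpy as np

# A column-stochastic matrix: non-negative entries, columns summing to 1.
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p = np.array([0.3, 0.7])          # a probability distribution
q = S @ p
assert np.isclose(q.sum(), 1.0)   # total probability is preserved

# A unitary (here the Hadamard transform): U @ U.conj().T is the identity.
U = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)
psi = np.array([0.6, 0.8])        # a unit-2-norm wave function
phi = U @ psi
assert np.isclose(np.linalg.norm(phi), 1.0)  # 2-norm is preserved
```

In both cases "composition on local systems" just means taking products of such matrices (and tensor products for independent subsystems); the respective norm is preserved at every step.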

Both of these models of computation can be subsumed into the formalism of CPTP (completely positive, trace-preserving) super-operators — which map linear operators to other linear operators, preserve the trace, and map positive-semidefinite operators to positive-semidefinite operators even when acting on part of a larger system — and which in certain respects is a better way of describing quantum computation than by unitary transformations or projectors alone.
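A CPTP map can always be written in Kraus form, $\rho \mapsto \sum_k K_k \rho K_k^\dagger$ with $\sum_k K_k^\dagger K_k = \mathbf 1$. The sketch below (the amplitude-damping channel is just one illustrative choice of Kraus operators) checks the two defining properties on a sample density operator:

```python
import numpy as np

gamma = 0.3  # damping probability; an arbitrary illustrative value
K0 = np.array([[1, 0],
               [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)],
               [0, 0]])

# The completeness relation sum_k K_k^† K_k = I guarantees trace preservation.
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

def channel(rho):
    """Apply the CPTP map in Kraus form: rho -> sum_k K_k rho K_k^†."""
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[0.5, 0.5],
                [0.5, 0.5]])  # the density operator |+><+|
sigma = channel(rho)
assert np.isclose(np.trace(sigma), 1.0)             # trace preserved
assert np.all(np.linalg.eigvalsh(sigma) >= -1e-12)  # positivity preserved
```

Unitary evolution is the special case of a single Kraus operator $K_0 = U$, and stochastic evolution of a classical distribution corresponds to channels that keep density operators diagonal in a fixed basis.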

Whether there are strictly more general (in the sense of more powerful, while using the same sort of representation of the input and output data) models of computation than unitary transformations or CPTP super-operators is in essence a question of theoretical physics.

So the answer is "maybe — but we don't know yet, and don't have convincing reasons to believe in any particular one".