Q&A



What are the main differences to Unity’s current input system?

Unity’s current input system dates back to when keyboard, mouse, and gamepad were the only means of input for Unity games. The new system, by contrast, is designed to deal with any kind of input -- and output, too (for haptics, for example). Aside from this fundamental architectural difference, we’ve tried to solve a wide range of issues users have with the functionality of the current system, and to build a system that can cope with the challenges of input as it looks today.
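
To make the output side concrete, here is a minimal haptics sketch. It is written against the rumble API of the Input System package as it eventually shipped (Gamepad.SetMotorSpeeds); the prototype described in this document may expose this differently, so treat the names as assumptions about the general shape rather than this system’s final API:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class RumbleExample : MonoBehaviour
{
    void Start()
    {
        var gamepad = Gamepad.current;
        if (gamepad == null)
            return; // No gamepad connected; nothing to rumble.

        // Drive the low- and high-frequency rumble motors (values 0..1).
        gamepad.SetMotorSpeeds(0.25f, 0.75f);
    }

    void OnDisable()
    {
        // Stop any ongoing haptics when this component is disabled.
        Gamepad.current?.ResetHaptics();
    }
}
```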



How close to feature complete is the system?

How can I run this?

Are action maps still part of the system?

In the old model, actions were controls that had values. In the new model, actions are monitors that detect state changes in the system. That extends to detecting patterns of change (e.g. a “long tap” vs. a “short tap”) as well as requiring changes to happen in combination (e.g. “left trigger + A button”). The sketch below illustrates both.
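
As a hedged illustration of both patterns, here is a sketch written against the API of the Input System package as it eventually shipped (InputAction, the “tap”/“slowTap” interactions, and the “ButtonWithOneModifier” composite binding); the prototype’s own names in the ISX namespace may well differ:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Interactions;

public class ActionMonitorExample : MonoBehaviour
{
    InputAction tapAction;
    InputAction comboAction;

    void OnEnable()
    {
        // Pattern detection: distinguish a short tap from a long tap
        // on the gamepad's south button via two interactions.
        tapAction = new InputAction(binding: "<Gamepad>/buttonSouth",
                                    interactions: "tap;slowTap");
        tapAction.performed += ctx =>
        {
            if (ctx.interaction is TapInteraction)
                Debug.Log("Short tap");
            else if (ctx.interaction is SlowTapInteraction)
                Debug.Log("Long tap");
        };
        tapAction.Enable();

        // Combination detection: the A button pressed while the
        // left trigger is held down.
        comboAction = new InputAction();
        comboAction.AddCompositeBinding("ButtonWithOneModifier")
            .With("Modifier", "<Gamepad>/leftTrigger")
            .With("Button", "<Gamepad>/buttonSouth");
        comboAction.performed += ctx => Debug.Log("Left trigger + A");
        comboAction.Enable();
    }

    void OnDisable()
    {
        tapAction.Disable();
        comboAction.Disable();
    }
}
```

Note that neither action carries a value to poll; the callbacks fire when the monitored pattern completes, which is the “monitor” model described above.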



Is it still event based?

Is it extensible?

What were the performance problems of the previous event model?

I’m seeing things in a namespace called ‘ISX’. What’s that about?

What are the plans for migrating to this from the old system?

Are the APIs that are already there reasonably close to final?