If I may comment as an outsider, it seems as though you guys are getting different subjects mixed up.

The subject line is about the system security model. As the introductory post said, at the moment, Whonix uses a traditional Linux model inherited from Debian, with essentially no isolation between applications. And under that model, a lot of the things you’re talking about seem relatively unimportant.

You seem to have gone rather far afield from the question of making a model change, to talk about point-hardening things. And maybe that’s right, because I don’t think you can actually improve the model very much under Debian, Android, or anything else that’s really available. But if you’re going to worry about point hardening, I think you should think first about hardening things that are actually going to be under attack.

Threats and protections

Probably the most interesting remote attackers are people at the “other end” of actual Tor connections. Most of the attack surface available to them is in actual applications, like Web browsers and programs that might be used to view or manipulate downloaded files.

You ship the Tor browser, various archive programs, PDF utilities, media players, etc… and you allow users to install almost anything Debian offers. You can reasonably expect users to download Microsoft Word documents and view them in LibreOffice, to view PDFs with random viewers, to play potentially hostile audio and video files, and to do all sorts of other risky things with applications.

Those application programs can leak data over Tor. More relevant to the sorts of issues you’ve been talking about, though, is the fact that if something manages to completely break through one of them, then all of the interesting information inside the workstation becomes available by design (and what’s brilliant about Whonix is that what’s exposed at least does not include the real IP address or computer serial number or whatever).

If you don’t have isolation between the applications, then things like kernel bugs and systemd bugs really don’t matter very much, especially not on the workstation.

The things you’ve talked about that actually harden the applications are things like libc (but I suspect the vast majority of the applications’ bugs are in their own code) and stack protections (which are mostly compiler options that you could perhaps turn on). Changing the model to isolate applications would be a win… but I’m going to argue that that’s too hard.

Kernel hardening and the init process are second order issues. They will only really start to matter after you have application isolation.

On the workstation, there’s almost zero remote kernel attack surface. There is truly zero remote systemd attack surface. If a remote attacker can interact with the workstation kernel or systemd enough to really exploit them, that implies that that attacker already has all the interesting information in the workstation, and has therefore already owned “the user’s data”.

The only reason to try to elevate privilege once you’re inside the workstation would be to try to attack the host via hypervisor bugs… which may or may not actually require you to be running in kernel mode at all. The hypervisor presents a really large, really weird attack surface for any code at all running in either the workstation or the gateway VM. And it’s an attack surface that’s easy to forget about.

The gateway VM is a little different from the workstation, but systemd bugs and most kernel bugs still seem relatively uninteresting there.

Both remote attackers and attackers who’ve already owned the workstation will have reasons to target the gateway.

Truly remote attackers, on the other ends of Tor connections, are going to have to get into the workstation first to get access to most of the juiciest targets on the gateway. They have almost no direct access to the gateway’s kernel, only a bit more to the Tor process, and none at all to services like systemd. To even poke the TCP/IP stack, they would have to own the workstation first.

I’m not going to say that nobody at all might attack the gateway from a more local “remote” position… but even an attacker on the LAN only has access to the gateway’s IP stack and relatively limited parts of its Tor process.

So, if you were going to talk about changing OS platforms, I’d suggest you think much more about application hardening and isolation, and much less about stuff that’s far from the attack surface… like the init system.

And now I’m going to argue that you can’t do the isolation.

Isolating in Debian

To isolate applications in Debian or any traditional Linux environment, you’d have to deal with a bunch of stuff that would break applications if you changed it. Long before you had to worry about the kernel or init system, you’d run into probably-insurmountable issues with things like…

X11. X doesn’t isolate its clients at all. Any program can mess with the keyboard, clipboard, other applications’ displayed windows, and who knows what else. There’ve been various attempts to fix it, but they’re hard to get working and most of them don’t get maintenance. It was a bad design even by 1985 standards. I had to tell the CIA as much in about 1990.
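To make that concrete, here’s a hedged sketch of what any unprivileged client on the same X display can do, using stock Debian tools (the window title and the device id are made up for illustration):

```shell
# Read whatever another application put on the shared clipboard:
xclip -selection clipboard -o

# Inject keystrokes into some other application's window by title:
xdotool search --name "Tor Browser" key --window %1 ctrl+l

# Enumerate input devices, then log every keystroke on one of them:
xinput list
xinput test "$KEYBOARD_ID"   # $KEYBOARD_ID taken from the list above
```

None of this needs root, and none of it is a bug; it’s how the protocol was designed to work.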

The file system. Applications largely expect to share the file system name space with one another, and can get pretty unusable if they don’t share at least a lot of it.

D-Bus. This passes around a huge number of messages that ask for who-knows-what, which results in a complicated security policy, much of it defined per-endpoint by developers who may not have much clue.
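As a sketch of what “per-endpoint policy” means in practice, here’s the kind of policy file an individual service’s developer ships (the org.example names and the rules are hypothetical; real files of this shape live under /usr/share/dbus-1/system.d/):

```xml
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <policy user="root">
    <!-- Only root may claim this bus name. -->
    <allow own="org.example.Backup"/>
  </policy>
  <policy context="default">
    <!-- Anyone may call methods on the service... -->
    <allow send_destination="org.example.Backup"/>
    <!-- ...except methods on this one interface. -->
    <deny send_destination="org.example.Backup"
          send_interface="org.example.Backup.Admin"/>
  </policy>
</busconfig>
```

Multiply that by every service on the bus, each written by a different developer, and you get the “complicated security policy” I mean.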

Basically I don’t think you can do it, period. It’s too big a project.

I also don’t think you could do it in Android.

Isolating in Android

Android does try to isolate applications from one another. An application at least has a chance of keeping a file private, and something properly written to take advantage of the isolation, like a nice contained cryptocurrency wallet, can actually get something out of it.

… but you’d be forced to give your users a lot of applications that weren’t written with so much care, and the system is complicated enough that not only will it probably have breaking bugs, but it almost guarantees bugs in how applications use it.

It’s not really true that Android has a model for isolating applications. What it has is a huge collection of shared resources and IPC endpoints, each with its own ad-hoc set of security restrictions. The whole thing is kind of reminiscent of D-Bus, but even more weird and complicated and used by far more programs. Those resources and restrictions change so much that Android has to formally version the API; each Android app actually declares which version it targets.

A lot of Android’s IPC-based APIs let one application ask another, or some part of the system, to do things that might result in network traffic… and the recipients of those requests rarely worry at all about what information that traffic will leak. Lots of services are architected to expect to work with “the cloud” in various ways. You can take that stuff out (which is what I think GrapheneOS tries to do), but you’re fighting the architecture all the way.

Furthermore, Android’s best supported method of sharing data files among applications, the one that’s by far the most commonly used by actual programs your users might want to run, is to dump them all into a big shared directory tree with no meaningful security controls at all. There’s this nifty “provider” API that nothing uses… and then there’s the unstructured shared storage that everything uses. And even when isolation is enforced, it tends to be more like “give application A access to all data in application B”, rather than “give application A access to this particular document”.

So in practice you get limited useful application isolation from Android.

I also agree with Patrick that it would be a bad idea to become dependent on Android. In the end, Google will take Android in whatever direction benefits Google, and that’s not likely to include caring at all about whether the core system is in any way useful for anonymity. It may not include caring about whether AOSP without the Google apps and services is even usable at all. And it probably won’t include caring about privacy in general, or at least about privacy from Google.

For that matter, a lot of real third party Android apps are actively hostile to anything resembling anonymity. Dominant apps actively try to circumvent any privacy controls that Google does bother to put in place. Not only are those apps dangerous in themselves, but they still provide the functionality that users need… which makes it hard to generate demand for alternative apps with better privacy, and drives the overall ecosystem away from what you want.

A random Debian program has more access to other apps’ data… but is far less likely to be deliberately trying to thwart the aims of Whonix than a random Android program.

You could end up having to maintain a huge amount of divergent code if you went with anything based on Android.

Upshot on isolation

To be honest, I can’t even think of any realistically usable operating system that has a good isolation model. The closest thing would be something like Genode’s Sculpt, and that’s just not ready. In fact, I think you get more actually useful security by being integrated into Qubes than you’d get by going to, say, Android. At least in Qubes the user can take a document off into an isolated VM and work on it.

You may be able to do some ad-hoc sandboxing with namespace-based stuff like bubblewrap. But be careful; that kind of thing is complicated.
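To illustrate what that ad-hoc approach looks like, here’s a hedged bubblewrap sketch (the paths, the user name, and the choice of evince as viewer are all illustrative, not Whonix defaults): open one untrusted PDF with a throwaway home directory and no network.

```shell
# Read-only system, empty tmpfs home, one file bound in, all namespaces
# (including network) unshared:
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --ro-bind /etc /etc \
  --proc /proc \
  --dev /dev \
  --tmpfs /home/user \
  --ro-bind ~/Downloads/untrusted.pdf /home/user/doc.pdf \
  --unshare-all \
  --die-with-parent \
  evince /home/user/doc.pdf
```

Note the complications: as written, the viewer can’t even reach the X socket to display anything, and binding that socket in reopens exactly the X11 hole described above. Getting the bind list right for a real application is trial and error, which is why I say be careful.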

Some random comments on hardening

You’ve talked about access-control frameworks like AppArmor and SELinux (and the more granular syscall-filtering modes of things like bubblewrap), and about anti-buffer-overflow measures like canaries, poisoning, ASLR, “check-before-call”, stack frame reorganizations, etc, etc, etc.

All of these “hardening” measures are hacks. They make assumptions about the behavior of programs that aren’t guaranteed to follow those assumptions. If they work, they work. If they don’t, they don’t. And you have no real way to know whether you’ve gotten them right. The buffer overflow protection ones are especially suspect; I don’t think there are any that don’t have relatively generalizable workarounds.

I’m not saying you shouldn’t use hardening or sandboxing… but if you have a choice between “hardening” a random half-assed application (or library), and finding a good, well-written, well-analyzed, well-tested application written using relatively fail-safe tools, I think you’re nearly always going to get better security with the intrinsically safer application.

I don’t think it’s fair to say that SELinux is harder to set up than AppArmor. There’s nothing intrinsically complicated about what SELinux does. And SELinux comes out of the box with a reasonably nice granular default policy… or at least it does on Red Hat/CentOS/Fedora. I don’t really know about the SELinux policy available under Debian.