Walled Gardens: The Trade-off Between Security and Modifiability

With the recent changes in Android 6.0 (Marshmallow) looking set to make life much more difficult for tinkerers, tweakers and modders, a question people often ask me is “why?” – why does (Company name) want to stop me modifying my phone?

In this article, I aim to give a (hopefully) complete run-down of many of the factors at play here, and the motivations of involved parties, and who they actually are. There’s no way I will manage to completely cover every angle, but I shall give it my best shot – feel free to add anything you think I forgot in the comments below.

Who’s Involved?

The actors in our saga of device modification are not as straightforward to identify as you might think. Obviously there’s the owner of the device, who is (sometimes) distinct from the end-user of the device, for example in the case of a company-provided device. Then there’s (fairly obviously) the manufacturer of the device, such as Sony or Samsung. There’s more though! For you also need to consider (especially in the USA and a number of non-European markets) the carrier which provides service on the device. Then you have the retailer, who sold the device to you (which may or may not be the carrier, so we’ll leave them aside).

At this point, you’d be forgiven for thinking we’re done. But we’re not! There’s then Google, the developer of the underlying operating system, to take into consideration. As well as the developers of pre-loaded software on the device. Finally, we have the developers of third party software you install after buying the phone.

Some Background on Trusted Computing

The field of trusted computing is not a recent invention. Its most direct recent roots lie in the Trusted Computing Platform Alliance, founded in 1999, although the idea had been floated at various points before then. I won't go into the history in detail, beyond a brief overview of the concept. At a very high level, trusted computing is the premise of making the user's computing device (i.e. the client device), such as your phone, tablet, laptop or desktop, operate in a trustworthy manner. This all sounds good – who wouldn't want a more trustworthy computer, after all? Trust is great; it makes our lives easier.

The problem is that the definition of trust is somewhat complex, as it ultimately depends on who gets to classify something as "trusted" or "trustworthy". Every definition of trustworthy so far tends to revolve around "only does what the corporation making it wants", or "behaves the way the corporation designing it intended it to behave". This therefore requires that you place absolute trust in the company defining this trustworthy operation. And, frankly, you can't do that.

Absolute power corrupts, and in the same way, absolute trust will corrupt – look at the past furores caused by privacy violations being carried out by companies. Indeed, it has reached such a stage that the EU has decided to no longer recognise the EU-US Safe Harbor agreement as offering protection for people’s private data.

While, to you or me, the idea of a trusted phone would be really useful and beneficial, the problem is that our definition of trust is often entirely at odds with that of the corporation defining it. For example, I would like a trustworthy device to:

Only allow applications I choose to access data I choose

Ensure my data is used only for purposes I agree to

Prevent any actions from being carried out on my device which will act against my interests

Ensure that any attempt to violate this trust results in me being notified of the violation

Allow me to take ultimate control of the device (like root access) in the event I wish to override the system

This works fine, because I’m a technical user, who wants the ability to have overall control. The problem is we all have different views of our priorities. You might be more interested in protecting the /sdcard/DCIM folder on your device, if you have some pictures there you don’t want to see on the front page of the seedier parts of the internet. Or you might be less concerned about protecting your data from web services, but more concerned about ensuring a commercial rival cannot remotely “hack” into your phone.

Trust is difficult, and everyone has their own different goals. I’ll now take a look at the different entities we discussed before, and what they want from your device. I’ll then return back to look at the new security measures we see on recent devices.

Manufacturer

The manufacturer (often referred to as the OEM) of your phone has a few considerations. Firstly, they want to protect their reputation. They don't want headlines about their devices being hacked, or shipping with malware already installed when customers receive them, as we saw with some grey-market imports of the OnePlus 2 recently. If you have an untrustworthy supply chain, operating on low margins, with seedy people involved, it's not a huge surprise to find some less-than-legitimate software pre-installed onto devices. At least not for me, having looked at many hundreds of devices shipped by dozens of manufacturers, each with their own supply chains and fulfilment infrastructures.

Manufacturers also want to protect their own IP (intellectual property) in some cases. (Warning, mini-rant coming!) For example, Sony Mobile are widely known for using ARM TrustZone to control access to certain features on their devices (such as BRAVIA Engine display enhancement, and camera post-processing to improve image quality). While Sony devices ship with unlockable bootloaders, availing yourself of this will result in the necessary keys being wiped, to prevent these features from being re-used. It's possible to restore them later, if you managed to gain root and back up the TA (trim area, a relic from Sony's old non-smartphone devices) partition beforehand, but restoring that backup also re-locks the bootloader, preventing you from using a custom kernel on the device.

Fundamentally, though, we really don’t see the level of copying that I believe manufacturers think is out there. How many OEMs have set out to copy Sony’s camera sensors, then gone “oh dang, we can’t get their post-processing, let’s abandon this”? In fact, considering even Samsung uses Sony camera sensors, it seems Sony is doing fairly well. Considering the reputation of the S6 camera, I’d suggest that Sony really don’t need to worry about people “stealing” their post-processing technology.

Dear Sony Mobile, maybe if you stopped spending so much time trying to “protect” your camera “IP”, and more time either licensing Samsung’s, or developing actual imaging software improvements, your software would be better?

(End mini-rant!) In any case, OEMs have an interest in ensuring their device integrity. Also, as the ultimate "guarantor" of the device warranty, it's in their interest to keep returns down. While true hardware damage as a result of rooting is relatively rare, it's not unheard-of. Over-volting your CPU is (or at least used to be) entirely possible with a custom kernel, which can significantly shorten the life of the processor, or potentially even fry it outright.

Retailers don’t want high return rates on products, so it’s in the manufacturer’s interest to ensure this remains the case. By locking down their products, they reduce the risk of a high return rate as a result of “unauthorised” modifications.

Carriers

Carriers. The dinosaurs of the industry. They tend to be a Marmite topic, depending on your "side": either they are companies that refuse to accept they are just a pipe to carry traffic, or they are providers of value-added services to users. These days, though, I suspect the majority of readers here will view their carrier as simply a "dumb pipe" whose duty it is to move IP packets to and from their device. That's the modern definition, but it's important to remember this wasn't always the case. Carriers built their business models around the provision of "value-added" services, and the rise of the mobile internet is a big concern for them.

In days gone by, carriers offered voice calling and SMS (yes, yes, to anyone about to point this out, I know SMS was a later invention to make use of spare signalling capacity, but this is a high-level overview!). Around this time, they began to introduce premium-rate services, operated in-house, allowing them to earn extra revenue. For example, anyone old enough to remember the original GSM handsets may recall premium-rate SMS services to get weather forecasts, or horoscopes, or similar. Originally, many of these were operated by the carriers themselves. Nobody really understood what customers wanted, so everyone decided to innovate and try everything! As a result, if you were willing to pay, your carrier would send you a message with the lottery results each week.

The move towards WAP (a horribly simplistic version of the internet, typically based around a “voice call” that transferred data) allowed networks to create their own little “walled gardens” of value-added services.

This was partially due to the extremely limited capabilities of mobile phone browsers at the time, and partially due to it being the only way the carriers understood.

At this point, I’ll wrap up our history lesson – in summary, carriers are used to a model where they are in control of devices, and they have near-exclusivity over service provision on handsets. They felt they were the curator of your access to the internet, or the wider world, and that you should use services they offered, which earned them money. Carriers aren’t done here though, with new technologies like RCS (Rich Communication Services) emerging.

The long and short of it is that carriers are used to being in control of your experience. They do deals with companies to pre-load software (which I tend to refer to with a tirade of profanities, or simply as "bloat"), and want to set the experience you'll see on your device. It's important to remember, though, that carriers are also often the first line of support for a user, since the carrier provides the handset to many contract customers. The more variation that is possible on a device, the more complexity there is in supporting it. This is one reason carriers were initially sceptical of Android – I know of some who were concerned at the potential support costs of hand-holding users on early Android devices, which could be customised well beyond what they were used to (by removing and hiding application icons, or changing the default launcher, for example).

To consider things from the carrier’s perspective, the idea that a user may accidentally remove the carrier bill-viewer app from the home-screen, and be unable to find it, is simply a support nuisance. It’s something they’ve got used to, but it’s something to consider – carriers worry about support costs, and confusion for users. Coupled with their history, many carriers want absolute control over the devices they ship, and this leads to locked bootloaders.

Device Owners

As mentioned in the introduction, the owner of the device isn't always the end user. Business and enterprise users have a myriad of security policies, many sensible, many entirely arbitrary, and these need to be followed. There is now a plethora of MDM (mobile device management) solutions available for Android, allowing enterprises to enrol devices, push out software, and manage updates centrally. Despite this, it's of critical importance to Google (and, to an extent, the manufacturers) that their devices are not "blacklisted" from enterprise use. Enterprise sales are significant, and the ability to win a contract for 100,000 handsets from a company, which will then pay for extra support and a variety of other software solutions, is not to be sniffed at. While I don't have access to any sales figures to compare enterprise and consumer sales, I suspect (anecdotally at least) that enterprises spend a lot more per device, given the quantities they buy and the amount they spend on extra functionality or support offerings from the OEMs.

For enterprises to use a device, they need to be satisfied it complies with various policies. A core part of the Android security model is the isolation of data between applications. For example, Exchange email shouldn’t be accessible by Snapchat or another application installed by the user.

While "Android for Work" has certainly begun to help here, many companies do not want end users having access to the raw data held on their devices, for security reasons. Rightly or wrongly, many companies are stuck in the belief that end users aren't (or shouldn't be) in control of their own messages, and that the company is. The thought that you could potentially back up your contacts before handing back your company phone is simply unacceptable to many companies. For this reason, many enterprises put email and contacts through special apps which don't allow exporting or copy-paste, and which only work when policy permits.

The ability for a device to be "rooted" by the end user is a big worry for companies, as rooting breaks the (rather ineffective, to be frank) protections they place on "their" data residing on your device. Rather than trying to enforce data protection through technical measures, it would be more sensible to hire better employees you don't need to let go of regularly (or offer them better compensation to discourage them from leaving for other companies), and have a policy in place. Nonetheless, IT departments love being able to say it's not possible for an employee to "steal" company data using their mobile device, and anything which threatens this (root, unlocked bootloaders) will always be a worry for them.

For enterprises, a locked down Android, as tightly locked down as iOS, is ideal. The trouble is this often isn’t what end users actually want.

Rights Holders and Developers

The people who make the software you run on your device are also important to consider. Bear in mind that dm-verity, the next big threat to being able to root your device, was developed by Google and publicly backed by Netflix. Why, you might ask? Well, because Netflix want to know that when you view content on an Android or other device, you can't copy it, or access it in a way they don't like. Imagine a future where you need to pay an extra $5 per month for the privilege of watching streaming videos (which you already pay for) over your tablet's HDMI output! That's what we're heading towards – if companies like Netflix get more control over your device (by being able to verify your device is unmodified, and is therefore doing what they expect), they will undoubtedly try out more restrictions on user freedom. While it's certainly possible that they only want to stop people from "pirating" their content (as though ripping media from Netflix is the means of choice for those who do these kinds of things), it's highly likely that these kinds of restrictions would be used to restrict user freedoms.
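For the curious, the core idea behind dm-verity is a hash tree (Merkle tree) computed over the read-only system partition, with the root hash anchored in a signed boot image, so any modified block is detected the moment it is read. The following is a minimal, illustrative sketch of that idea in Python – it is not Android's actual implementation, and the "partition" data here is made up:

```python
import hashlib

def hash_block(data):
    """Hash one unit of data; dm-verity hashes fixed-size 4 KiB blocks."""
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Collapse per-block hashes layer by layer into a single root hash."""
    level = [hash_block(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last hash on odd layers
            level.append(level[-1])
        level = [hash_block(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Simulated read-only system partition, split into blocks.
partition = [b"kernel" * 100, b"system libs" * 50, b"app framework" * 40]
trusted_root = merkle_root(partition)   # shipped inside the signed boot image

# Any tampering changes the recomputed root, so the kernel can refuse
# to return the modified block at read time.
tampered = list(partition)
tampered[1] = b"injected rootkit"
print("tamper detected:", merkle_root(tampered) != trusted_root)
```

Because only the root hash needs to be trusted (and signed), verifying any single block requires hashing just one path up the tree, which is what lets dm-verity check blocks on demand rather than scanning the whole partition at boot.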

Heck, on a trusted platform, imagine a piece of software using your webcam to ensure that the person using your Spotify or Netflix account is indeed you, and that you’re not sharing access with a family member or other person that’s not “registered” on the account – with a “trusted computing platform”, that kind of thing is possible.

More generally, app developers aren’t all streaming DRM-encumbered content to your device. Many just want to try to stop you from stealing their app, or working around arbitrary limitations in their app. The problem is that in computing, people tend to like working around arbitrary limitations – they take it as a challenge. An app that’s designed to only show an image once, and to then destroy it to prevent future viewing, is the technological equivalent of waving a red rag at a bull, then standing there and giving it the middle finger. Developers, geeks, tweakers and modders will work around silly (in my view) arbitrary restrictions like these.

The problem is that companies like Snapchat (the service I’m describing above, in case you didn’t realise) are fundamentally built as non-technical businesses, trying to make a technical product. Snapchat don’t seem to understand that the very premise of their entire product is fundamentally flawed, and cannot ever work truly securely, except on an unrealistic trusted computing platform. Even then, someone can take a photo of their screen with another device. So you’d have to ban cameras, and ensure that no other device with a camera would ever allow you to take a photo of a screen showing a Snapchat image.

It's in the interest of app developers, though, to ensure their apps work right. And, increasingly in this day and age, that means locking users into a proprietary web-service back-end, in order to keep them supplying data to their servers. For example, almost all of the current messaging and communications apps (Telegram, WhatsApp, Facebook Messenger, etc.) are based around proprietary protocols or servers (in the case of Telegram, the client and protocol are open, but there is no server available to run yourself), to keep your data within that app's ecosystem. This is often counter to your interests, but in the interests of the company running the service. Preventing users from modifying their devices makes it easier for companies to charge you for things you take for granted today. For example, consider the messaging apps which charge you for packs of "smilies" or similar. Now imagine a future where they can use a "trusted computing platform" to prevent you from loading any third-party smilies into the app (even by modifying the device), in order to force you to buy their own smilie pack, and pay a monthly fee to retain access.

These are just some of the potential horrors awaiting users on “closed platforms”, when they lose control of their equipment.

Final Thoughts

It's important to put all this into context. You most likely have administrator access to your laptop or desktop computer. If you run a Linux or OS X machine, you have root access to it via the sudo or su commands. Yet on mobile devices, this access is rapidly being taken away from us, in the name of "protecting" users. Is this a valid argument, or is it simply a bogus excuse to force us onto locked-down computing platforms, where we cannot block the endless tidal wave of cross-site behaviour-tracking advertising? It's clear there are ways in which root access puts people at risk (the ease with which their data can be stolen by malicious software is one), but it's not clear that locked-down platforms are the solution, especially when the keys to those platforms reside with companies who don't put your interests first.

Indeed, any company is legally required to put its shareholders' interests first (well, at a high priority – the company exists to make money for its shareholders, but it's still (usually) required to obey the law, and so on). That's how companies work. For that reason, it seems unlikely that removing a user's right to tinker, tweak, modify and adjust their device will benefit users in the long run. While making it harder for non-technical users to have their data stolen is good, it's hard to argue that anyone involved in the current ecosystem has your interests at heart, other than yourself.

Maybe it's time for some freedom-preserving devices, backed by the EFF and FSF? Ones where you can become the root of trust of your device, if you feel that's what you want. That's how secure boot works in UEFI on the PC platform; perhaps it's time we did the same on Android. At least it would let you remain in control of the one device you carry everywhere, fitted with a potentially always-listening microphone and a GPS sensor, and with most of your digital life in tow.
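To make that "owner as root of trust" idea concrete: in a verified boot chain, each stage carries the hash (in real systems, a signature) of the next stage, and the only value that must be stored in tamper-resistant hardware is the root, which the owner could enrol themselves. The sketch below is purely illustrative – the stage names and contents are hypothetical, and real secure boot uses asymmetric signatures verified in hardware, not bare hashes:

```python
import hashlib

def digest(data):
    return hashlib.sha256(data).digest()

# Hypothetical boot stages: the bootloader ships with the kernel's hash
# appended, so it can verify the kernel before handing over control.
kernel = b"linux kernel image"
bootloader = b"bootloader code" + digest(kernel)

# The owner-enrolled root of trust: the only value "burned into hardware".
owner_root_of_trust = digest(bootloader)

def verified_boot(root, bl, kern):
    """Boot only if every link in the chain matches the hash above it."""
    if digest(bl) != root:              # hardware verifies the bootloader
        return False
    embedded_kernel_hash = bl[-32:]     # bootloader verifies the kernel
    return digest(kern) == embedded_kernel_hash

print(verified_boot(owner_root_of_trust, bootloader, kernel))          # True
print(verified_boot(owner_root_of_trust, bootloader, b"evil kernel"))  # False
```

The point of the design is that control follows whoever holds the root: if the manufacturer enrols it, you get today's locked bootloaders, but if the owner can enrol their own (as UEFI allows with user-installed platform keys), the same machinery protects the user instead of restricting them.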