
The Meltdown CPU vulnerability, first disclosed in early January, was frightening because it allowed unprivileged attackers to easily read arbitrary memory in the system. Spectre, disclosed at the same time, was harder to exploit but made it possible for guests running in virtual machines to attack the host system and other guests. Both vulnerabilities have been mitigated to some extent (though it will take a long time to even find all of the Spectre vulnerabilities, much less protect against them). But now the newly disclosed "L1 terminal fault" (L1TF) vulnerability (also going by the name Foreshadow) brings back both threats: relatively easy attacks against host memory from inside a guest. Mitigations are available (and have been merged into the mainline kernel), but they will be expensive for some users.

Page-table entries

Understanding L1TF requires an understanding of the x86 page-table entry (PTE) format. Remember that, in a virtual-memory system, the memory addresses used by both user space and the kernel do not point directly into physical memory. Instead, the hierarchical page-table structure is used to translate between virtual and physical addresses. At the bottom level of this structure, the PTE tells the processor whether the page is actually present in physical memory, where it is, and a few other details. For a 4KB page on an x86-64 system, the PTE is a single 64-bit value holding the page-frame number along with an assortment of permission and status bits.

The page-frame number (PFN) tells the processor where to find the page in physical memory. The other bits control which memory protection key is assigned to the page, access permissions, whether and how the page is cached, whether it is dirty, and more. All of this, though, depends on the present ("P") bit in the least-significant position. If that bit is not set, the page is not actually present in physical memory, and any attempt to reference it will generate a page fault.

For non-present pages, none of the other bits in the page-table entry are meant to be used by the processor, so the kernel can use those bits to store useful information; for example, for pages that have been swapped out, the location in the swap area is stored in the PTE. In other cases, the data left in non-present PTEs is essentially random.
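As an illustration, the relevant fields can be decoded as in this Python sketch (not kernel code; the bit positions follow the standard x86-64 4KB-page format, with the present bit in bit 0 and the PFN in bits 12-51):

```python
PTE_PRESENT = 1 << 0                     # the "P" bit: page is in physical memory
PFN_SHIFT = 12                           # 4KB pages: the low 12 bits are flags
PFN_MASK = ((1 << 52) - 1) & ~0xfff      # bits 12..51 hold the page-frame number

def pte_present(pte):
    """Is the present bit set in this PTE?"""
    return bool(pte & PTE_PRESENT)

def pte_pfn(pte):
    """Extract the page-frame number from a PTE."""
    return (pte & PFN_MASK) >> PFN_SHIFT
```

With the present bit clear, whatever happens to be in the PFN field is, as far as the architecture is concerned, meaningless data.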

Ignoring the present bit

If the present bit in a given PTE is not set, the PFN field of that PTE has no defined meaning and the CPU has no business trying to use it. So, naturally, Intel CPUs do exactly that during speculative execution (it would appear that Intel is the only vendor to make this particular mistake). During speculative execution, non-present PTEs are treated as if they were valid, so they can be used to speculatively read whatever data lives in the indicated PFN — but, importantly, only if that data is in the processor's L1 cache. The access is speculative only; the processor will eventually notice that the page is not actually present and generate a page fault instead. But, by the time that happens, the usual sorts of covert channels can be used to exfiltrate the data in whatever page the PTE might have pointed to.

Since this attack goes directly to a physical address, it can in theory read any memory in the system. Notably, that includes data kept within an SGX encrypted enclave, which is supposed to be protected from this kind of thing.

Exploiting this vulnerability requires the ability to run code on the target system. Even then, on its face, this bug is somewhat hard to exploit. Attackers cannot directly create non-present PTEs pointing to a page of interest, so they must depend on such PTEs already existing in their address space. By filling the address space with pages that will eventually get reclaimed or by playing tricks with PROT_NONE mappings, an attacker can essentially throw darts at the system and hope that one hits in an interesting place, but it's a non-deterministic process where it's even hard to tell if one has succeeded.

Nonetheless, the potential for the extraction of important secrets exists, and thus this bug must be defended against. The approach taken here is to simply invert all of the bits in a PTE when it is marked as being not present; that will cause that PTE to point into a nonexistent region of memory. The fix is easy, and the performance cost is almost zero. A quick kernel upgrade, and this problem is solved.
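In rough terms, the idea looks like this Python sketch (purely illustrative; the kernel's actual implementation operates on the swap-entry and PROT_NONE PTE encodings): flipping the PFN bits of a not-present PTE makes any speculative dereference land in a region where no physical memory, and thus no cached data, exists.

```python
PFN_MASK = ((1 << 52) - 1) & ~0xfff   # bits 12..51: the page-frame-number field

def invert_nonpresent(pte):
    """Flip the PFN bits of a not-present PTE so that speculation
    cannot reach cached data; present PTEs are left untouched."""
    if pte & 1:               # present bit set: a real translation
        return pte
    return pte ^ PFN_MASK
```

Because the transformation is a simple XOR, the kernel can recover the original value (a swap location, for example) by inverting the bits again when it reads the PTE back.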

Virtualization

At least, the problem is solved on systems where virtualization is not in use. On systems with virtualized guests, at a minimum, those guests must also run a kernel using the PTE-inversion technique to protect against attacks. If the guests are trusted, or if they cannot install their own kernels, the problem stops here.

But if the system is running with untrusted guests and, in particular, if that system allows those guests to provide their own kernels (as many hosting services do), the situation changes. An attacker can then run a kernel that creates arbitrary non-present PTEs on demand, turning a shot-in-the-dark attack into something that can be targeted with precision. To make an attacker's life even easier, the speculative data reference bypasses the extended page tables in the guest, allowing direct access to physical memory. So an attacker who can install a kernel in a guest instance can attack the host (or other guests) with relative ease. In this context, L1TF can be seen as a limited form of Meltdown that can escape virtualization.

Protecting against hostile guests is a harder task, and the correct answer will depend on the specifics of the workload being run. The first step is to take advantage of the fact that L1TF can only read data that is in the processor's L1 cache. If that cache is cleared every time the kernel transfers control to a virtual machine, there will be no data available for the attacker to read. That is indeed what the kernel will do. This mitigation will be rather more costly, needless to say; how much it costs will depend on the workload. On systems where entries into (and exits from) guests are relatively rare, the cost will be low. On systems where those events are common, the cost could approach a 50% performance hit.

Unfortunately, just clearing the L1 cache is not a complete solution if the CPU is running symmetric multi-threading (SMT or "hyperthreads"). The threads running on that processor share the L1 cache. So, while the hostile guest is running in one thread, an unrelated process could be repopulating the L1 cache with interesting data in the other thread. That clearly reopens the can of worms.

The obvious solution here is to disable SMT, which can potentially protect against other security issues as well. But that clearly comes with a significant performance cost of its own. It is not as bad as simply removing half of the system's processors, but, in a virtual sense, that is exactly what is happening. An alternative is to use CPU affinities to restrict guests to specific processors and to not allow anything else (including, for example, kernel functionality like interrupt handling) to run on those processors. This approach might gain back some performance for specific workloads, but it clearly requires a lot of administrator knowledge about what those workloads are and a lot of manual configuration. It also seems somewhat error-prone.

There is another approach that can be taken to protect hosts from hostile guests: rather than do all of the above, simply disable the use of the extended page-table feature. That forces the system back to the older "shadow page table" mechanism, where the hypervisor retains the ultimate control over all PTEs. This, too, will slow things down significantly, but it provides complete protection since the attacker is no longer able to create non-present PTEs pointing to pages of interest.

As an aside, it's worth pointing out an interesting implication of this vulnerability. Virtualization is generally seen as being more secure than containers due to the extra level of isolation used. But, as we see here, virtualization also requires an extra level of processor complexity that can be the source of security problems in its own right. Systems running container workloads will be only lightly affected by L1TF, while those running virtualization will pay a heavy cost.

Kernel settings

Patched kernels will perform the inversion on non-present PTEs automatically. Since there is no real cost to this technique, there is no reason (and no ability) to turn it off. The flushing of the L1 cache on entry to virtual guests will be done if extended page tables are enabled. The disabling of SMT, though, will not be done by default; administrators of systems running untrusted guests will have to examine the tradeoffs and decide on the best approach for protecting their systems. For people faced with this kind of choice, some more information can be found in Documentation/admin-guide/l1tf.rst.
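For reference, the knobs documented there include an l1tf= boot parameter and a sysfs status file; in brief (paraphrasing that document, which remains the authoritative source):

```
l1tf=full         Flush L1D on each entry into a guest and disable SMT
l1tf=full,force   As above, and disallow changing the settings at runtime
l1tf=flush        Conditionally flush L1D; leave SMT enabled (the default)
l1tf=flush,nosmt  Conditionally flush L1D and disable SMT
l1tf=off          Disable the L1TF mitigations for KVM

/sys/devices/system/cpu/vulnerabilities/l1tf    Reports the mitigation status
```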

The 4.19 kernel will contain the mitigations, of course. As of this writing, the 4.18.1, 4.17.15, 4.14.63, 4.9.120, and 4.4.148 updates, containing the fixes, are in the review process with release planned on August 16.

As was the case with the previous rounds, the mitigations for L1TF were worked out under strict embargo. The process appears to have worked a little better this time around, with no real leakage of information to force an early disclosure. One can only wonder how many more of these are known and under embargo now — and how many are yet to be discovered. It seems likely that we will be contending with speculative-execution vulnerabilities for some time yet.


A kernel bug that allows a remote denial of service via crafted packets was fixed recently and the resulting patch was merged on July 23. But an announcement of the flaw (which is CVE-2018-5390) was not released until August 6—a two-week window where users were left in the dark. It was not just the patch that might have alerted attackers; the flaw was publicized in other ways, as well, before the announcement, which has led to some discussion of embargo policies on the oss-security mailing list. Within free-software circles, embargoes are generally seen as a necessary evil, but delaying the disclosure of an already-public bug does not sit well.

The bug itself, which Red Hat calls SegmentSmack, gives a way for a remote attacker to cause the CPU to spend all of its time reassembling packets from out-of-order segments. Sending tiny crafted TCP segments with random offsets in an ongoing session would cause the out-of-order queue to fill; processing that queue could saturate the CPU. According to Red Hat, a small amount of traffic (e.g. 2,000 packets per second) could cause the condition but, importantly, the attack cannot be carried out using spoofed IP addresses, so filtering may be effective in blunting the impact.
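The algorithmic problem can be shown with a toy model (Python, purely illustrative; the real queue and its fix live in the kernel's TCP code): if each incoming out-of-order segment requires a linear walk of the existing queue, the total work grows quadratically with the number of tiny segments an attacker sends.

```python
def ooo_queue_work(nsegs):
    """Toy model of a linearly scanned out-of-order queue: each new
    segment walks the whole queue to find its insertion point."""
    queue = []
    work = 0
    for seg in range(nsegs):
        work += len(queue)   # comparisons against every queued segment
        queue.append(seg)
    return work
```

Ten segments cost 45 comparisons; a hundred cost 4,950. That quadratic blowup is why a trickle of crafted segments suffices to saturate a CPU.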

The "semi-embargo" for CVE-2018-5390 came about because CERT was apparently coordinating with Linux distributions (and with FreeBSD on a separate flaw that was reported at the same time). But a tweet by grsecurity highlighted the flaw on July 23; that was followed up with another tweet on July 28 when stable kernels incorporating the fix were released. Matthew Garrett eventually alerted the oss-security mailing list about the bug on August 8. The closed distros and linux-distros mailing lists were used to coordinate the response; those lists have a requirement that, once the bug is made public, the original reporter should post about it to oss-security.

Garrett's post was fairly light on details, which didn't sit well with Stiepan A. Kovac. He asked for more information and seemed to threaten some sort of lawsuit against the "opaque Linux-distros vulnerability-disclosure-among-friends-for-fun-and-profit scheme". While Alexander Peslyak (better known as "Solar Designer"), who founded and moderates all of these security mailing lists (oss-security, distros, linux-distros), agreed that more details were needed and that Kovac's complaint about the semi-embargo raised a real issue, he did think the "focus on legal aspects" was unfortunate. He described the timeline of the bug disclosure as follows:

2018/07/23 - the commit referenced above

2018/07/23 - notification from CERT to some distros

2018/07/23 - grsecurity tweet linking to the commit

2018/07/27 - posting to linux-distros

2018/08/06 - CERT Vulnerability Note published

2018/08/08 - posting to oss-security



Peslyak is not happy about how that all played out. For one thing, the distros lists are meant to be used only for non-public vulnerabilities. The public fix and subsequent tweet publicized the bug fairly widely, which meant that it should not have been handled in "secret" on those lists:

Of course, I am unhappy about this semi-embargo, and even more unhappy about the semi-violation of linux-distros list policy on only having non-public issues in there. However, with CERT involved and with related issues affecting more than just Linux, there was little I could do, short of playing full BOFH and breaking the semi-embargo for everyone. While I think that would have been for the general public's benefit overall, I didn't feel about it strongly enough to actually do it this time.

He speculated on the reasons why the reporting of the bug may have languished, but he also was not pleased with the two-day delay getting any information to oss-security:

It appears that everyone involved, including the CERT people, Matthew, and others commenting on the linux-distros thread, were unhappy about the publication delay. No one I saw said that they wanted the delay. Yet somehow CERT didn't pull the trigger sooner. I guess two weeks feels very soon for CERT as it is, even if it is a very long embargo for linux-distros. Also, I guess the discoverer/reporter of the issue had a say on it behind the scenes, and other related issues and non-Linux were considered in CERT's decision-making. I am also unhappy about the two-day delay between publication of the CERT Vulnerability Note and the mandatory posting to oss-security (it's mandatory since the issue was on linux-distros). I've been pinging off-list to make this happen at all, and would have probably made the posting myself if it didn't happen for another day.

Garrett apologized for the delay in posting to oss-security (which presumably means he was the one that brought the issue to the distros lists) and for the lack of additional information. Peslyak and others did ensure that those details were posted, however. While the distros lists have a long list of policies and procedures for participants and reporters, it seems clear that this particular bug went its own way—potentially to the detriment of Linux users.

Peslyak speculated that the original bug reporter, Juha-Matti Tilli of Aalto University, Department of Communications and Networking, and Nokia Bell Labs, was also involved in the decision-making on the disclosure of the bug, which may have slowed things down. In addition, the semi-related FreeBSD bug (CVE-2018-6922) got into the mix since it was reported at the same time. However, it was apparently known to the participants in the distros thread(s) that the issue was, at least partly, public; that would have allowed the bug to be exploited by black hats, while leaving most of the rest of the community out in the cold. It would not be surprising to see future public "non-public" bugs be treated quite differently; it seems unlikely that Peslyak would allow this kind of situation to happen again.

Embargoes are meant to be short, at least in the free-software world. As Intel found out with the Spectre and Meltdown disclosure mess, a poorly organized, long embargo is likely to lead to leaks. It should be noted that the L1TF vulnerabilities that were announced on August 14 had been reported to Intel in early January; that too is a long time to keep a secret, but apparently lessons were learned and this set of bug fixes went more smoothly than the last. Hardware flaws affecting multiple operating systems, cloud providers, and others are likely to require a longer coordination period, but the longer flaws are hidden—and the number of people "in the know" increases—the more likely premature disclosure is.

In the case of CVE-2018-5390, though, we don't exactly have a case of premature disclosure. The bug was plainly fixed in full view (and that fix was highlighted by a well-known security researcher). There is little to be gained—and much to be lost—by "hiding" the vulnerability at that point. The horse is loose, so the state of the barn door is immaterial.


mount()

"Mounting" a filesystem is the act of making it available somewhere in the system's directory hierarchy. But a mount operation doesn't just glue a device full of files into a specific spot in the tree; there is a whole set of parameters controlling how that filesystem is accessed that can be specified at mount time. The handling of these mount parameters is the latest obstacle to getting the proposed new mounting API into the mainline; should the new API reproduce what is arguably one of the biggest misfeatures of the current mount() system call?

The list of possible mount options is quite long. Some of them, like relatime, control details of how the filesystem metadata is managed internally. The dos1xfloppy option can be used with the FAT filesystem for that all-important compatibility with DOS 1.x systems. The ext4 bsddf option tweaks how free space is reported in the statfs() system call. But some options can have significant security implications. For example, the acl and noacl options control whether access control lists (ACLs) are used on the filesystem; turning off ACLs by accident on the wrong filesystem risks exposing files that should not be accessible.

It turns out that turning off ACLs by accident is indeed something that can happen on Linux systems. Eric Biederman, who has been on a bit of a crusade to force changes to the new proposed mount API, has described how that can happen. In a simplified form, consider this set of actions:

Create a large scratch file and set it up as a loopback device with losetup.

Create an ext4 filesystem on the device.

Mount that device with the noacl option somewhere in the filesystem hierarchy.

In another spot, mount that same filesystem with the acl option.

The user who performed the second mount would naturally expect to get a filesystem with ACLs enabled — that behavior was explicitly requested, after all. But the kernel will, instead, silently apply the options used in the first mount to the second, resulting in an apparently successful mount with parameters other than those that were requested. Biederman's chief complaint is that the new API will behave in the same way; he has stated his intent to block the merging of that code until this issue is fixed.
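The silent override can be modeled in a few lines (a Python sketch, purely illustrative; the kernel's superblock handling is far more involved): the first mount of a device creates the in-kernel state, and later mounts silently reuse it, options and all.

```python
superblocks = {}   # device -> the options chosen at first mount

def mount(device, options):
    """Model of current mount() semantics: a second mount of the same
    device "succeeds" but silently inherits the first mount's options."""
    if device not in superblocks:
        superblocks[device] = options
    return superblocks[device]   # the options the mount actually gets
```

A caller asking for ACLs on an already-mounted device gets a successful return and a filesystem with ACLs disabled, with no indication that anything was ignored.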

The source of this problem is that, in the kernel, it's really only possible to mount a filesystem once. The kernel is able to create new mount points that look like independent mounts, but it's all a single mounted filesystem under the covers. That means that only a single set of mount options can apply. So, as Ted Ts'o explained, there aren't a whole lot of options for changing this behavior:

So if the file system has been mounted with one set of mount options, and you want to try to mount it with a conflicting set of mount options and you don't want it to silently ignore the mount options, the *only* thing we can today is to refuse the mount and return an error.

Some developers, including Biederman, are arguing that refusing the mount would indeed be better than ignoring the requested mount parameters. Andy Lutomirski said that this sort of multiple mount can go wrong in a number of ways and probably should not be allowed at all: "It seems to me that the current approach mostly involves crossing our fingers." There is, however, little prospect of changing how mount() works now, given the risk of breaking no end of administrative scripts.

That does leave open the question of whether the new API should allow this type of mount. Biederman feels strongly that incompatible shared mounts should be disallowed before the new API makes it into a kernel release, since it will become much harder to change afterward:

The fact that these things happen silently and you have to be on your toes to catch them is fundamentally a bug in the API. If the mount request had simply failed people would have noticed the issues much sooner and silently b0rkend configuration would not have propagated. As such I do not believe we should propagate this misfeature from the old API into the new API.

David Howells, the developer behind the new mount API, has stated that, since the current code does not break any existing user behavior, there is no urgent need to add restrictions. But he is looking into doing so anyway, adding options so that user space can specify whether no sharing should be allowed at all, whether it should only be allowed with the same mount parameters, or whether the current behavior should apply. Others have suggested a variant on the middle case, where the mount options would just have to be "compatible" with each other, not identical.

It turns out, though, that this limited sharing is not easy to implement either way. The core filesystem layer has no idea which mount options are compatible with each other, so there would have to be a new callback added to each filesystem implementation to answer that question. That answer doesn't just depend on the actual options; things like the security-module policy in force also have to be taken into account. It is a thorny problem, and any solution seems likely to be prone to errors. It is thus unsurprising that developers like Ts'o are asking whether it is worth the effort at all.

That is a question that has not been answered as of this writing. Assuming that Biederman doesn't back down, there will probably need to be some way of preventing shared mounts when the options are not compatible; that could come down to preventing (or at least giving an option to prevent) shared mounts entirely. Such an outcome will do little good, though, if there are enough users out there who depend on this type of shared mount. If the new API prevents them from getting their work done, they will simply stick with the old one, which will then become difficult to ever remove from the kernel.

Biederman is right in saying that, had this particular behavior never been allowed, there would not be users who are dependent on it now. But that ship sailed a long time ago. What's left now is a mess where developers are trying to figure out what the correct behavior should be while avoiding causing pain to system administrators; there is no obvious solution.


Hundreds (at least) of kernel bugs are fixed every month. Given the kernel's privileged position within the system, a relatively large portion of those bugs have security implications. Many bugs are relatively easily noticed once they are triggered; that leads to them being fixed. Some bugs, though, can be hard to detect, a result that can be worsened by the design of in-kernel APIs. A proposed change to how user-space accessors work will, hopefully, help to shine a light on one class of stealthy bugs.

Many system calls involve addresses passed from user space into the kernel; the kernel is then expected to read from or write to those addresses. As long as the calling process can legitimately access the addressed memory, all is well. Should user space pass an address pointing to data it should not be able to access — a pointer into kernel space, for example — bad things can happen.

The kernel protects itself against erroneous (or malicious) addresses from user space via a two-step mechanism. The first of these is the access_ok() macro:

int access_ok(type, address, size);

This function will return a nonzero value if an access of the given type (VERIFY_READ or VERIFY_WRITE) to size bytes of memory at address makes sense — is that region of memory in a part of the address range that user space should be accessing? On most architectures, its job is to filter out attempts to access memory that is in kernel space. If access_ok() returns zero, no attempt to dereference the given address should be made. Otherwise, once that test is passed, the second step is to use any of a number of primitives to actually copy memory between user and kernel space, using normal memory protections to prevent unauthorized access.
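The check itself amounts to simple range arithmetic; a Python rendering of the idea (illustrative only, using an assumed 47-bit user/kernel split as on four-level x86-64) might look like:

```python
TASK_SIZE = 1 << 47   # illustrative top of the user address range

def access_ok(addr, size):
    """Does [addr, addr + size) lie entirely within user space?
    Rejects kernel addresses (and, conceptually, ranges that wrap)."""
    end = addr + size
    return end >= addr and end <= TASK_SIZE
```

A buffer entirely below TASK_SIZE passes; one that starts in user space but strays past the boundary, or one that starts at a kernel address, is rejected before any dereference happens.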

While most of the interfaces provided inside the kernel for access to user-space memory combine those steps, there are some that deliberately separate them, usually as a way to optimize several accesses happening in a row. When those interfaces are used, it is possible for the developer to forget to call access_ok() in one or more paths, leading to a situation where the kernel will access kernel-space memory using an address controlled by user space — never a good idea. That results in vulnerabilities like CVE-2017-5123 or the recent bsg problems.

Many problems that cause the kernel to try to dereference a wild pointer can be flushed out by fuzzing. But, when the kernel's user-space access functions are asked to copy data to or from the wrong place, they simply return an EFAULT status that is silently passed back to the user-space caller. Most of the time, that is the right thing to do, since the most likely explanation is a bug in the user-space program. It may have asked to copy data from a portion of its address space that isn't mapped, for example, or to write to some read-only memory.

The same thing happens, though, if user space asks the kernel to copy data to or from a random kernel-space address. Normally, the access_ok() call will catch the problem and no attempt to copy is made. But if access_ok() isn't called, the kernel may attempt to access kernel space on behalf of the user. In the absence of a focused attack, a random kernel-space address has a high probability of pointing to memory with no mapping at all, on a 64-bit system at least. The resulting page fault gets turned into an EFAULT return that is indistinguishable from any other error.

If somebody is running a fuzzing program in user space, this EFAULT return will completely mask the fact that the kernel just tried to do something bad. So developers will remain unaware of the existence of the bug which, consequently, will not be fixed. Eventually somebody else will discover it; that somebody may not have any interest in seeing the hole closed.

This outcome is unfortunate because the kernel has all of the information it needs to know that a potentially severe security bug exists. With just a tiny number of exceptions, the user-space access functions will never be called with a pointer into kernel space. So if one of those functions generates a kernel-space page fault, something has gone wrong somewhere. It would make sense to try to draw attention to the problem so it can be investigated and fixed.

That is the conclusion reached by Jann Horn, resulting in this patch set for the x86 architecture. The objective is simple: if a user-space access function faults with a kernel-space address, and the call site has not been specially marked, a WARN() call will result. That will create a kernel oops and a traceback in the kernel log. That should attract attention in many settings, but it is especially likely to when fuzzers are being run, since they are on the lookout for just that kind of result.
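The heart of the change can be expressed in a few lines (a Python sketch of the logic, not the actual x86 fault-handling code; the names here are made up): a fault on a kernel-space address inside a user-access function becomes loud unless the call site was explicitly marked as expecting it.

```python
EFAULT = 14
TASK_SIZE = 1 << 47   # illustrative user/kernel boundary

def user_access_fault(fault_addr, site_whitelisted, warnings):
    """On a fault in a user-copy routine: a kernel-space address
    suggests a missing access_ok() check, so raise a warning."""
    if fault_addr >= TASK_SIZE and not site_whitelisted:
        warnings.append(fault_addr)   # stands in for WARN() + traceback
    return -EFAULT                    # user space still just sees EFAULT
```

User space sees exactly the same EFAULT as before; the only difference is the oops in the kernel log, which is precisely the kind of signal fuzzers watch for.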

The reaction to the patch set was uniformly positive; there were requests for various improvements, of course, but everybody seems to want to see this work proceed.

Kernel developers tend to be careful not to send too much information to the system log. At best, excessive chattiness can make it hard to see the messages that are actually important; at worst, it can be exploited by the user to overflow the log or as a general denial-of-service attack. But the kernel, as it is now, is masking some important information about severe bugs that it should be able to detect. That silence should soon come to an end; sometimes making a little noise is exactly the right thing to do.


Social networks are typically walled gardens; users of a service can interact with other users and their content, but cannot see or interact with data stored in competing services. Beyond that, though, these walled gardens have generally made it difficult or impossible to decide to switch to a competitor—all of the user's data is locked into a particular site. Over time, that has been changing to some extent, but a new project has the potential to make it straightforward to switch to a new service without losing everything. The Data Transfer Project (DTP) is a collaborative project between several internet heavyweights that wants to "create an open-source, service-to-service data portability platform".

The project has been around since 2017, but it had been flying under the radar until the Google Open Source Blog announced it in mid-July. Currently, Google is joined by Facebook, Microsoft, and Twitter in DTP, though contributions from others—both service providers and individuals—are welcome. In a nutshell, the idea is to allow complete portability of users' data:

The organizations involved with this project are developing tools that can convert any service's proprietary APIs to and from a small set of standardized data formats that can be used by anyone. This makes it possible to transfer data between any two providers using existing industry-standard infrastructure and authorization mechanisms, such as OAuth. So far, we have developed adapters for seven different service providers across five different types of consumer data; we think this demonstrates the viability of this approach to scale to a large number of use cases.

Instead of having each service provider implement mechanisms to export and import data to a number of other providers' systems, DTP aims to have providers simply do export/import to/from standardized data formats. That reduces the problem space for each provider, as each just needs to implement an "adapter" that converts to these standardized formats, which are called "data models". Instead of Facebook creating separate tools to talk to Google, Twitter, Microsoft, and others, it can implement adapters to handle translating between its proprietary formats and the data models; then any other provider can implement its own adapters and join the party.
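Structurally, the adapter scheme looks something like this Python sketch (with made-up provider names and fields; the real project's adapters are considerably richer): each provider converts between its native format and a shared data model, so N providers need N adapters rather than N-squared pairwise converters.

```python
def make_photo(title, url):
    """Canonical "data model" record for one vertical, here photos."""
    return {"title": title, "url": url}

adapters = {}   # provider name -> (export to model, import from model)

def register(name, export_fn, import_fn):
    adapters[name] = (export_fn, import_fn)

def transfer(src, dst, native_data):
    """Export from src into the shared model, then import into dst."""
    export_fn, _ = adapters[src]
    _, import_fn = adapters[dst]
    return import_fn(export_fn(native_data))
```

Two hypothetical services with different native formats can then exchange photos without knowing anything about each other's APIs, only about the shared model.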

Of course, there is a bit more to it than that. Service providers must get API keys from other providers they want to exchange data with. That does potentially allow larger services to deny upstarts the "right" to exchange data, but that is clearly not the intent. In the DTP project whitepaper [PDF], Google commits to only deny keys for services that do not meet its privacy and security guidelines, are not "providing a legitimate service to the user", or have an implementation that causes "unreasonable errors for users, or unreasonable processing requirements for Google". There is a fair amount of "wiggle room" in those guidelines, but the example use cases in the paper make it clear that keeping smaller services out is not part of the plan. The blog post puts it this way:

Our prototype already supports data transfer for several product verticals including: photos, mail, contacts, calendar, and tasks. These are enabled by existing, publicly available APIs from Google, Microsoft, Twitter, Flickr, Instagram, Remember the Milk, and Smugmug. Data portability makes it easy for consumers to try new services and use the ones that they like best. We're thrilled to help drive an initiative that incentivizes companies large and small to continue innovating across the internet.

Users will obviously need to authenticate to both sides of any transfer; that will be handled by authentication adapters at both ends. Most services are likely to use OAuth, but that is not a requirement. The paper also describes the security and privacy responsibilities of all participants (service providers, users, and the DTP system) at some length. These are aimed at ensuring that users' data is protected in flight, that the system minimizes the risk of malicious transfers, and that users are notified when transfers are taking place. Notably, a data transfer does not imply removing the data from the exporting provider; there is no provision in DTP for automated data deletion.
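
A hypothetical authentication adapter might look something like the following; the interface, class, and URL are made up for illustration and are not DTP's actual API:

```java
// Hypothetical sketch of a per-provider authentication adapter; the
// names and the URL are illustrative, not DTP's actual API.
interface AuthAdapter {
    // URL the user visits to grant access (for OAuth providers, the
    // authorization endpoint with the callback attached).
    String authorizationUrl(String callbackUrl);

    // Exchange the code delivered to the callback for an access token
    // that the transfer adapters can use on the user's behalf.
    String exchangeCode(String authorizationCode);
}

// Stand-in implementation, for illustration only.
class FakeOAuthAdapter implements AuthAdapter {
    public String authorizationUrl(String callbackUrl) {
        return "https://auth.example.com/authorize?redirect_uri=" + callbackUrl;
    }
    public String exchangeCode(String authorizationCode) {
        return "token-" + authorizationCode;
    }
}
```

Because the interface only promises "a way to obtain credentials", a provider that uses something other than OAuth can still plug in its own adapter.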

One of the advantages for users, beyond simply being able to get their hands on their own data, is the reduction in bandwidth use that will come because the service providers will directly make the transfer. That is especially important in places where bandwidth is limited or metered—a Google+ user could, for example, export their photos to Facebook without paying the cost of multi-megabyte (or gigabyte) transfers. The same goes for backups made to online cloud-storage services, though that is not really new since some service providers already have ways to directly store user data backups elsewhere in the cloud. For local backup, though, the bandwidth cost will have to be paid, of course.

The use cases cited in the paper paint a rosy picture of what DTP can enable for users. A user may discover a photo-printing service that they want to use, but have their photos stored on some social-media platform; the printing service could offer DTP import functionality. Or a service whose customers want to rescue their data from another service that is going out of business could implement an export adapter using the failing service's API. A user who didn't like an update to their music service's privacy policy could export their playlists to some other platform. And so on.

It is an ambitious goal—and one that is certainly welcome—but there are still some potential flies in the ointment. One is that the data models may allow for features that some services do not implement, which means that a round-trip export from service A to service B and back to service A may not restore the data to its original form. As the paper puts it:

Transferring data using canonical formats will not necessarily mitigate problems such as formatting limitations or inconsistent feature support. However, our approach illustrates that a substantial degree of industry-wide data portability can be achieved without dramatic changes to existing products or authorization mechanisms, while still providing a flexible enough platform to adapt and expand to support new formats and use cases brought by future innovation. Additionally, the DTP has been developed to increase participation by motivating Providers to build both export and import functionality into their services.

Obviously, the success of DTP is going to hinge on provider participation, which, in turn, probably depends at least in part on whether users take advantage of the functionality. If users start transferring data and find that its fidelity is not preserved, they may look for other ways to accomplish their tasks or abandon DTP entirely. Providers will need to stay on top of user requests and consider import/export as they add new features. Like social media itself, DTP will live or die on the "network effect": each provider that joins makes the system more valuable, drawing in more providers and users in turn.

There is more to DTP than presented in this overview, of course. Various data models will be grouped together into "verticals" that will cover the kinds of data that these services store, such as verticals for email, photos, or contacts. The intent is to either reuse existing standards for data interchange or to collaboratively create new ones where needed.
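
The grouping of data models into verticals could be sketched as follows; the class and the exact model names are assumptions for illustration, not DTP's definitions:

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of grouping data models into "verticals"; the
// groupings shown are examples, not DTP's actual definitions.
class Verticals {
    static final Map<String, Set<String>> MODELS = Map.of(
            "PHOTOS", Set.of("PhotoAlbum", "Photo"),
            "CALENDAR", Set.of("Calendar", "Event"),
            "CONTACTS", Set.of("Contact"));

    // A provider's adapter for a vertical should cover all of the
    // vertical's data models to avoid lossy transfers.
    static boolean handlesVertical(String vertical, Set<String> supported) {
        return supported.containsAll(MODELS.getOrDefault(vertical, Set.of()));
    }
}
```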

There is also a plumbing layer that is handled by "task management" libraries. They are described in the paper this way:

The Task Management Libraries handle background tasks, such as calls between the two relevant Adapters, secure data storage, retry logic, rate limiting, pagination management, failure handling, and individual notifications. The DTP has developed a collection of Task Management Libraries as a reference implementation for how to utilize the Adapters to transfer data between two Providers. If preferred, Providers can choose to write their own implementation of the Task Management Libraries that utilize the Data Models and Adapters of the DTP.
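
As a rough illustration of one of those background duties, a retry helper of the kind such a library provides might look like this; the sketch is a generic pattern, not DTP's actual implementation:

```java
import java.util.function.Supplier;

// Generic retry sketch of the kind a task-management library might
// provide; this is illustrative, not DTP's actual code.
class Retry {
    static <T> T withRetries(Supplier<T> call, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                // A real library would also back off between attempts
                // and honor the remote service's rate limits here.
            }
        }
        throw last;
    }
}
```

A transfer task would wrap each adapter call in something like this so that a transient API failure on one page of results does not abort a multi-gigabyte transfer.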

There are also various deployment models described in the paper. Providers can host the DTP software themselves, which seems likely to be the most popular option; an independent third party might host a DTP portal of sorts for some subset of providers; or users can even run the code themselves, which gives them more control over the experience at the expense of having to manage it all.

As with any open-source project, DTP has a code repository; the code is released under the Apache v2 license. The code is written in Java and a number of dependencies are required in order to get started; those are outlined in the Developer Guide. One option to try it all out is to run the demo server locally using Docker. The project also has a dtp-discuss Google group, but that has been pretty quiet so far. Perhaps the Slack channel mentioned in the Developer Guide is more active. There is also an Integration Guide for providers that are looking to add DTP functionality to their site.

All in all, it looks like an interesting effort. Google has been active in the data-portability space for many years, going back to its Data Liberation Front, founded in 2007, so it is good to see other large providers joining in to work on DTP. Whether many others, large and small, follow suit, and whether users actually take advantage of this data portability, remains to be seen.

Comments (6 posted)