Licenses are an integral part of the software world. Free and open-source software uses its licenses to outline the ways in which the code can be used and distributed—proprietary licenses do the same, but generally in far more restrictive ways. A novel argument in a court case between the US Department of Justice (DoJ) and Apple seeks to turn this common practice into a morass. Essentially, the DoJ wants to compel Apple to bypass the iPhone lockscreen on a particular phone because the company legally owns the software running on the device.

The case centers on the government-confiscated iPhone of a defendant in a criminal case. The defendant claims to have forgotten the passcode to get into the device, so the DoJ turned to Apple, as it has in at least three other cases. It seems that the department expected the company to roll over and rat out its customer as it had before, but Apple, to its credit, is fighting back.

The judge in the case is clearly skeptical of the DoJ's claims that the "All Writs Act" from 1789 is being properly used to compel Apple to unlock the phone. He asked Apple whether unlocking the phone was an unreasonable burden. The company responded [PDF] that it would be for a few different reasons. It is technically feasible to extract some information from the phone in question, the company said, but not for current and future iPhones due to changes in iOS.

Unlocking this particular phone would likely result in more government requests of this sort, Apple said. In addition, while the time to do the unlocking would not be a substantial burden, Apple employees and lawyers would also need to explain to the court what had been done and to testify that the information had been extracted from the phone in question. Beyond all that, though, there is a burden on the company's reputation:

Forcing Apple to extract data in this case, absent clear legal authority to do so, could threaten the trust between Apple and its customers and substantially tarnish the Apple brand. This reputational harm could have a longer term economic impact beyond the mere cost of performing the single extraction at issue.

The DoJ shot back with a response [PDF] to Apple's claims. It predictably downplayed Apple's claims that unlocking the phone would be burdensome and noted the three other times that Apple has followed an All Writs order to unlock a phone in the past. But the DoJ goes further than that.

When the judge asked Apple about the DoJ request, he specifically noted a similar case but pointed out that this situation was different since Apple doesn't own the iPhone in question. But the DoJ is trying to route around that by asserting that, while Apple doesn't own the phone, it does own the software that runs on it:

Apple wrote and owns the software that runs the phone, and this software is thwarting the execution of the warrant. Apple's software licensing agreement specifies that iOS 7 software is "licensed, not sold" and that users are merely granted "a limited non-exclusive license to use the iOS Software." [...] Apple cannot reap the legal benefits of licensing its software in this manner and then later disclaim any ownership or obligation to assist law enforcement when that same software plays a critical role in thwarting execution of a search warrant.

If the judge finds that argument compelling, it opens up a huge can of worms. As Cory Doctorow put it:

To my knowledge, this is an entirely novel argument, but as I say, it has far-reaching consequences. Virtually every commercial software vendor licenses its products, rather than selling them. If the DoJ establishes the precedent that a [company's] continued ownership interest in a product after it is sold obliges the company to act as agents of the state, this could ripple out to cars and pacemakers, voting machines and tea-kettles, thermostats and CCTVs and door locks and every other device with embedded software.

The judge is clearly critical of many of the DoJ's arguments and has been for a number of years, so he may not buy the reasoning at all. But some other judge may well feel differently at some point. One would hope that reason would prevail, but we have seen many court cases over the years that seem to defy logic.

Asking the owner of the phone for the passcode would also seem to be an avenue worth exploring, but the DoJ doesn't want to go there. There are substantial Fifth Amendment (the right against self-incrimination) questions in trying to compel a defendant to disclose passwords and the like. The DoJ clearly doesn't want to risk getting the phone data but being unable to use it at trial:

Compelled decryption raises significant Fifth Amendment issues and creates risk that the fruits of the compelled decryption could be suppressed. [...] The government should not be required to pursue a path for obtaining evidence that might lead to suppression.

An interesting question arises for those who are shipping devices with mostly open-source software. Given that the device maker generally doesn't own the software in question, who can be compelled to work with the government to decrypt data on an Android device that uses, say, ext4 encryption? There is an owner (or owners) for the free software, but it is hard to see what they could be compelled to do.

Free software has some other advantages, of course. The courts can look at the code and verify that there is no back door, for example. Also, the argument about using licenses to profit without being willing to work with legal authorities doesn't really apply in quite the same fashion.

All of this posturing may be rendered moot by device makers that don't leave themselves back doors to access the data on the device. If only the owner can unlock the data on a device, all the compulsion in the world won't be useful—unless it is done to the user, which is dicey on constitutional grounds. There is the worry, of course, that device makers could be compelled to add back doors to their products, which is something certain elements of governments have been hard at work on. It will be interesting to see where all of this leads.

Most free-software conferences do an excellent job when it comes to providing a program of informative talks from project representatives and developers. Far fewer succeed at attracting many sessions from end users exploring novel or otherwise interesting uses of free software, but GStreamer Conference is among the events that do. The 2015 edition of the conference was no exception. Among the talks about applications of GStreamer, the standouts included a session on acoustic location triangulation and a talk about streaming zoomable ultra-high-definition video through a clever use of tiling.

Audio triangulation

Jan Schmidt spoke about location determination with GStreamer. It should be noted, of course, that Schmidt is a GStreamer maintainer, but the project he covered in this session was entirely an extra-curricular exercise. In fact, he started the talk by explaining that the idea struck him just a few months ago, and he only began working on it after his talk proposal was accepted.

The idea is straightforward. If multiple microphones are placed in known positions in a room, a program can calculate the position that a sound originates from by measuring the relative time at which each microphone records the sound—after adjusting for any processing and network-transmission delays, that is. The GStreamer 1.6 release added high-precision network clock synchronization and the ability to report network statistics, Schmidt said, so it occurred to him that a network of GStreamer client applications might now be usable as an acoustic triangulation system.

The specifics are important, of course. The speed of sound is 340.29 m/s, or about 34 cm per millisecond. GStreamer 1.6's transmission overhead on a WiFi network is about 2 ms, which corresponds to roughly 68 cm, so the system should be accurate to under one meter. At the very least, he said, that would be good enough for Internet of Things (IoT) usage, such as letting the user speak voice commands and having appliances determine by proximity which device (say, a lamp) the user is addressing.
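
For concreteness, here is a back-of-the-envelope version of those numbers along with a toy least-squares position solver. This is purely an illustration, not Schmidt's code; the microphone layout and source position are invented:

    # Back-of-the-envelope check of the numbers above, plus a toy
    # time-difference-of-arrival (TDOA) solver.  Illustrative only.
    import numpy as np
    from scipy.optimize import least_squares

    SPEED_OF_SOUND = 340.29            # m/s
    print(SPEED_OF_SOUND / 1000)       # ~0.34 m (34 cm) per millisecond
    print(0.002 * SPEED_OF_SOUND)      # 2 ms of jitter -> ~0.68 m of error

    # Four microphones at known positions in a 4 m x 3 m room (invented).
    mics = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [4.0, 3.0]])
    source = np.array([1.0, 2.0])      # the position we hope to recover
    dists = np.linalg.norm(mics - source, axis=1)
    tdoa = (dists - dists[0]) / SPEED_OF_SOUND   # arrival times vs. mic 0

    def residuals(pos):
        d = np.linalg.norm(mics - pos, axis=1)
        return (d - d[0]) / SPEED_OF_SOUND - tdoa

    print(least_squares(residuals, x0=[2.0, 1.5]).x)    # ~[1.0, 2.0]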

To test the idea, Schmidt adapted an earlier personal project: Aurena, his GStreamer-based whole-house audio distribution system. Aurena used the Real Time Streaming Protocol (RTSP) to play audio from a server simultaneously on multiple client devices. But to support sending microphone audio from the clients back to the server, he had to write an RTSP recording element. The server handles clock synchronization with the clients and sets up the audio-processing pipeline.
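
Aurena itself is a larger system, but the GStreamer 1.6 network-clock facility it builds on can be exercised in a few lines of Python. This is a minimal sketch; the host name, port, and pipeline are placeholders, not Aurena's own:

    # A client slaving its pipeline to a GStreamer network clock, as
    # Aurena-style clients do.  Host, port, and pipeline are placeholders.
    import gi
    gi.require_version('Gst', '1.0')
    gi.require_version('GstNet', '1.0')
    from gi.repository import Gst, GstNet

    Gst.init(None)
    # Synchronize against the server's GstNetTimeProvider.
    clock = GstNet.NetClientClock.new('netclock', 'server.example.net', 8554, 0)
    clock.wait_for_sync(Gst.CLOCK_TIME_NONE)   # block until synchronized
    pipeline = Gst.parse_launch('autoaudiosrc ! audioconvert ! autoaudiosink')
    pipeline.use_clock(clock)    # all elements now follow the network clock
    pipeline.set_state(Gst.State.PLAYING)
    # A real client would now run a GLib main loop to keep the pipeline alive.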

Since the devices he had on hand to test with (mostly Android phones) varied as to whether they recorded mono or stereo sound, some processing was required on the server to normalize the input. But the "magic correlation step," he said, had already been solved by other people. To perform the triangulation, he used a package called ManyEars that was developed by robotics researchers.
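
GStreamer's stock interleave element handles that kind of channel assembly. Here is a minimal sketch (not Schmidt's code) that merges two mono test sources into one multi-channel file; the real setup feeds eight client streams into the layout that ManyEars expects:

    # Merging separate mono streams into a single multi-channel stream
    # with GStreamer's stock "interleave" element.  Two synthetic sources
    # shown for brevity; Schmidt's system uses eight client microphones.
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.parse_launch(
        'interleave name=i ! audioconvert ! wavenc '
        '! filesink location=multi.wav '
        'audiotestsrc num-buffers=500 freq=440 ! audioconvert ! i. '
        'audiotestsrc num-buffers=500 freq=880 ! audioconvert ! i.')
    pipeline.set_state(Gst.State.PLAYING)
    # Wait for end-of-stream, then shut down cleanly.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)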

It was at that stage, however, that he began to run into difficulty. ManyEars itself was not an issue, although it is designed to work with eight microphones precisely placed at the vertices of a cube. GStreamer had no problem combining the client audio streams into the eight-channel signal expected by ManyEars. And it is certainly possible to transform the geometry of a different microphone arrangement (at least in non-pathological cases) and map the results produced by ManyEars onto another room shape, if one is willing to do the math. But, as it turns out, Android's audio layer thwarts the plan by introducing random delays and latency that GStreamer, at present, cannot adjust for. In his tests, the Android-introduced delays varied between 30 and 100 ms, and were neither predictable nor controllable. Furthermore, some Android devices appear to randomly drop audio packets before they are delivered to the GStreamer client application.

Schmidt decided to introduce a calibration step in an attempt to work around the random-delays problem. The tool, which he demonstrated with multiple Android phones set up around the session room, plays an audio tone from each device, in turn, and records the output on all microphones in order to measure the delay. For now, he is not sure if this approach will pan out, since the Android audio stack's delay factor is so unpredictable that it may not be possible to know for certain that the test sound was played on time. Even a few milliseconds of uncertainty would be enough to destroy the accuracy of the positions calculated.
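
The measurement underneath such a calibration pass can be as simple as cross-correlating the known test tone against each device's recording. A synthetic sketch (not Schmidt's code):

    # Estimating a device's playback-to-capture delay by cross-correlating
    # the known calibration tone with the microphone recording.  The
    # signal here is synthetic; real input would come from the phones.
    import numpy as np

    RATE = 48000                                  # samples per second
    t = np.arange(RATE) / RATE
    tone = np.sin(2 * np.pi * 1000 * t)           # 1 kHz calibration tone

    true_delay = 1234                             # unknown in practice
    recording = np.concatenate([np.zeros(true_delay), tone])
    recording += np.random.normal(0, 0.1, recording.size)   # mic noise

    corr = np.correlate(recording, tone, mode='valid')
    delay = int(np.argmax(corr))
    print(delay, delay / RATE * 1000, 'ms')       # ~1234 samples, ~25.7 ms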

That said, the general approach may still be useful for non-Android devices, and there was considerable interest from the audience in seeing where the project heads next. In an era when more and more "smart" household devices start listening to us, perhaps GStreamer will allow developers to do something useful with all the microphones—apart from relaying information through the cloud to advertisers and service providers.

Tiled streaming

Arjen Veenhuizen from the Dutch research institute TNO presented the session about tiled video streaming. The root problem that his development team is out to solve is how to cope with the disparity between the ever-higher resolutions offered by video content and the limited capabilities of mobile devices—which make up a sizable percentage of the screens to which video is delivered.

TNO has been working on a solution that splits a source video stream into a set of tiled sub-streams, any one of which can be delivered separately to a client device. The example that Veenhuizen gave was of a live sporting event like a track meet; viewers are likely not to want to see the stadium-wide feed, but would prefer instead to watch a high-quality feed of just one portion of the field. That way, each user can get HD-quality video, but have the freedom to zoom out or in on a different portion of the source stream at will.

The solution that TNO has developed (which it is testing with an arena in Amsterdam) uses GStreamer to stitch together several camera images into a seamless 6K video stream, then divide the total camera area into multiple "region of interest" (ROI) streams. As it is currently deployed, each camera at the arena is attached to an H.264 encoder; those streams produce 600-800 Mbps (as compared to 3 Gbps for the raw camera video).
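
TNO's own pipelines were not shown, but the per-tile cropping step can be expressed with stock GStreamer elements. A toy single-ROI version, with invented crop geometry and encoder settings:

    # A toy stand-in for one tiling worker: cut a region of interest out
    # of a larger frame with the stock videocrop element and re-encode it.
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.parse_launch(
        'videotestsrc num-buffers=300 '
        '! video/x-raw,width=1920,height=1080,framerate=30/1 '
        '! videocrop left=480 right=480 top=270 bottom=270 '
        '! x264enc tune=zerolatency ! mp4mux ! filesink location=roi.mp4')
    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)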

The camera streams are sent over a dedicated Real Time Messaging Protocol (RTMP) channel on a fiber link to the distribution setup running on a public cloud provider. There, the streams are stitched into the "overview" stream that shows the entire arena, and each ROI stream is stitched together from the relevant camera streams. This step is performed in parallel by a pool of tiling processes (managed by a master process). In addition to tiling the video stream spatially, he noted, the streams are also split up temporally into three-second chunks. Client machines, such as phones and tablets, first tune in to the overview stream, then the user can click to select a sub-stream.
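
The three-second temporal chunking maps naturally onto GStreamer 1.6's splitmuxsink element, which takes its size threshold in nanoseconds. Again, this is a sketch, not TNO's code:

    # Splitting encoded output into three-second fragments.  splitmuxsink
    # cuts at keyframes, so the encoder is told to emit one every 3 s
    # (90 frames at 30 fps).  File naming and settings are invented.
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.parse_launch(
        'videotestsrc num-buffers=900 ! video/x-raw,framerate=30/1 '
        '! x264enc key-int-max=90 '
        '! splitmuxsink location=chunk-%05d.mp4 max-size-time=3000000000')
    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)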

The number of tiles varies; Veenhuizen said the team has worked with anywhere from 21 to 90. At the moment, the system is dominated by CPU-bound processes; the team uses 32-processor cloud instances—which is the maximum available. It would be better to use GPUs for the stitching and tiling, he said, but so far no cloud provider offers such a service.

He reported that the project has shown GStreamer to be "extremely stable" on such a large-scale project—and seems to suffer no performance hit when being run inside Docker containers. Streams will run for days at a time uninterrupted, producing multiple terabytes of video. In addition, GStreamer's scaling and cropping operations have proven to be high-performance; on average the entire processing pipeline only introduces four to five seconds of latency.

At the moment, though, the team is working on implementing a GStreamer-based client application for the mobile devices, which is proving tricky. The requirements are steep even on the client side: users want fast switching between different ROI streams, interactivity, and frame-accurate synchronization between the available streams.

Looking forward, he said the team hopes to get the video format and streaming protocol standardized in MPEG DASH (Dynamic Adaptive Streaming over HTTP). Eventually, they also want to get the process working on the next generation of video codecs. The primary target is H.265, although that codec is renowned for being substantially slower to encode than H.264, which presents a practical problem for a project already maxing out the machines available from a cloud provider.

At first glance, it might seem like Schmidt's audio-triangulation project and TNO's ultra-high-definition video streaming project have little in common. One is a single-handed, hobbyist effort, while the other is a large-scale cloud-computing–based service. But it is interesting to note that both have to deal with the realities of realtime network streaming, and both are running into problems with GStreamer application development on mobile device platforms. No doubt the GStreamer developers picked up on the issues of importance as well—mobile support was, in fact, one of the "hot topics" raised by Tim-Philipp Müller in his opening state-of-the-project session. And gathering users with diverse use cases in the same room as the developers is always a wise first step toward solving problems.

[The author would like to thank the Linux Foundation for travel assistance to attend GStreamer Conference.]

On the first day of the Tokyo OpenStack Summit, a potentially contentious topic was discussed at the Design Summit: should OpenStack adopt a single distributed lock manager and, if so, which should it be? The cross-project session was broken up into two parts, the first of which targeted the first question; the second would then look at the implications of that decision. The discussion and decision provided an interesting look into some of the inner workings of the project.

Hot on the heels of the October 15 release of OpenStack Liberty, the developers gathered in Tokyo October 27–30 to determine what would be in the next release, Mitaka, which is due in April 2016. But the summit is also an opportunity to look at longer-term changes that will come in releases over the next year or two. Mike Perez, who is the cross-project developer coordinator at the OpenStack Foundation, moderated the two sessions that, apparently, were not quite as contentious as perhaps was feared.

The overall problem has been summarized in a document: "Chronicles of a distributed lock manager". There is a need for various OpenStack components to perform some operations atomically, which generally means some kind of locking solution is required. Because OpenStack is a distributed system, though, a distributed lock manager (DLM) is needed. Currently, each sub-project has dealt with the problem on its own, typically by storing a lock in its database.

The proliferation of these ad hoc solutions is becoming a problem for the overall project. In addition, there are other sub-projects that would like to have some kind of locking, but would rather not create their own. That led to the idea of choosing a DLM to ship with OpenStack that sub-projects could rely upon being present. That immediately leads to a second question: which?

There are various options for a DLM that are laid out in the Chronicles document. As might be guessed, each has its strengths and weaknesses. The discussion mostly focused on three: Apache ZooKeeper, etcd, and Consul. Each brings additional features that will be of use to some sub-projects, such as leader election and service discovery.

There was some discussion of various sub-projects and their requirements, such as for the Cinder block storage component, the Ironic bare-metal provisioning handler, and the Heat orchestration system. There were obvious parallels between each project's needs, with many needing service discovery and leader election as well as shared locks. The Chronicles document looks at even more of the sub-projects; there were a few more added to the Etherpad notes from the sessions.

One of the main questions is whether operators of OpenStack clouds would "vomit" if they were required to install a specific DLM. An informal straw poll of those in the room found that each of the major options had some opposition. While ZooKeeper has the most features, there were a number of concerns around it, largely because of its implementation language: Java. There are operators who do not want to add the Java Virtual Machine (JVM) into their operations, so the decision comes down to a "Java vs. non-Java" question (both etcd and Consul are written in Go).

But fair locks (ones that prevent starvation) can only be implemented with ZooKeeper, so there was a question about whether that feature was needed. So far, at least, there are no sub-projects that require fair locking, but it certainly seems like something that may be needed down the road. Restricting the project to a solution that cannot provide fair locking struck some as short-sighted. Others noted that there would be a chance to re-address the question in six months, since only one or two projects (likely Cinder and, possibly, Ironic) would have switched to anything new.
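
The fairness comes from ZooKeeper's sequential ephemeral znodes, which queue waiting clients in strict FIFO order. The standard lock recipe in the kazoo Python library shows the shape of it; the connection string and lock path here are invented:

    # A fair (FIFO-ordered) distributed lock on ZooKeeper via kazoo's
    # lock recipe.  Waiters line up as sequential ephemeral znodes, so no
    # client can be starved.  Host and path are invented for the example.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk1.example.net:2181')
    zk.start()

    lock = zk.Lock('/openstack/locks/volume-1234', identifier='cinder-host-a')
    with lock:      # blocks until this client is first in the queue
        pass        # ... perform the atomic operation ...

    zk.stop()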

There was a suggestion that instead of choosing one DLM, the project could adopt an abstraction layer, perhaps one based on the optional OpenStack Tooz library. That would allow those who wanted a different DLM to run it with a driver to present the common API. There was a mixed reaction to that idea as some clearly felt that an opinionated choice should be made. OpenStack Foundation Director of Engineering Thierry Carrez said that if one DLM was picked, the overall sense of the room seemed to be for ZooKeeper.
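
Tooz already presents that kind of driver-neutral interface: the backend is selected by URL and the locking calls are the same regardless of which driver sits underneath. A minimal sketch, with an invented member ID and lock name:

    # What the Tooz abstraction looks like to a consumer: swap the URL
    # (zookeeper://, redis://, ...) and the code below stays the same.
    # Member ID and lock name are invented for the example.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://zk1.example.net:2181', b'cinder-host-a')
    coordinator.start()

    lock = coordinator.get_lock(b'volume-1234')
    with lock:
        pass        # ... the critical section ...

    coordinator.stop()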

But running ZooKeeper on the JVM from the OpenJDK project was of concern to some. Most run ZooKeeper with the Oracle JVM, so there may be problems that occur with OpenJDK—problems that might not be addressed quickly by the ZooKeeper upstream. Running the Oracle JVM is a non-starter for some operators, however. In addition, ZooKeeper isn't really a DLM, but is a toolkit for building a DLM, one attendee noted, which may make it hard for others to replicate the DLM that was built and tested by OpenStack.

On the other hand, though, maintaining an abstraction layer for each DLM choice would be a burden on the project. In addition, there are going to be quirks for each one and it would be better to design around the quirks of one, rather than three (or more). But others noted that OpenStack would likely only build one driver (for ZooKeeper) and that others would need to fill in the abstraction layer for DLMs of interest to them.

There is an established pattern in OpenStack of having abstraction layers and being inclusive, one attendee said. But there are major advantages to having at least one DLM available, rather than zero as there is today, so it makes sense to focus on getting that first one into place.

Carrez said that he had come into the session thinking that a choice for a single DLM should be made but, at the end, he was convinced that an abstraction layer was the right approach. That seemed to be agreeable to most in the room (who represented multiple sub-projects and project constituencies). It was also agreed that the default would be ZooKeeper.

After a short break, with some participants having to head off to other sessions, the implications of the decision to have an abstraction layer were discussed. First off, there were some thoughts presented about how components like Ironic could be upgraded in place from their existing database locks to something DLM-based, with minimal downtime. The basic problem is in how to migrate an existing lock from the database to the new scheme without losing track of it during the upgrade phase. Ironic developers seemed confident they had an approach that would work.
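
The details of Ironic's plan were not presented, but one conceivable shape for such a migration is a transitional period in which upgraded services hold both the legacy database lock and the new DLM lock, so that old and new nodes never race. Purely as an illustration, with hypothetical db and coordinator objects:

    # One conceivable rolling-upgrade strategy (illustrative only, not
    # necessarily what the Ironic developers have in mind).  Old nodes
    # still honor the database lock; upgraded nodes take both locks, so
    # the two generations never race.  Once every node runs the new code,
    # the database half can be dropped.  "db" stands in for the legacy
    # lock store; "coordinator" is a Tooz-style coordinator.
    from contextlib import contextmanager

    @contextmanager
    def transitional_lock(db, coordinator, name):
        db_lock = db.acquire_lock(name)        # hypothetical legacy API
        try:
            with coordinator.get_lock(name):   # the new DLM lock
                yield
        finally:
            db.release_lock(db_lock)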

Using Tooz as the DLM abstraction layer seemed the obvious approach, but there are some problems with the existing drivers for Tooz. For example, the database driver can't actually provide what the Tooz API promises, so it needs to be removed. A SQL database cannot handle some of the DLM failure modes, so it would look like it was providing DLM functionality when it actually was not. Similarly, the interprocess communication (IPC) driver may not be able to faithfully implement the API.

There is a question of how to decide which drivers will be accepted into Tooz. The concern is that some DLMs might have drivers written even though the underlying DLM cannot truly fulfill the requirements in a scalable, production-ready fashion. They might be fine for testing or for small deployments (e.g. single node), but not be ready for use in large-scale installations. Having a driver included in Tooz would be taken as an indication that operators can deploy using that DLM, which is an implication the project wants to avoid making for backends that are not ready.

In the end, the "production ready" criterion will be used to determine which drivers are allowed in, even though that term is somewhat amorphous. It was agreed that there would be a discussion with those who develop alternate DLM drivers as part of the acceptance process to determine whether the DLM is truly meant for large-scale deployments.

The meeting broke up with a solid conclusion and one that seems rather different than the sense of the room early on. As with other OpenStack components, the DLM piece will be handled with an abstraction layer that allows for multiple choices underneath. Like other OpenStack plugins and components, a candidate will need to pass all of the tests and have at least two maintainers to handle its care and feeding before it can be considered for inclusion. For Tooz drivers, though, the production-oriented question will need to be discussed as well.

All of that means that OpenStack sub-projects will be able to have a hard dependency on the presence of a DLM, which was, essentially, the goal set out in the Chronicles. Given the contentious nature of choosing a single one, however, it should perhaps not be a surprise that the project opted for the inclusive choice. That is very much in keeping with the OpenStack way and part of what has led to its success, as one participant noted.

[I would like to thank the OpenStack Foundation for travel assistance to Tokyo for the summit.]
