We live in an increasingly software-defined world, a trend that has both good and bad aspects. The recent revelation that Volkswagen has been selling cars explicitly built to defeat emissions tests highlights one of the bad ones: software control makes the incorporation (and hiding) of antifeatures easy. We are, unfortunately, going to see many other incidents like this one, even though we have long had a vision of what at least a partial solution to this problem would look like.

Cars, at this point, can be thought of as a rolling network of computers with some interesting peripheral devices, some of which may involve internal combustion technology. The details of an engine's operation have been under software control for a long time, and replacement ROMs changing a car's performance characteristics have been commonplace for nearly as long. Modern "trusted execution" technology makes the creation of such ROMs more difficult, but that turns out not to be an obstacle if the company wanting to subvert an engine's control software is the manufacturer itself.

Volkswagen's hack must have been easy to implement: one could, for example, have the engine-control software apply a different set of parameters when a connection to the on-board diagnostic port is detected. No need for the attachment of a separate "defeat device" (as the press seems to like to call it) and no need for an elaborate company-wide conspiracy. A single commit by a single engineer at the behest of a single manager would suffice. In retrospect, the surprising part of this story is not that somebody at Volkswagen gave in to the temptation to engage in a bit of benchmark cheating; the surprise is that far more incidents of this nature have not yet come to light.
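To make the "single commit" point concrete: the kind of check involved need not be elaborate. The sketch below is purely hypothetical; the function names, inputs, and conditions are invented for illustration and have nothing to do with Volkswagen's actual code.

```python
# Hypothetical sketch of how a software "defeat device" could fit into
# control logic.  All names and conditions here are invented; the point
# is only that such a switch can be a handful of lines.

def select_engine_parameters(obd_port_connected, steering_angle_variance):
    """Pick a calibration table based on whether the car appears to be
    on a test dynamometer: diagnostic port in use, wheels turning, but
    no steering input at all."""
    if obd_port_connected and steering_angle_variance == 0:
        return "low-emissions-test-mode"   # clean, low-power calibration
    return "normal-driving-mode"           # dirtier, higher-performance map
```

A patch of this size is trivially easy to hide in a large proprietary code base, which is exactly the problem.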

The consequences of this cheating are severe. Emissions testing is a key part of a strategy that has significantly improved air quality in American cities over the last several decades. Subverting that testing means more poison in the air, more health problems, and more environmental degradation. It is a criminal act on a massive scale. The consequences for Volkswagen are likely to be severe — but probably not severe enough.

As many others have pointed out, VW was certainly helped by the ease with which antifeatures can be hidden in software shipped to others. When we get into a car, we trust our lives and health to a large body of proprietary control software; the source is unavailable, so we cannot inspect it for bugs, vulnerabilities, or explicit evil. Legal regimes in much of the world make a crime out of reverse-engineering this software, so we cannot try to figure out how it operates even without the source. Digital rights management (DRM) mechanisms built into the hardware make that reverse engineering even harder; this DRM may even be mandated by government agencies fearful of individuals modifying their own engine-control software.

Those in favor of such DRM requirements should bear in mind that, by some counts, VW has shipped over 11 million cars with corrupt engine-control software. DRM has, in the end, enabled the crime it was meant to prevent, and on a far wider scale than would have otherwise been possible.

Cars are not the only vehicle (so to speak) for software that can hide user-hostile antifeatures. In the US, the Federal Communications Commission is currently pondering changes that would make it far harder to put free software onto WiFi devices. One need not even consider the damage such rules may do to free-software development, which has been the primary source of innovation and improvement in this area, to see where such rules could lead. We cannot expect corporations, many of which show levels of restraint inferior to those of a typical toddler, to resist the temptation to put spyware or malware into their widely distributed devices sitting in privileged positions on thousands of networks. We cannot really even trust them to adhere to the spectrum rules that are the motivation for the proposed restrictions; VW's lack of respect for emissions rules has made that clear.

Similar problems exist with voting machines, Internet-connected appliances, phone handsets, fitness monitors, set-top boxes, and more. Each of these devices is, at a minimum, in a position to spy on us. Keeping governmental fingers out of these devices is a challenge in its own right, but companies will often find a strong incentive to play games of their own. Companies that are struggling, or even those that fear a downturn in the next quarter's numbers, will often give in to that incentive; when all it takes is an easily hidden patch, why not?

This will not be the first time that somebody points out that it is hard to see a solution that doesn't involve making those patches harder to hide. That, of course, means moving toward something that looks a lot like free software. If VW's engine-control software were open (with reproducible builds so that the software running in a specific car could be verified), it would have been far harder for the company to get away with violating the rules for as long as it did. Source availability is far from a guarantee that the code will be reviewed or that any reviewers will actually find deliberately introduced antifeatures, but it improves the odds considerably. Many a company might find the backbone to resist temptation if it knew that its code would be reviewed by sharp-eyed outsiders. Said companies might just find the wherewithal to clean up the code and fix some of their bugs as well.

A free-software mandate for safety-critical (and privacy-critical) software seems unlikely to happen anytime soon, alas. Decriminalizing research into how these systems operate might be a more achievable goal, but there are challenges there too; the Electronic Frontier Foundation has run into significant opposition in its efforts to get a ruling that investigating automotive software is not a violation of the anti-circumvention provisions of the US Digital Millennium Copyright Act, for example. Hidden, proprietary software gives a lot of power to those who control it; they will not give it up willingly. As a result, we can, unfortunately, expect to continue to be subjected to surveillance and criminal behavior from the devices that we think we own. We can't say we weren't warned.


Libinput is a library that provides shared input device (e.g. mouse, keyboard, touchpad) handling for Wayland compositors, as well as providing a generic input driver for the X.Org server. It is a relatively new part of the graphics stack and made its 1.0 release on August 26. Peter Hutterer gave an update on libinput at the X.Org Developers Conference (XDC) in Toronto on September 16.

He noted that libinput was introduced roughly a year ago. It is now in use by Weston, but also by GNOME and KDE, as well as X (by using the xf86-input-libinput wrapper driver). Libinput is "pretty much feature complete" at this point and provides all of the functionality that was available for X, plus some extras. It is installed by default in Fedora 22 and later, which led to a lot of bugs landing in his inbox for hardware that had not been tested. Those bugs have been addressed at this point, so libinput is now pretty stable.

He put up a laundry list of features that have been added since XDC2014; "you don't need to remember them, there won't be a test later". For the curious, though, his slides and a YouTube video of the talk are available. He would give more detail on many of those features later in the talk. One of the most important additions is a lot of documentation, he said.

udev

Libinput is now a heavy user of udev. Originally, udev was mostly just used for device discovery, but the input developers decided that if the input stack was to be fixed, it should be done at the right level, which means using udev.

That is bad news for the BSDs (which don't support udev), he said, but it makes it much easier for libinput. The udev hardware database (hwdb) is now a hard requirement, though it would be possible to emulate it in those environments that do not support it. Beyond device discovery, udev is mostly used by libinput to store device attributes via the udev rules that are shipped with it.

The type of a device—whether it is a mouse or keyboard, for example—is determined by udev. That makes for a single place in the system that manages the device types. One advantage of using udev is that custom rules can be used to override that decision. For example, the libwacom project maintains a database of Wacom tablets, along with custom rules to identify them as tablets to udev. Otherwise, tablets can look like various other kinds of devices (e.g. touchpad, pointer) based on the attributes they report.

Some of the attributes that are being stored in udev are pointer acceleration constants such as MOUSE_DPI (which maps device units to millimeters, despite the name) and two for the pointing sticks that come with Lenovo laptops (POINTINGSTICK_SENSITIVITY and POINTINGSTICK_CONST_ACCEL). The idea is to normalize the pointing sticks on different laptop models so they all feel the same out of the box. It is an attempt to find a "blurry middle" between the sticks that are too fast and others that are almost unresponsive as shipped.
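As a rough illustration of what such hwdb entries look like (the device match strings and values below are invented for illustration, not copied from the rules that libinput or systemd actually ship):

```
# Hypothetical hwdb entries; match strings and values are illustrative.
# A mouse match keyed on USB vendor/product ID and device name,
# listing supported DPI/frequency settings (default marked with *):
mouse:usb:v1234p5678:name:Example Gaming Mouse:
 MOUSE_DPI=800@125 *1600@500 3200@500

# A pointing-stick match, normalizing sensitivity and acceleration:
evdev:name:Example TrackPoint:dmi:*svnEXAMPLEVENDOR*:
 POINTINGSTICK_SENSITIVITY=200
 POINTINGSTICK_CONST_ACCEL=1.25
```

After editing such a file, the hwdb binary database has to be rebuilt and the device re-triggered for the new properties to take effect.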

The attributes stored in udev are strictly used internally by libinput and cannot be relied on, so they do not constitute an external API. The udev rules that ship with libinput also contain information about device models and attributes of the hardware (e.g. size and resolution). That allows devices with quirks to be identified and handled specially. Udev provides a central runtime storage location so that an attribute can be put into the database and other processes can read it out, he said.

New features

Hutterer then moved on to the new features in libinput, starting with "device groups". There are devices, such as mice for gaming or tablets, that act like multiple input devices (buttons, pointing, keys, etc.). It is "handy to know" if they are all attached to the same physical device, which is what device groups do.

All of the touchpad-handling code has moved to using millimeters. There is also some standardization in their handling. For example, movement of 2mm or less is considered "normal wobble" and does not generate events. All touchpads must have both x and y resolution, and there is a default size (69x50mm) defined for touchpad devices where the actual size is unknown.
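The 2mm wobble threshold amounts to a dead-zone filter around the initial touch point. The following stand-alone sketch shows the idea; the function name, structure, and use of Python are illustrative, not libinput's actual implementation:

```python
# Sketch of a touchpad "wobble" filter: motion is suppressed until the
# finger has moved more than a threshold from where it first touched.
# Values are in millimeters; the structure is invented for illustration.
import math

WOBBLE_MM = 2.0  # movement below this is treated as normal finger wobble

def filter_motion(start, current):
    """Return a (dx, dy) motion delta, or None while the finger is
    still inside the dead zone around its initial touch point."""
    dx = current[0] - start[0]
    dy = current[1] - start[1]
    if math.hypot(dx, dy) <= WOBBLE_MM:
        return None
    return (dx, dy)
```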

The pointer-acceleration algorithm has been tuned, he said. It was "messy to begin with", but is now stable. There is a different mechanism for low-DPI devices (less than 1000 DPI), which are typically older mice. Pointer acceleration is also handled differently for touchpads and trackpoints. There is lots of information (including graphs) on how pointer acceleration is applied in the documentation.

A "flat" acceleration profile that simply applies a constant factor, which is mostly targeted at mice with switchable DPI settings, will also be available. That code was merged quite recently and will be available in libinput 1.1, which is due "soonish".
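A flat profile is about the simplest acceleration function possible: each motion delta is multiplied by a constant, with no velocity-dependent curve at all. A minimal sketch (names and the default factor are illustrative):

```python
# Sketch of a "flat" pointer-acceleration profile: a constant factor is
# applied to every delta, regardless of how fast the device is moving.
# Useful for mice that do their own speed switching via a DPI button.
def flat_profile(dx, dy, factor=1.0):
    """Scale a motion delta by a constant, velocity-independent factor."""
    return (dx * factor, dy * factor)
```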

Touchpad gestures also have good documentation with diagrams of the gestures and information on what kinds of events are generated. For a "swipe", libinput reports a logical center, number of fingers, and changes in the x and y values of the center. Similarly, "pinch" gestures (which also support rotation) report the delta of the logical center, number of fingers, finger "spread factor", and rotation in degrees, though there is no way to use that information in the Wayland protocol as yet.

There is a slight bias toward scrolling for gestures, but the 2mm "dead zone" can make a big difference for small scrolling gestures (e.g. a 5mm scroll). Until the fingers move at least 2mm, it is difficult to determine whether it is a swipe or a pinch/rotate, so that is a problem area right now. Gestures have to be disabled for some touchpads that only have one-finger resolution or other quirks that make it impossible to detect the finger locations precisely enough.
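The ambiguity can be pictured as a classifier that refuses to commit until the fingers leave the dead zone. This toy sketch is not libinput's state machine; the names, threshold, and dot-product heuristic are invented to illustrate the problem:

```python
# Toy swipe-vs-pinch classifier: two fingers moving the same direction
# suggest a swipe, opposing directions suggest a pinch, and nothing can
# be decided while both fingers are still inside the dead zone.
import math

DEAD_ZONE_MM = 2.0

def classify_two_finger_gesture(f1_delta, f2_delta):
    """Return 'swipe', 'pinch', or None while still ambiguous."""
    d1 = math.hypot(*f1_delta)
    d2 = math.hypot(*f2_delta)
    if max(d1, d2) < DEAD_ZONE_MM:
        return None                      # still inside the dead zone
    dot = f1_delta[0] * f2_delta[0] + f1_delta[1] * f2_delta[1]
    return "swipe" if dot > 0 else "pinch"
```

For a 5mm scroll, nearly half the motion is consumed before the classifier can decide anything, which is why small scrolling gestures suffer.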

Detecting a thumb resting at the bottom of the touchpad, which can be used to click a "button", is working pretty well. The idea is that the resting thumb is not considered part of any gestures going on elsewhere in the touchpad. In theory, the detection is based on the area and the pressure applied, but some touchpads don't provide enough information. Typically, touchpads detect pressure by the surface size (which increases as more pressure is applied), but thumbs resting at the bottom of the pad are often partially off the pad, which can interfere with the pressure detection.

The source of a scroll event is also reported now. If the scroll comes from a mouse wheel, a finger, or some other continuous scrolling source (like holding down a button and moving the pointing stick), that information is provided to the client. A "scroll end" event is reported as well, which allows clients to implement kinetic scrolling. Neither libinput nor the driver implement kinetic scrolling, as it should be done on a per-widget basis, he said.
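On the "scroll end" event, a client widget can take over and keep generating decaying deltas on its own. A toy sketch of that client-side idea (the decay and cutoff constants are invented for illustration):

```python
# Sketch of client-side kinetic scrolling: after a scroll-end event,
# the widget keeps scrolling with the last observed velocity, decaying
# it each frame until it drops below a cutoff.  Constants are invented.
def kinetic_deltas(last_velocity, decay=0.8, cutoff=0.5):
    """Yield per-frame scroll deltas following a scroll-end event."""
    v = last_velocity
    while abs(v) > cutoff:
        yield v
        v *= decay
```

Doing this per-widget lets, say, a long document view coast while a small list box stops dead, which is why it does not belong in libinput.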

The X driver

The xf86-input-libinput driver is a thin wrapper around libinput that has "almost no logic". It simply sits between libinput and the X server, delivering events over the X interface. There are only two features implemented in the driver: button drag lock and horizontal scrolling.

Button drag lock is an accessibility feature that allows buttons to be logically down even when they have been physically released. They are considered to be up when the button is pressed again. Ideally that should be handled by the compositor, but X has no compositor to do it, so it is done in the driver.
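The drag-lock behavior is a tiny state machine: a press latches the logical button down, the physical release is ignored, and the next press releases it. A minimal sketch (illustrative only, not the driver's code):

```python
# Minimal sketch of button drag lock: the logical button state toggles
# on each physical press, and physical releases are swallowed, so a
# drag can continue after the user lets go of the button.
class DragLock:
    def __init__(self):
        self.logical_down = False

    def physical_press(self):
        # Each press toggles the latch: down on the first press,
        # released again on the next one.
        self.logical_down = not self.logical_down

    def physical_release(self):
        pass  # ignored: the logical state is held until the next press
```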

Libinput always provides horizontal scroll events. Those events should be handled at the widget level, he said, but that is not going to happen in X anytime soon. So there is an option in the driver to discard the horizontal scroll events.

Future plans

For the future, there are several features being planned. Support for Wacom tablets is high on that list. The project has been talking about that support "for a year now". The patches are getting close to being merged at this point.

Adding support for "buttonset" devices, which have buttons and other controls but don't move the pointer, is also planned. Various devices fall into this category, including 3D mice and some tablets. There are more pointer-acceleration improvements coming, as well, mainly for touchpads and trackpoints. Finally, there is a patch pending to provide more information on touch events. Right now, these events only give x and y data, but the patch would add pressure information (area of contact, essentially) and the orientation of the touch.

Hutterer's talk gave a nice look at some of the intricacies of dealing with the wide variety of input devices available these days. Collecting all of that handling in one place and normalizing it, as libinput has done, seems like quite an accomplishment.

[I would like to thank the X.Org Foundation for travel assistance to Toronto for XDC.]


For the past three years, Mozilla has offered an in-browser source-code editor named "Thimble" via its web-development information and tool site, Webmaker. In early September, Mozilla relaunched Thimble on an entirely new codebase—one that began with a fork of Adobe's competing code editor, Brackets. The result is a tool that is not bound by the earlier version's education-driven limitations: users new to web development can get started with it, but it is extensible and can support features more akin to an IDE than to a text editor.

The original incarnation of Thimble launched in 2012 and was based primarily on the CodeMirror JavaScript-editing component. It supported syntax highlighting and code completion, plus in-browser previews of page contents. It is still accessible, although it will surely not remain so indefinitely. As a part of Webmaker, Thimble placed a heavy emphasis on teaching HTML, CSS, and JavaScript—particularly through its integration with the site's tutorial material.

But, in addition to being aimed at users with little to no programming experience (which was an intentional design choice), this version of Thimble was limited to single-page projects. That effectively meant that no serious web developers would make use of Thimble, and that as users learned, they would find the tool less useful. It also meant that the members of the Thimble team were not Thimble users, leading to a gradual disconnect between the two groups.

In the meantime, two other JavaScript-based editors had each gained a significant following: GitHub's Atom (which we looked at in July) and Adobe's Brackets. Both offered a substantially larger feature set than CodeMirror—not just in terms of support for managing larger projects, but with higher-end features like linting, static analysis, parsing and tokenization, and even extensions. Just as importantly, those projects have also grown large and active user communities.

Thus, in 2013, Mozilla's David Humphrey proposed writing a Thimble replacement that used Brackets as its core. The Brackets extension mechanism could be used to reimplement the Webmaker tutorial feature, but it could also be used to progressively add features to the editor as the user needed them, thus making it useful for non-beginners.

Into the brambles

Although both Atom and Brackets are implemented as cross-platform desktop applications running on top of WebKit, Brackets was a better fit for Mozilla's purposes. First, while Atom supports HTML, CSS, and JavaScript, it has a broader focus; Brackets is designed (and used) exclusively for web development. Second, even in 2013 the Brackets development team was experimenting with a branch that would run inside an arbitrary browser window, rather than solely on the bundled WebKit engine. Finally, the Brackets extension system does not require restarts to enable or disable an extension, allowing it to adapt on the fly during a session.

Humphrey, then, took that experimental branch of Brackets and began work on a fork he called Bramble. Bramble uses Node.js to replace the original Brackets server-side component. On top of that sits Filer, a POSIX-like filesystem interface layer that uses HTML5's IndexedDB storage API. That API allows it to present a standardized filesystem on any (modern) browser engine, regardless of the underlying platform.
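The core idea behind Filer—a POSIX-style filesystem API layered over a key-value object store—can be illustrated with a toy analogue. Filer itself exposes a Node.js-style asynchronous API in JavaScript on top of IndexedDB; this Python sketch only mirrors the layering concept, and every name in it is invented:

```python
# Toy analogue of a filesystem API layered over a key-value store, in
# the spirit of Filer (which does this over IndexedDB in the browser).
# The class and method names here are invented for illustration.
class ToyFS:
    def __init__(self):
        self._store = {}          # path -> bytes, like an object store

    def write_file(self, path, data):
        self._store[path] = data

    def read_file(self, path):
        if path not in self._store:
            raise FileNotFoundError(path)
        return self._store[path]

    def listdir(self, prefix="/"):
        return sorted(p for p in self._store if p.startswith(prefix))
```

Because the backing store is just keyed blobs, the same interface can be presented on any browser engine, which is the portability win described above.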

While the Bramble process uses Node.js as its back-end, the rewritten Thimble also includes a separate local web server just to serve a live preview of the current project. That preview (unlike the original Thimble's single-file project limit) can include references to resources and scripts from other files in the project.

In a detailed blog post about the new Thimble, Humphrey also noted that the Bramble widget, the Filer filesystem, and the preview page are all isolated from one another and communicate over an HTML5 MessageChannel. That allows the file-storage and code-editing components to be served from different servers or even domains.

The public instance of Thimble provided by Mozilla allows users to publish their projects online (although saving them locally is also an option). But separating the storage and editing functions would mean that other deployment arrangements are possible, too. The documentation at the Thimble GitHub project describes running one's own instance of Thimble as "non-trivial," since it is currently tied in to Webmaker's sign-on and account services, but it appears to be possible if one is willing to invest the time.

Thimbling

Just like its predecessor, the main Thimble site is geared toward learning web development. Users are first presented with a slate of simple, pre-fabricated web projects that they can fork and modify to learn various aspects of HTML, CSS, and JavaScript. Syntax highlighting, completion, and other standard editing functions work as expected. Where Thimble moves beyond a generic editor is in its live-preview features. The live-preview pane can be switched from "desktop" to "mobile" mode to test responsive-design elements, and the DOM element one is currently editing in the editing pane gets highlighted in the preview (which should help when debugging).

The web-development tutorials are built in; users can click on the "Tutorial" header in the preview pane to access project-specific guides. In the new version of Thimble, this tutorial content is delivered via a tutorial.html file within the project, so users can easily add and share their own tutorials with others. In addition, there is pop-up help for standard tags, markup, and JavaScript particulars that the user can access by highlighting the term of interest with the mouse and typing Ctrl-K.

Fully leveraging the extensibility of the Brackets-based framework will probably take Thimble developers some time. At launch time, Humphrey listed a dozen optional extensions that are known to work, in addition to the sixteen that are used by default. Brackets extensions are written in JavaScript, so they can be fairly easily copied in from elsewhere (and there are many to choose from).

On the other hand, the new Thimble does have its drawbacks. On the usability front, it does not work in private-browsing mode, which will no doubt irk some users. Perhaps more significantly, though, it offers no form of version control. All changes to a project are saved automatically, without a mechanism for rollback. Humphrey did hint in his blog post at the possibility this could change, though: when discussing the separation of components, he noted that it could be used to tie the service into new storage layers, such as Dropbox or GitHub.

The other cautionary word, of course, is that Mozilla has a bad habit of dropping projects with little warning. Thimble is not its first foray into online code editors—that was Bespin, which eventually split off to become the ACE editor used by Cloud9. And, while Thimble has survived this major rewrite, it is hard not to notice that several other Webmaker projects were unceremoniously given the axe in recent months, including the interactive-video editor Popcorn Maker that we looked at in 2011.

On the whole, though, the newly revitalized Thimble has the potential to take off in ways that its predecessor did not. By latching on to the Brackets developer ecosystem, it can adapt to serve the needs of real-world web developers, while still providing the easy on-ramp that the Webmaker program is centered around. Time will tell if Thimble will draw a significant following. Perhaps additional work to make installation on third-party servers easier will widen its appeal; perhaps merely the ability to launch and run it within a browser will prove more appealing than downloading the 30-plus megabytes of the Brackets desktop package. Either way, Mozilla has at least developed an open-source web-page editor that serious users are likely to take notice of.
