Google announced the approved mentoring organizations for the 2015 Google Summer of Code (GSoC) program on March 3. Those organizations are the projects that will work with the 2015 batch of GSoC students, giving each student a paid internship—and giving the projects an extra full-time developer for a few months. One notable change in this year's lineup is that there is an overall reduction in the number of approved organizations—and some of the organizations absent from the 2015 list have been mainstays of previous GSoC programs. Still, more than a quarter of the included organizations have never been in any previous GSoC "class," which means that interested students will have the opportunity to get involved in some new parts of the open-source-software landscape.

The list of accepted mentoring organizations is available in an online spreadsheet-style document (which can be exported as a CSV file). 137 organizations are included for 2015, out of 416 applicants. The blog announcement notes that 37 of these organizations are new to GSoC.

Since GSoC has developed over the years into a reliable source for developers—more than 1,300 students participated in 2014—projects could come to view the program as a regular part of their development plans. But projects must re-apply to be mentoring organizations every year, and the 2015 list omits a few organizations that have been frequent participants. Mozilla, the Linux Foundation, and Tor have all made multiple appearances in the past (Tor often in concert with the Electronic Frontier Foundation), but are not on the 2015 roster.

The GSoC office does not publish any record of which organizations apply to be mentors but are not selected. However, Mozilla's Florian Quèze appeared to be at least a little surprised about Mozilla's lack of inclusion, and published a blog post on the subject on March 3. Quèze cited an email exchange with the GSoC office, in which he was evidently told that the decision to not include Mozilla this year was a difficult one, and that:

There's an assumption that not participating for one year would not be as damaging for us as it would be for some other organizations, due to us having already participated many times.

That assumption would, presumably, hold true for the LF, EFF, and Tor as well—all of those organizations are large and are on essentially stable footing. Furthermore, as Quèze pointed out, there are now multiple opportunities for students and other developers to contribute to open-source projects. Quèze pointed readers to a Mozilla-specific site designed to guide interested contributors toward any of several other coding opportunities connected to Mozilla projects.

A few other projects noted their lack of inclusion this year; it came up on the Tor mailing list, for example, but did not spark any discussion. Joomla was also left out, and reported in a blog post that the GSoC team said it had "decided to give newer organizations a little more visibility in the program this year."

But fixating on which organizations are not included in the 2015 mentoring group can distract from who is included. Most of the major open-source participants are returning, such as Apache, GNU, X.Org, GNOME, KDE, and a number of Linux distributions. Moreover, the list of new participants is interesting reading in its own right. It includes several well-known projects, such as CentOS, jQuery, and GNU Mailman. There are also newer projects that might not have had large enough communities in past years to run an organized GSoC mentoring program, such as the Tox encrypted instant-messaging program and the lowRISC project, which aims to produce open-hardware system-on-chip (SoC) designs.

Several university-sponsored projects are included for the first time, like MBDyn (a multi-body physics simulation engine from the Polytechnic University of Milan) and JdeRobot (a robotics development platform from Universidad Rey Juan Carlos in Madrid).

The largest group of new participants, however, seems to be from the biology and medical fields. That list includes:

As is often the case, the accepted mentoring organizations represent an intriguing slice of the wider open-source-software community. It can be quite fascinating to see just how many fields of study and areas of development now rely on open-source software.

As for the well-known organizations that were not included in this year's mentoring group, there is little cause for concern. Other projects have left and returned in years past, and although 137 organizations is certainly a hefty batch of participants, that number is still a reduction from last year's count: 190, the largest list in GSoC's ten-year history.

Plus, there are still numerous opportunities that an interested student could find to do work that benefits projects not on the list. Xiph.org, for example, is supported in its codec-development work by Mozilla, and there are multiple projects that offer kernel-related project ideas (BeagleBoard, QEMU, and the XIA networking project, to name just a few).

Now that the mentoring organizations have been announced, GSoC will soon enter the next phase: selecting the student participants from the pool of applicants. Students interested in participating can register starting on March 16. The deadline for applications is March 27, with the selections to follow a few weeks after that.

Unfortunately, there does not seem to be a robustly maintained "master list" of other open-source coding internship programs. The Opportunities page at the OpenHatch wiki comes the closest, but it relies on user contributions to keep its information and links up to date, with the usual caveats that accompany a wiki. There are a lot of coding opportunities out there in addition to GSoC, and GSoC's continued dominance on the topic in the online news space perhaps just serves to illustrate how much demand there is for programming internships such as these.


There were several talks at SCALE 13x in Los Angeles that dealt with the practical side of embedded Linux development. One of the most well-received sessions was Stephen Arnold's look at the state of kernels and free-software graphics drivers on "open" consumer ARM devices. Arnold is a longtime Gentoo developer who also works with the Yocto project. He presented a rundown of the experiences other users can expect with ARM kernels and graphics on development boards and common single-board computers (SBCs), including the reliability of vendor-supplied drivers and the availability of Linux distributions.

Arnold's session was part of the "open hardware" track at SCALE, and one of the central questions addressed in his talk was how open many of the so-called "open hardware" ARM products really are. As most members of the community are aware, popular devices like the Raspberry Pi may be advertised as open, but the graphics chips on which they rely may have no free drivers at all. Those that do enjoy support from a free driver may have only partial functionality, or they may have an old, unmaintained driver that is difficult to port to a more recent kernel. If updating or rebuilding the vendor's kernel is impossible, after all, some of the key advantages of having an open device are all but lost.

ARM wrestling

Arnold led off with a quick tour through his personal embedded-hardware inventory, which he said he had amassed into "a collection of almost every ARM board ever." Hardware has changed so much over the years, he said, that today's boards exist in a blurry region between traditional embedded and desktop-class devices. Multi-core CPUs, more powerful GPUs, per-core floating-point units, and accelerated video processing are all becoming the norm.

Nevertheless, there are still significant differences to be found. Just within recent (ARMv7) generation products, he pointed out, boards like the Udoo and Cubox-i come with Vivante GPUs, Samsung Chromebooks and Sunxi "TV dongles" come with Mali GPUs, but the Acer Chromebook and Jetson TK1 board come with NVIDIA's Tegra K1—which is, essentially, a desktop-class graphics unit featuring up to 192 CUDA-capable cores.

Arnold also took a few minutes to explain some of the differences between the various instruction sets that users were likely to encounter on open-hardware ARM products and how that might affect writing software for them. The big divide, he said, is between devices that include the NEON instruction set and those that do not. Most ARMv7 boards include NEON, but a few (such as the Trim-Slice line of low-power boards) do not. As with ARMv6 devices (such as the original Raspberry Pi), the Trim-Slice boards use the older vector floating point (VFP) instruction set instead of NEON. The various video-driver projects often need to make use of NEON or VFP in order to provide responsive performance, so knowing which instruction set is supported is important.
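On Linux, which of these instruction sets a board supports can be checked from userspace: the ARM kernel lists supported extensions in the Features line of /proc/cpuinfo. A minimal sketch of such a check (the sample Features line is illustrative, not taken from any particular board):

```python
def fp_features(cpuinfo_text):
    """Return the floating-point/SIMD flags from a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            flags = set(line.split(":", 1)[1].split())
            # Keep only the FPU/SIMD-related flags we care about here
            return flags & {"neon", "vfp", "vfpv3", "vfpv4", "vfpd32"}
    return set()

# Illustrative Features line from a typical ARMv7 board with NEON:
sample = "Features\t: swp half thumb fastmult vfp edsp neon vfpv3 tls"
print(sorted(fp_features(sample)))  # ['neon', 'vfp', 'vfpv3']
```

A board that reports vfp but not neon would need the VFP code paths in the video drivers mentioned above.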

The Five Blobs

Returning to the topic of graphics drivers themselves, Arnold looked at the five current GPU families for which ARM vendors are shipping binary-blob drivers: the i.MX6, the VideoCore IV, the Mali, the Tegra K1, and the OMAP SGX. There are projects working on open-source drivers for each GPU family, he said—in some cases, more than one. The Tegra family is the closest to having a complete solution, with both the OpenTegra/grate project and Nouveau support actively under development and producing working builds. Since the Tegra K1 is essentially a desktop GPU, this is perhaps not surprising.

The VideoCore IV (used in the Raspberry Pi) has decent support with fbturbo, he said. That is what ships with the Raspbian distribution, but users may also want to look at the Weston for Raspberry Pi work that can take advantage of the GPU's hardware video scaler. The fbturbo driver also works with the OMAP, Vivante, and Mali GPUs, via the omapfb, lima, and etna_viv projects. Arnold noted that there is one other GPU family available, the Adreno, which has a free-software driver project, but he has not added the hardware to his collection yet, and thus has yet to test it himself.

If the user does stick with a vendor-supplied driver, though, there are still other factors to be considered—starting with what kernel the vendor ships. Typically, the kernels supplied by the vendor are a single release with a lot of patches applied—and the branch in question is likely to be old. Arnold said that the oldest kernel in his ARM collection is Linux 2.6.31.14, which shipped with a Fujitsu LifeBook and has never been updated. The newest kernel he has seen is 3.14. Usually, these kernels have had minimal back-porting done (if any), which means they may have security vulnerabilities, and attempting to forward-port the vendor's driver to a new kernel can take a long time.
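When auditing how far behind a vendor kernel is, note that version strings must be compared numerically, not lexically: as plain strings, "2.6.9" sorts after "2.6.31.14". A small sketch of the correct comparison:

```python
def kver(version):
    """Parse a kernel version string like '2.6.31.14' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

# Lexical comparison gets this backwards ('2.6.31.14' < '2.6.9' as strings);
# numeric tuples sort correctly.
print(kver("2.6.31.14") > kver("2.6.9"))   # True
print(kver("3.14") > kver("2.6.31.14"))    # True
```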

Furthermore, the vendor kernels are often brittle: they may not use device trees, they may compile with lots of warnings (or even generate errors), or they may only compile with specific modules (or only when the kernel is built in monolithic form). Sometimes the kernel .config file is missing required configuration options, or the options included are clearly incorrect. A do-it-yourself kernel in most PC distributions is hard to mess up these days, he said, but if you change one thing in a vendor-supplied ARM kernel, you may find that USB stops working entirely, or that the firmware for the network or audio chips fails. Out of the current crop of ARM devices, he called the CuBox (with its i.MX6 GPU) and the Raspberry Pi the easiest to work with.
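Missing .config options are at least easy to detect before committing to a long build. A minimal sketch of such a sanity check (the option names are illustrative):

```python
def missing_options(config_text, required):
    """Return the required CONFIG_* options not enabled ('=y' or '=m')
    in a kernel .config dump."""
    enabled = set()
    for line in config_text.splitlines():
        line = line.strip()
        # Disabled options appear as '# CONFIG_FOO is not set' comments
        if "=" in line and not line.startswith("#"):
            name, value = line.split("=", 1)
            if value in ("y", "m"):
                enabled.add(name)
    return [opt for opt in required if opt not in enabled]

sample = """\
CONFIG_USB=y
CONFIG_MMC=y
# CONFIG_SND_SOC is not set
"""
print(missing_options(sample, ["CONFIG_USB", "CONFIG_SND_SOC"]))  # ['CONFIG_SND_SOC']
```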

The other element to watch out for is the bootloader. Most vendors use their own fork of U-Boot; like the vendor-supplied kernel, it can be old and hard to work with. Some vendors that are "really on the ball" may have a newer U-Boot branch, since there has been considerable work upstream to consolidate the many U-Boot branches. If so, the user can take advantage of more recent bug fixes and optimizations.

Vendors tend to have only one supported bootloader configuration, he said, with the most common being booting from an SD card with two partitions (one for / and one for /boot). Manually changing this configuration is possible, but users who head down that path should be on the lookout for unusual boot options in uEnv.txt or boot.scr files. He also cautioned that he has seen different bootloader options even from the same vendor on two products that use the same system-on-chip (SoC), so an abundance of caution is warranted.
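As an illustration of the kind of overrides to look for, a uEnv.txt on such a board might contain lines along these lines (the console device, partition layout, and load command here are hypothetical examples, not taken from any particular vendor image):

```
bootargs=console=ttyO0,115200n8 root=/dev/mmcblk0p2 ro rootwait
uenvcmd=load mmc 0:1 ${loadaddr} zImage; bootz ${loadaddr}
```

A vendor image that hard-codes a different root device or console here is one of the first things to check when a rebuilt kernel refuses to boot.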

Status reports

Arnold then gave a rundown of the differences between the mainline kernel and the vendor kernels supplied for common ARM devices. Device trees are the main difference; whether or not a working device tree is available for a recent mainline kernel varies from device to device, and improving that situation is the subject of ongoing work by the community. He advised users looking to deploy a mainline kernel on their ARM device to check the Linux on ARM page at the EEwiki to see if there is a recent how-to guide. Robert Nelson from DigiKey maintains patches for a long list of vendor kernels and U-Boot forks at that wiki.

The Linux graphics stack itself is also in flux, Arnold said, with the migration from X to Wayland, the legacy driver model going away, and fluctuations in the OpenGL realm all complicating factors. He reported on his personal tests performed on his ARM hardware collection, calling out the Tegra 20/30 and Vivante hardware as working well with recent X.Org releases. OMAP/PowerVR devices and Mali-based devices are also usable, although they depend on TI's open-source framebuffer code and fbturbo, respectively. He has recently given up entirely on the Efika MX product line, he added, which only works with "ancient kernels and closed-source drivers."

Even if the user decides to update from the vendor-supplied kernel and graphics drivers, it may be difficult to find a fully working Linux distribution for any particular ARM board. Arnold's final status report was a look at the distribution offerings for the various open-hardware ARM products in his collection. Typically, he said, the vendor will supply an Android OS option plus one other, traditional Linux distribution for each device. These distributions, of course, are typically limited to the vendor's legacy kernel and binary-blob drivers.

But some ARM products stand out from the crowd for their ability to support multiple distribution options. The Raspberry Pi offers the most, via its "New Out Of the Box Software" (NOOBS) system, which includes six distributions. The BeagleBone and BeagleBone Black officially support several flavors of Yocto and OpenEmbedded Linux, as well as Debian, Ubuntu, and Gentoo. Various Chromebooks support multiple mainstream distributions (Ubuntu, Debian, Gentoo, Arch, and Fedora being the most common). Udoo boards support an Ubuntu variant (udoobuntu), Android, OpenElec, and occasionally other distributions as well.

Pulling yourself up by your bootstraps

Ultimately, however, most users get interested in bootstrapping their own distribution at some point. For those people, Arnold said Yocto was the most reliable option. If there is a vendor board-support package (BSP), he said, Yocto can run on it. Yocto can also run on a modest desktop Linux machine, building a full image and handling build dependencies automatically. If there is not a BSP for your board, he said, you can probably create your own for Yocto.

Gentoo is the next option; Arnold maintains the Gentoo overlays for most of the ARM devices, and new stage3 builds are created every few weeks. For newcomers who may not be ready to migrate their desktop to Gentoo in order to do a native Gentoo build, there are other options, such as performing the build in a virtual machine. Last but not least, Debian and Ubuntu are always available: "if you can boot it, you can debootstrap it," he said.

In the brief question-and-answer period at the end of the session, Arnold agreed with one audience member that the pace of new ARM hardware devices was becoming problematic. There are new chipsets every year, but it typically takes the free-software community two or three years to fully implement software support. In the meantime, users are often left with just the vendor-supplied kernels, graphics drivers, and distributions to work with. So users can "run Linux" on their open-hardware ARM boards, but the situation is far from ideal.


The CANBus Triple is a Kickstarter-funded open-hardware device intended to let car hackers read, write, and potentially modify the messages sent over their vehicle's Controller Area Network (CAN) bus. Although there are other hardware solutions for interacting with CAN traffic, the CANBus Triple offers some unique advantages. It is low-cost, the schematics and designs are all freely available, and it can serve in some roles that other, mass-produced devices cannot. That said, the software side of the project still has some catching up to do compared to the otherwise nice hardware.

The device itself is the work of Derek Kuschel, an independent hardware hacker from Detroit. In 2013, he built and sold a batch of prototype CAN bus devices after other readers on the MazdaSpeed discussion forum expressed interest in the personal CAN-hacking projects he had posted about. Based on the success of the prototype devices, he launched a Kickstarter campaign in mid-2014 intended to ramp up production. The fundraiser beat its target by a comfortable margin in September, and in February 2015, the first generation of production devices were mailed out to supporters—myself included.

The CANBus Triple takes its name from the fact that it provides three independent CAN bus channels (each including a controller chip and a transceiver). It also includes an Atmel ATmega32U4 microcontroller, which allows it to be programmed with Arduino software tools for use in standalone mode, plus a USB serial port and a Bluetooth Low Energy module. The Bluetooth module does not offer sufficient bandwidth to log all of the CAN bus messages that a modern car might produce, but it would suffice for the Triple to be paired with a smartphone or tablet to, say, monitor specific message types or to send commands to the microcontroller's software. The USB port is capable of a high-speed connection to a laptop or other computer, and offers significantly more interaction possibilities.

The hardware included in the device is significantly more powerful than most of the other CAN bus peripherals that are available to consumers. For comparison's sake, USB peripherals can be difficult to find for less than 50 or 60 Euros, and that price range generally only provides a basic serial connection for a single CAN bus (see this vendor for one example). Modern cars often have at least two buses: one for high-priority components like engine sensors and braking modules, and one for monitoring events from low-priority components like the door locks and climate control system. At the other end of the spectrum, there are multiple Arduino shields that offer a CAN controller and transceiver (sometimes more than one), but those devices are difficult to make much use of in a non-Arduino software stack. At best, they can be adapted into logging tools, but few developers seem to succeed in doing much more.

To use the CANBus Triple, one needs the Arduino IDE and a copy of the project's Arduino code. This includes the configuration files necessary for the IDE to compile sketches and upload them to the device, plus a "basic" Arduino sketch (i.e., program) that lets the user connect to the device with a serial console and watch some CAN traffic. I had no trouble getting the Arduino sketch to compile and upload, and the device responded to the basic serial commands it is supposed to.

That said, there is more complexity to actually getting the device to do anything useful. The easiest way to connect to a car's CAN bus is through the OBD-II diagnostic port, which has a standard wiring configuration. Out of the box, the CANBus Triple comes with a custom cable so you can plug the device right into an OBD-II port. Plug in the Triple, start the car, and the CAN bus messages appear over the serial connection.
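The exact serial output depends on the sketch, but decoding a logged frame is straightforward. Assuming a hypothetical "bus id data-bytes" log format (not the project's actual output), a decoder might look like:

```python
def parse_frame(line):
    """Parse a log line in a hypothetical 'bus id byte byte ...' format,
    e.g. 'BUS1 201 0A 1F FF' -> ('BUS1', 0x201, b'\\n\\x1f\\xff')."""
    parts = line.split()
    bus = parts[0]
    can_id = int(parts[1], 16)          # CAN IDs are conventionally hex
    data = bytes(int(b, 16) for b in parts[2:])
    return bus, can_id, data

bus, can_id, data = parse_frame("BUS1 201 0A 1F FF")
print(hex(can_id), data.hex())  # 0x201 0a1fff
```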

For now, that is about the full extent of the device's functionality. Kuschel is working on a Cordova-based app that can be built for iOS, Android, and desktop systems (at least, any system for which Node.js is available). But the app is not yet in a workable state.

There is, however, a modest middleware layer in the Arduino code that lays the groundwork for more interesting development. It includes timers, hooks for catching and acting on specific CAN messages, and a channel-relay function to copy a CAN bus message heard on one of the CAN buses out onto one of the other CAN buses, among other things. The documentation here is currently quite sparse. Kuschel can hardly be blamed for that; as recently as two weeks ago he was still assembling, testing, and mailing out CANBus Triple units to Kickstarter supporters.
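The pattern that middleware implements (hooks keyed on CAN IDs, plus relay rules between buses) can be sketched in a few lines of Python. This is a toy model of the idea, not the project's actual API:

```python
class CanRouter:
    """Toy model of the hook-and-relay pattern: a handler can rewrite frames
    with a matching CAN ID, and relay rules copy frames between buses."""
    def __init__(self):
        self.hooks = {}    # can_id -> handler(data) returning new data
        self.relays = {}   # source bus -> destination bus
        self.sent = []     # frames emitted, as (bus, can_id, data)

    def on_message(self, can_id, handler):
        self.hooks[can_id] = handler

    def relay(self, src_bus, dst_bus):
        self.relays[src_bus] = dst_bus

    def receive(self, bus, can_id, data):
        # Run any hook registered for this ID; the hook may rewrite the payload.
        if can_id in self.hooks:
            data = self.hooks[can_id](data)
        # Copy the (possibly modified) frame onto the relay destination bus.
        if bus in self.relays:
            self.sent.append((self.relays[bus], can_id, data))

router = CanRouter()
router.relay("can1", "can2")
router.on_message(0x201, lambda data: b"\xff" + data[1:])  # rewrite first byte
router.receive("can1", 0x201, b"\x00\x1f")
print(router.sent)  # [('can2', 513, b'\xff\x1f')]
```

The interesting capability is exactly this combination: a frame heard on one bus can be altered in flight before being re-emitted on another.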

But there are a few developers on the discussion forum who have popped in already to announce that they are working on some carmaker-specific software using the CANBus Triple. Alternatively, Kuschel has released a more feature-rich Arduino sketch that is tailored to Mazda cars. Both of these developments highlight one of the challenges of car hacking: almost every manufacturer uses CAN bus, but the messaging formats are so diverse that most software quickly becomes vehicle-specific.

Nevertheless, I remain optimistic that CANBus Triple has a bright future. The vast majority of CAN-related hacking projects are simple data-logging or monitoring tools that use an Arduino; there has always been a large price-and-functionality gulf between those projects and the expensive USB CAN adapters that a full-fledged Linux box can, theoretically, do more with. The CANBus Triple basically sits right in between: it has an on-board Arduino-style microcontroller, but it has a USB serial port, too.

Furthermore, the Triple is the only device I am aware of that has the hardware configuration required to intercept, modify, and pass on CAN traffic. That functionality is the key to doing innovative things in the automotive environment—especially in the aftermarket arena. Without it, a car computer can eavesdrop on other CAN-connected components' messages or generate its own, but it cannot really override the car's existing modules in their factory configurations.

The possibilities for modifying CAN traffic are virtually endless. From simple adaptations like having the audio unit raise the volume level when the car is traveling at a higher speed to more ambitious functions like having an electric car intelligently power-down non-essential components when the battery is running low, modifying CAN messages is a powerful tool.
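The speed-sensitive volume example boils down to mapping one decoded CAN signal onto another. A minimal sketch, with thresholds and step sizes invented purely for illustration:

```python
def volume_offset(speed_kph, step=10, max_offset=6):
    """Raise the volume one notch per `step` km/h of vehicle speed,
    capped at `max_offset` notches (all values hypothetical)."""
    return min(int(speed_kph) // step, max_offset)

print(volume_offset(0))    # 0
print(volume_offset(45))   # 4
print(volume_offset(120))  # 6 (capped)
```

In practice the speed would be decoded from an incoming CAN frame and the offset written back out as a volume command, using whatever message IDs the particular vehicle employs.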

The device has a few quirks. For example, the included cable (with a standard diagnostic-port connector) is only wired up on two pins (6 and 14, which are standard CAN pin locations), which means that only one of the three CAN buses is reachable. The other pins can be soldered on, although in my case, whenever I popped open the connector's casing, it seemed like one of the other wires had come loose from its pin. Such are the ins and outs of small-production hardware, though.

On the whole, the CANBus Triple is impressive because it fits right into a gap that no other product is addressing. The fact that it is open hardware and is built on entirely open-source software makes it all the more likely that the car hackers in the community will pounce on it to do something interesting. And the snappy orange color scheme doesn't hurt, either.
