This week's edition starts off with the intersection of two of everybody's favorite topics: license compliance and formal specifications. In particular, License compliance in the open-source supply chain covers a Free Software Legal and Licensing Workshop talk on the OpenChain project, while Inside the OpenChain 1.1 specification looks at what OpenChain is actually recommending.

Also from the Legal and Licensing Workshop: Free-software concerns with Europe's radio directive reports from Max Mehl's talk on whether the European Union's radio-equipment directive is a threat to our ability to run free software on devices that include radios.

Kernel coverage this week consists of three articles:

4.12 Merge window part 2: there is a merge window in progress. As per tradition, we slog through the stream of changesets and point out the most interesting new features added in the current development cycle.

Grsecurity goes private: some thoughts on why the Grsecurity patch set became subscriber-only.

A farewell to set_fs()?: one of the kernel's oldest internal APIs is also one of its most dangerous. Might we be about to get rid of it?

In addition, of course, the Brief items and Announcements pages record what's been happening in the Linux and free-software community. Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (4 posted)

The supply chain in the open-source world is lengthy and global; it also suffers from compliance problems with the GPL and other licenses. The OpenChain project was created to help the companies in the supply chain with their compliance. At the 2017 Free Software Legal and Licensing Workshop (LLW), OpenChain program manager Shane Coughlan described the project, some of its history, the release of version 1.1 of its specification, and more.

For quite a few years, there has been a belief in the community that license compliance is "something that needs to be urgently addressed", Coughlan said. Many in the room have done lots of work to help address that with information on how to comply, how to train employees on compliance, and so on. The community has worked out how to adhere to the licenses properly, but that has not yet taken hold in the supply chains providing open-source code for various devices.

The last great barrier to ensuring license compliance throughout our industry is these supply chains. Three years ago, Dave Marr of Qualcomm brought it up as a problem that needed to be solved. The companies in the middle of the supply chain need help, Marr said, so perhaps a project to do so would make sense. Everyone liked the idea, Coughlan said, but for a while nothing happened. That is common in our industry; we sometimes take a while to "mull over the approaches", he said.

One way to approach it would be to make an enormous list of all of the "stuff you need to do to be super awesome at compliance"—targeted at both small and large organizations. That is something of an academic approach and one that is likely to be looked upon in horror by small and medium-sized companies. So that was more of a thought experiment.

Another way would be to define the overarching processes that an organization needs to follow with regard to open source. It would establish the baseline processes that need to be followed for open-source software as it comes in, is used, and goes out to customers. In addition, material supporting these best practices would need to be provided along with a way for organizations to self-certify that they have the appropriate processes in place. That is the approach that OpenChain has taken.

The idea is to start with the minimum needed and then build on that, Coughlan said. Over the last one and a half years, people have been working on OpenChain and, at this point, OpenChain is a "refined project". In October 2016, the 1.0 specification was released; since then, there has been feedback on the specification and the project has reworked and polished it for the 1.1 release, which was made a few hours before his talk. At this point, the project fully "breaks cover"; it is something that is ready for mass-market consumption now.

The specification [PDF] is augmented with open training materials. OpenChain 1.1 has also overcome the barriers to online self-certification, he said. That allows companies to quickly check what kinds of questions they need to ask themselves to ensure they have the right processes in place.

This is not just some esoteric exercise, Coughlan said; it is helping to solve a real problem. But there is a larger challenge that goes beyond adhering to one license or a particular compliance regime; there is a need to build trust within the industry. Companies need to have the sense that everyone is playing by the same rules. If other companies are complying with the OpenChain specification, especially suppliers, that can help provide the trust.

The specification has been built by a large team, he said. There were comments from more than 100 people. The mastermind behind the specification has been Mark Gisi of Wind River Systems. Gisi created a realistic baseline that can be trusted throughout the industry. Miriam Ballhausen is the mastermind behind the online self-compliance mechanism; she took an earlier questionnaire and turned it into a web app.

The final piece of the OpenChain puzzle is the creation of a training program, an effort that Coughlan has been the chair for, but many others have "devoted lots of time" to it as well. The materials are being translated into Korean, Japanese, and Spanish, with more languages planned. The project originally got slides from multiple companies that were combined into a "Frankenstein deck", but the slides have been improved and condensed into something more cohesive. In addition, the slides can be converted into other formats more easily now; originally they were only available as PowerPoint and PDF files, but now there is a beta version of the slides in LibreOffice format.

OpenChain was "brought to life" at LinuxCon Europe in 2016 and organizations like Wind River adopted it. That helped the project learn what was needed for the 1.1 version, which is "ready for mass adoption". The project likes to make a splash with its announcements, Coughlan said; at the time of its release, OpenChain 1.1 had already been adopted by Siemens, Qualcomm, Pelagicore, Wind River Systems, and, on the previous day, by Harman. He is enthusiastic about the project because it solves a problem "that we haven't been able to solve".

He noted that the GPL compliance book that he and Armijn Hemel wrote marked something of a finishing point for him on the subject of compliance for individual companies. He is now moving on to compliance in the global supply chain.

Working on license compliance serves not only our own self-interest; it also helps these companies use open source properly. If there are hiccups with compliance, it indicates that companies have not fully realized the value that open source brings. With best-practices information, training programs, partners, and a community, OpenChain will help them get there.

There are over 150 people on the mailing list, but there is a need for more people to get involved. OpenChain is not a typical project, Coughlan said; it has a more global focus than many others. That global nature is backed up by things like efforts to translate all of the project materials to Chinese, so that it is not restricted to English. In addition, planning phone calls are not done only in US-friendly time zones; one call per month is scheduled at a time convenient for Asian contributors, while the other is scheduled for contributors in Europe and the Americas.

In summary, Coughlan said that OpenChain is going to be a beneficial part of the open-source ecosystem. He was asked about license translations by an audience member, but said that was outside the scope of the project. OpenChain is not license-specific; it operates at a higher level. The idea is to ensure that companies have the right approach to bringing open source in, working with it internally, and then shipping it to customers. In some ways, OpenChain is like ISO 9001; it ensures that the right processes are in place, but leaves specific decisions, like which licenses to use, up to the companies.

[I would like to thank Intel, the Linux Foundation, and Red Hat for their travel assistance to Barcelona for LLW.]

Comments (3 posted)

LWN recently covered a conference session on the OpenChain project and its newly released v1.1 specification [PDF]. The talk, however, was remarkably short on details about what is actually in that specification. Perhaps most LWN readers were content with that state of affairs, but your editor decided to take a closer look.

Reading specifications can be hard work; it's like experiencing all those hours of grim committee meetings in a single concentrated dose. In this case, though, it turns out that the entire document is a mere twelve pages long, including title page, table of contents, definitions, etc. It shouldn't take more than one extra pot of coffee to get through it, and a single beer might be enough to recover afterward. There was, in other words, little excuse for not wading in.

The introduction clarifies a crucial point that hasn't necessarily been all that well explained elsewhere: why this specification exists in the first place. It comes down to supply chains. Some companies obtain free software directly from the development repositories and distribute it to their customers, but far more of them obtain that software from some other company. There is already plenty of evidence that compliance problems up the chain can create misery for companies further downstream. If, for example, a company ships a home router running a non-compliant Linux distribution obtained from a supplier, that company may find itself facing an enforcement effort, even though the real fault can be said to be with the supplier.

Worries about being burned by suppliers in this manner have led the more aware companies to do a great deal of expensive due-diligence work on the software they receive from their suppliers. Life would be a lot easier — and cheaper — if companies knew which suppliers they could trust to have their act together with regard to compliance with free-software licenses. OpenChain is an attempt to lay down a set of rules that, if followed, will allow a company to certify itself as having a sufficient degree of competence in this area. Customers that trust a company's certification should be able to redistribute software received from that company with a relatively high degree of confidence.

OpenChain, in other words, is all about reducing costs and risks for companies dealing with free software. It is not concerned with details like keeping the development community healthy or preserving the freedoms associated with free software. But, arguably, that is the right approach to take when trying to convince corporate management to take compliance more seriously.

So what does it take for a company to be able to claim that its word can be trusted on compliance matters? The first step is for the company to actually understand what that means. To that end, the specification requires that the company have an actual, written policy on license compliance, and that this policy be communicated to its staff. It requires training every 24 months, mandatory for "all Software Staff" (defined to include contractors and marketing people), on the policy, on the "basics of intellectual property law" in this area, and on the process for tracking free-software components. The Software Staff will, doubtless, be thrilled to have yet another training module to work through every two years. The specification also requires the creation of a process for reviewing the licenses for software supplied by the company.

Next, the company must assign responsibilities for compliance; these come down to two roles. There is the "FOSS Liaison", who is responsible for dealing with queries from outside the company, and there are people who are responsible for ensuring compliance within the company. The internal effort must be "sufficiently resourced", have legal expertise available when needed, and must have a written policy for the management of compliance issues. The specification does not get into the acceptable form for that policy; one assumes that the common "ignore complaints and deny the existence of a problem" model does not qualify, though.

To claim adherence to the specification, the company must have a process to review each free-software component in a product's bill of materials. It must also establish procedures for meeting the obligations associated with each license it deals with, and with each mode of distribution it employs. There needs to be a documented procedure for the creation of the "compliance artifacts" (source, license notifications, etc.) for each distributed project.

Contribution back to free-software projects is not necessarily a compliance issue, but the specification has a couple of requirements — policy-documentation requirements, not contribution requirements — in that area. The company's policy on how it contributes to projects must be documented, and a process must exist to implement that policy. The specification is clear that the policy can read "contributions to free-software projects are not allowed under any circumstances"; all that matters is that the policy is written down and enforced.

Finally, the company has to affirm that it has met all of the above requirements; at that point, it can claim to be in conformance for the next 18 months. There is no form of external review to confirm a company's claims in this area, so receiving a software distribution from a company claiming OpenChain compliance will still involve a certain degree of trust. A supplier claiming that compliance will be showing a certain degree of awareness of the problem, which is a start, but down-chain companies will still have the choice of accepting the supplier's word that the specification has been followed or continuing to do their own due-diligence work.

For companies that are not steeped in the values of the free-software community, the OpenChain specification is likely to be useful as a checklist documenting the things that need to be done to stay out of trouble. That, alone, is a useful contribution. Whether OpenChain succeeds in increasing supply-chain compliance and reducing costs will depend on how seriously companies take it. If down-chain companies start insisting on certification and, possibly, third-party certification, it could lead to better license compliance throughout the industry. If few companies bother, or if bogus self-certifications appear, then OpenChain will likely never gain any real relevance.

Comments (4 posted)

At the 2017 Free Software Legal and Licensing Workshop (LLW), Max Mehl presented some concerns about the EU radio-equipment directive (RED), which was issued in 2014. The worry is that the directive will lead device makers to lock down their hardware, which will preclude users from installing alternative free software on it. The problem is reminiscent of a similar situation in the US, but that one has seemingly been resolved in favor of users—at least for now.

Mehl is a program manager at the Free Software Foundation Europe (FSFE), which is the organizer of LLW. He has been working on the RED issue, which is one of the FSFE's active programs.

The RED is not a law, but instead directs EU member countries to pass laws compatible with its contents. The intent of RED is mainly to harmonize and modernize the standards governing radio equipment and to regulate software-defined radio (SDR). There are parallels to the "router lockdown" by the US Federal Communications Commission (FCC) but, in Mehl's opinion, the problem is worse in the EU.

The "radio lockdown" part of RED is just a small piece. Article 3(3) says that "radio equipment" must be built so that it complies with a long list of requirements. One of those, 3(3)(i), is where the concerns lie:

(i) supports certain features in order to ensure that software can only be loaded into the radio equipment where the compliance of the combination of the radio equipment and software has been demonstrated.

As with many things in the field of law, the definitions of the terms are important. "Radio equipment" is defined as all devices that intentionally emit and/or receive radio waves for communication, though there are a small number of exceptions (e.g. amateur radio, marine and airborne products)—the RED only applies to new devices, however. That definition could be read to apply to a wide variety of hardware, including laptops (with WiFi and Bluetooth), smartphones, routers, GPS receivers, televisions, FM radios, and so on.

The "compliance" portion of 3(3)(i) seems to say that manufacturers have to be able to prove that any software that is able to run on the hardware is in compliance with the applicable radio regulations. Those regulations include things like frequency ranges, transmission strength, purity of signal, and so on. But that piece also says that manufacturers need to implement "certain features" (which is ill-defined) to ensure that only those proven combinations can be run. That is where lockdown rears its head.

RED was adopted in April 2014, with a deadline of June 2016 for it to be implemented in national laws. At this point, Germany and other countries have not yet done so, however. June 13 of this year is supposed to be the deadline for all new devices to comply with the requirements, but that has been put on hold for now. In April, the European Commission (EC) said that the old standards can be used until the European Telecommunications Standards Institute (ETSI) finishes its harmonization and modernization work. So, right now is a transition period for the standards and for the radio lockdown part of RED; but it is only a matter of time, Mehl said, before devices will need to comply.

There are multiple actors in this particular play. ETSI is tasked with updating the standards. The EC and its DG GROW directorate are responsible for RED. The EU parliament is overseeing the work. And the EU member states are tasked with reviewing RED, implementing it, coming up with penalties for not following it, and so on.

The general idea of keeping radios from misbehaving—using frequencies or power levels that interfere with other users—may seem quite reasonable, but trying to ensure that it is not possible has a number of possibly unintended consequences. One obvious way that device makers can enforce the directive would be to only run software that is authorized for running on the device. That might use technologies like secure boot, DRM, and signed binaries to restrict what software users can install on their devices.

That would be especially bad for free-software enterprises and projects. Hardware manufacturers would somehow need to check every software package that will run on their devices. Software makers would be dependent on the hardware vendors to do those checks; those vendors could use the process to discriminate against various types of software, licenses, or companies. Free-software projects like Linux, OpenWrt, and Android could also be affected since they all work with various kinds of radio receivers and transmitters.

There are also security and privacy implications because complying with RED could add complexity and might make it impossible for privacy-friendly software to be installed. Device lifetimes would be completely at the whim of the manufacturer since users could not make their own updates or swap to something that is still being updated.

The FSFE has spent the last one and a half years working on the problem. It is trying to build an alliance with other enterprise and community actors. Part of that is the Joint Statement against Radio Lockdown that has been signed by 48 different organizations and companies.

One solution might be for the EC to define certain device classes or categories that are affected by RED such that as many devices as possible are excluded. That will take at least two to three years to happen—if it does. Several months ago, the FSFE applied to the EC to join the expert group on reconfigurable radio systems, which would assist in defining these classes or categories, but the application has not yet been answered.

There are still quite a few open questions about RED, Mehl said. The scope of devices and software is totally unclear. Linux laptops have WiFi chips (and, potentially, other radio devices); does that mean new laptops cannot allow Linux to be installed? Will third-party software revisions each need to be assessed by all of the different hardware vendors? When will ETSI complete the standards update and what will that contain? How can users' and developers' rights be maintained under RED? And so on.

Mehl suggested that those interested in the issue start by talking with the FSFE. Those who support it should consider signing the joint statement. There is also a mailing list for experts to get involved with the project. Finally, supporters should also contact the EC DG GROW, ETSI, and their national authorities to further support the effort.

[I would like to thank Intel, the Linux Foundation, and Red Hat for their travel assistance to Barcelona for LLW.]

Comments (33 posted)

As of this writing, nearly 12,000 non-merge changesets have been pulled into the mainline repository for the 4.12 development cycle. About 7,500 of these have been pulled since the first 4.12 merge-window summary. Read on for an overview of what has been merged in the last week.

The not-yet-complete 4.12 merge window looks likely to be one of the busiest ever. For the curious, recent history looks like this:

Changesets pulled during the merge window

    4.0     8,950
    4.1    10,659
    4.2    12,092
    4.3    10,756
    4.4    11,528
    4.5    10,305
    4.6    12,172
    4.7    10,707
    4.8    11,618
    4.9    14,304
    4.10   11,455
    4.11   10,960
    4.12   11,869

Since the 4.12 merge window is not yet complete, the final number in that table is not actually final yet. There is a good chance that 4.12 will end up being the second-busiest merge window ever. On the other hand, 4.9 is likely to remain unchallenged in its position as the busiest for some time yet.

Some of the more interesting user-visible changes merged in the last week include:

The new GETFSMAP ioctl() command can be used to explore the physical extent mappings used within a filesystem. It can be used, for example, to determine which files contain a given physical block. This patch documents GETFSMAP. The XFS and ext4 filesystems will have support for GETFSMAP in 4.12.

The new "function-fork" tracing option will, when trace events are limited to a specific set of processes, cause any new child processes to be added to the set.

The 9pfs filesystem can now be used to transport data between multiple Xen domains.

The kernel finally has proper support for USB type-C connectors.

The PowerPC architecture can now support virtual address-space sizes up to 512TB. By default, though, processes are limited to 128TB; that limit can be raised by passing a hint to mmap() as described in this article.

The ARM64 architecture now has kernel crash-dump functionality.

KVM now supports the MIPS "VZ" virtualization mechanism. On the x86 architecture, KVM has dropped support for the device-assignment mechanism; all users should be using the VFIO interface instead.

Quite a bit of the activity in this merge window took the form of new device drivers; new hardware support includes:

Audio: RME Fireface 400 controllers, MOTU 828mk2 and 828mk3 controllers, Cirrus Logic CS35L35 amplifiers, Dioo DIO2125 amplifiers, Everest Semi ES7134 codecs, Hisilicon hi6210 I2S controllers, Maxim MAX98927 amplifiers, Nuvoton NAU8824 audio codecs, and STMicroelectronics STM32 digital audio interfaces.

Graphics: MegaChips stdp4028-ge-b850v3-fw and stdp2690-ge-b850v3-fw display bridges, generic LVDS panels, R-Car DU Gen3 HDMI encoders, Samsung S6E3HA2 DSI video mode panels, and Sitronix ST7789v controllers.

Industrial I/O: Devantech SRF04 ultrasonic range sensors, ChromeOS EC light and proximity sensors, Analog Devices ADXL345 3-axis digital accelerometers, Maxim MAX30102 heart rate and pulse oximeter sensors, Linear Technology LTC2632 digital-to-analog converters (DACs), Linear Technology LTC2497 analog-to-digital converters (ADCs), STMicroelectronics VL6180 sensors, Motorola CPCAP PMIC ADCs, Aspeed ADCs, Maxim max9611, max9612, max1117, max1118, and max1119 ADCs, Qualcomm SSBI PM8xxx PMIC crystal-oscillator ADCs, and STMicroelectronics STM32 DACs.

Media: Mediatek JPEG codecs, RainShadow Tech HDMI CEC controllers, and OmniVision OV5645 and OV5647 sensors.

Miscellaneous: Arctic Sand arc2c0608 backlight controllers, Freescale i.MX23/i.MX28 ADCs, TI lighting management units, Dialog Semiconductor DA9061 power-management ICs, X-Powers AXP20X and AXP22X ADCs, X-Powers AXP20X and AXP22X multiplexors, ROHM BD9571MWV regulators, ROHM BD9571 GPIO controllers, TI TPS65132 Dual Output Power regulators, TI TSC2007 touchscreen controllers, Technologic Systems TS-73xx SBC FPGA managers, Lattice iCE40 FPGAs, Aspeed ast2400/2500 host LPC to BMC bridge controllers, Maxim DS2438 battery monitors, Hitachi HD44780 character LCD controllers, Xilinx LogiCORE PR decouplers, Qualcomm Technologies L3-cache performance-monitoring units, Adafruit SH1106 LCD controllers, Intel image processing units (200,000 lines of new staging code), ARM TrustZone CryptoCell C7XX crypto accelerators, Palmchip BK3710 PATA controllers, Freescale i.MX7 system reset controllers, NVIDIA Tegra power management controllers, and MediaTek pulse-width modulators.

Networking: Intel Omni-Path virtual network interface controllers, Realtek RTL8723BS SDIO Wireless LAN NICs (109,000 lines of staging code), and Freescale DPAA2 Ethernet controllers.

PCI: Faraday Technology FTPCI100 PCI controllers and MicroSemi Switchtec PCIe switch management interfaces.

USB: Qualcomm QUSB2 and QMP PHYs and Fairchild FUSB302 type-C interfaces.



Changes visible to kernel developers include:

There is a new "virtual media controller" driver in the media subsystem. Like the "vivid" virtual camera device, it is meant to be a demonstration of the interfaces as well as a comprehensive test case.

The Android low-memory killer implementation has been removed from the staging tree.

The kvmalloc() allocation function (and a few variants) has been added. kvmalloc() will try to allocate memory with kmalloc(), but will fall back to vmalloc() if need be. These helpers have replaced a lot of duplicated fallback code elsewhere in the kernel.

The minitty TTY replacement was not merged, but a fair amount of preparatory work for minitty was included in the TTY pull for 4.12.

The PCI bus layer has gained support for controllers that can operate in endpoint mode. See Documentation/PCI/endpoint/pci-endpoint.txt for details.

The (relatively) new refcount_t reference-counting type was added in 4.11 and exported to GPL-compatible modules only. In 4.12 (and likely in forthcoming 4.11 stable updates) refcount_t will be changed to use EXPORT_SYMBOL() and, thus, will be accessible to all modules.

The merge window can be expected to close by May 14. At this point we have certainly seen the bulk of the changes that can be expected for this development cycle. A final update next week will cover any stragglers that show up in the final days of this merge window.

Comments (5 posted)

On April 26, the grsecurity project announced that it was withdrawing public access to its kernel-hardening patch sets; henceforth, they will be available only to paying customers of Open Source Security, Inc., the company behind this work. This move has yielded quite a bit of discussion and no small amount of recrimination. It is not clear, though, that the right conclusions are being drawn from this change.

The grsecurity patch set is intended to harden the kernel against a wide range of potential attacks. Its mitigations range from simple techniques like making important data structures read-only to relatively advanced defenses like RAP. Over the years, the developers involved have certainly broken new ground and come up with some novel solutions; as a result, these patches are appreciated by a fair community of users — a community that has lost free access to this work in the future.

Grsecurity and the mainline

These patches have not, as a whole, found their way into the mainline kernel, for a number of reasons. One is the simple fact that the kernel development community historically has not fully appreciated the need for kernel hardening. There was an increasingly unwarranted level of confidence in the kernel's inherent security and a lack of desire to face the trade-offs that often come with hardening patches. As a result, Linux fell behind the state of the art in this area, and external patches like grsecurity were the best way to increase the kernel's resistance to attacks.

Over the last couple of years or so, there has been a shift in the kernel community that has made it more open to hardening work; the ongoing efforts of the grsecurity developers certainly deserve some credit for this change. As a result, a number of technologies, most inspired by the grsecurity work, have made their way into the mainline, though the code has often changed significantly (or been entirely rewritten) along the way. Current kernels have address-space layout randomization, post-init read-only memory, hardened user-space access routines, GCC plugins for the build process, reference-count hardening, virtually mapped kernel stacks, and more. There have been numerous complaints and snide comments, especially from the grsecurity camp (example), on how these features have been implemented, and it may well be that many of them are not as strong as one would like. But they represent progress nonetheless.

Why not just merge the grsecurity patches outright? Because that is not how the kernel community works. Changes to the kernel are split into small, reviewable pieces, (sometimes) documented, and each is debated on its own merits. The grsecurity patches are not split in this manner and, in many cases, they do not meet the kernel community's standards in their current form. Like it or not, there are trade-offs to be made with any patch, including security patches, and those trade-offs, which may include performance and long-term maintainability issues, must be discussed.

Security patches tend to be discussed more fiercely than others, partly because some kernel developers see a relatively small advantage to them for a relatively large cost. The value of kernel hardening is more apparent to some than to others. But security is not unique in this way; getting invasive changes of just about any type into the core kernel is never an easy task. The grsecurity developers have had little interest in doing the work to get their patches into the mainline; the same is true of the bulk of those who use the grsecurity patch set.

Nothing obligates any of these people to put in that effort, but the implication is that this work will fall to others who are interested in seeing these patches upstreamed. Finding developers willing to take on that work and deal with the ensuing discussions has not always been easy, but the pace of upstreaming has increased in recent years anyway. Much of this work has been done under the aegis of the Kernel Self-Protection Project (KSPP), an informal group of which founder Kees Cook is the most visible member.

Why grsecurity went away

The withdrawal of the public grsecurity patch set has led to a certain amount of finger pointing and recrimination, with many blaming KSPP or just about anybody except the grsecurity developers themselves for the change. Consider, for example, this statement from a group that calls itself "HardenedLinux" (or "h4rdenedzer0"):

As the result of a discussion inside h4rdenedzer0, we believe that Linux foundation is the culprit behind all this result that the commercial/individual/community users losing access to the test patches.

Or this post from Mathias Krause:

I think the main reason for Brad [Spengler] and PaX Team to make their work private is the increased amount of work [KSPP] has put on them without providing any valuable work in return. They just don't want to be forced to maintain and fix-up the variants of grsecurity/PaX features KSPP lands upstream.

Both claims are somewhat difficult to back up.

The Linux Foundation does not control kernel development and does not run KSPP. What the Linux Foundation has done is to fund, via the Core Infrastructure Initiative, some of the work to bring grsecurity features into the mainline. Paying for work on free software, intended to improve the security of the Linux kernel, does not seem like a particularly blameworthy activity, so it's not entirely clear why the Linux Foundation is being faulted here. There is talk of too many press releases; these do exist, including one issued on May 4, but it seems strange to expect that the Linux Foundation should not highlight the work it is doing.

The claim that KSPP is somehow making life harder for grsecurity is also hard to justify. Those developers need not participate in this upstreaming work, and they need not accept the results of that work if they prefer their own patches. They do have to keep up with the pace of kernel development in general if they want the work that the rest of the community has been doing, but KSPP has not created that issue — that is a well-known cost of maintaining an out-of-tree patch set. The upstreaming work has also brought benefits to grsecurity, demonstrated by the fact that grsecurity has incorporated that work into its own patch set, as outlined in detail by Cook.

Much of the discussion seems based on the notion that the kernel community somehow owes something to the grsecurity developers for their work. That viewpoint overlooks something important: the entire reason Open Source Security Inc. is in business in the first place is that it gets an entire kernel, for free, that is actively maintained and extended by about 4,000 developers every year. It is a huge gift for a group of developers who, say, want to create a security consulting company. They can never repay the value of that gift; indeed, even the largest contributors to the kernel get more from it than they put in. That is why they are contributors in the first place.

In a sense, all users of the kernel owe a debt to everybody who has contributed over the years. And every one of us who has contributed has made a gift to all Linux users. The value of the grsecurity patches is significant, but it pales in comparison to the value of the kernel as a whole. Grsecurity is not special in relation to the vast body of other contributions on which it is built.

So public access to grsecurity was not withdrawn due to workload, and it was not withdrawn because its developers have been somehow cheated by the development community. There is a much more likely explanation here: the movement of hardening features into the mainline kernel poses challenges to the developers' business model, which depends on providing security features not found there. Closing off access is meant to slow that movement and preserve the differentiation of the grsecurity patch set. This decision is within their rights to make, as long as they stay within the terms of the GPL (and there have been no serious claims that they have done otherwise), but it should be seen as what it is: a business move intended to keep their work proprietary in spirit.

The existing patch set will continue to exist, of course, and it may well be that some suitably motivated developers will work to keep it updated with future kernel releases. Meanwhile, the work to bring more security features into the mainline kernel will continue. The kernel will get more secure over time, even without the minimal involvement of the grsecurity developers. Hopefully they will be successful in their business and will draw some satisfaction that, through the process of upstreaming, their older work will eventually improve the security of hundreds of millions of kernel users.

Comments (101 posted)

A farewell to set_fs()?

The archaeological evidence is murky, but it would appear that the kernel's set_fs() function was added in November 1991 by a certain Ted Ts'o; it was in the 0.10 release. It is, thus, one of the oldest APIs found within the kernel itself. Careless use of set_fs() has always been an easy way to create security bugs; a recent attempt to make these bugs harder to exploit may instead result in this function being removed altogether.

The original role of set_fs() was to set the x86 processor's FS segment register which, in the early days, was used to control the range of virtual addresses that could be accessed by unprivileged code. The kernel has, of course, long since stopped using x86 segments this way. In current kernels, set_fs() works by setting a global variable called addr_limit, but the intended functionality is the same: unprivileged code is only allowed to dereference addresses that are below addr_limit. The kernel's access_ok() function, used to validate user-space accesses throughout the kernel, is a simple check against addr_limit, with the rest of the protection being handled by the processor's memory-management unit.
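The core of that check can be modeled in a few lines of ordinary C. This is a hypothetical user-space sketch, not the kernel's code: addr_limit, USER_DS, and access_ok() here only mirror the kernel's names, the real implementation is architecture-specific, and the USER_DS value below merely stands in for a typical x86-64 user-space limit.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the x86-64 user-space address limit. */
#define USER_DS 0x7ffffffff000UL

/* Model of the kernel's per-thread limit; set_fs() raises this. */
static uintptr_t addr_limit = USER_DS;

/* Nonzero if [addr, addr + size) lies entirely below addr_limit,
 * written to avoid integer overflow when addr + size would wrap. */
static int access_ok(uintptr_t addr, size_t size)
{
	return size <= addr_limit && addr <= addr_limit - size;
}
```

A raised addr_limit (which is what set_fs(KERNEL_DS) amounts to) makes this same check pass for kernel addresses, which is exactly the behavior the rest of this article turns on.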

The addr_limit variable, thus, marks the partition between user and kernel space. One might think that such a limit would be fixed, with good reasons for changing it being few and far between. As it happens, there are nearly 400 set_fs() calls in the kernel. Usually, such calls are made to allow code that is normally restricted to accessing user-space memory to operate on a range of kernel memory instead. In 0.10, for example, it was added so that the exec() system call could use the normal filesystem I/O routines to read an executable image into memory that was not yet part of the calling program's address space.

The usual pattern for use of set_fs() looks like this code snippet from the splice() system call:

    old_fs = get_fs();
    set_fs(get_ds());
    res = vfs_readv(file, (const struct iovec __user *)vec, vlen,
                    &pos, 0);
    set_fs(old_fs);

This sequence temporarily raises addr_limit so that vfs_readv(), which is normally restricted to reading data into user-space memory, can read data into a kernel-space pipe buffer.

In 2010, it was discovered that, if the kernel could be made to oops between the two set_fs() calls, the second call restoring the address limit would never be made; that left kernel data open to being overwritten by user space. Hilarity, as they say, ensued in the form of CVE-2010-4258. That problem is long since fixed. In late 2016, though, an Android bug was reported for an LG touchscreen driver; there was a way to cause that driver to raise addr_limit and return to user space, once again leaving the kernel open to exploitation.
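The failure mode is easy to demonstrate with a user-space model of the pattern. This is a simplified sketch, not real kernel code: USER_DS, KERNEL_DS, get_fs(), and set_fs() merely mirror the kernel names, and the simulated "oops" is an ordinary early return.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's address-limit values. */
#define USER_DS   0x7ffffffff000UL
#define KERNEL_DS (~0UL)

static uintptr_t addr_limit = USER_DS;

static uintptr_t get_fs(void) { return addr_limit; }
static void set_fs(uintptr_t fs) { addr_limit = fs; }

/* Returns 0 on success, -1 when the simulated oops fires.  The bug:
 * the error path bails out without restoring old_fs, leaving
 * addr_limit at KERNEL_DS after this function returns. */
static int kernel_io_helper(int oops)
{
	uintptr_t old_fs = get_fs();

	set_fs(KERNEL_DS);	/* let "user" accessors reach kernel range */
	if (oops)
		return -1;	/* BUG: skips the restoring set_fs() */
	set_fs(old_fs);
	return 0;
}
```

Once addr_limit is stuck at KERNEL_DS, every subsequent access_ok() check in that context passes for kernel addresses, which is precisely what made CVE-2010-4258 and the LG driver bug exploitable.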

set_fs() is clearly the sort of interface that can easily create severe security bugs. It is also a tempting shortcut that tends to find its way into code of questionable quality such as out-of-tree drivers. In an attempt to harden the system against set_fs() bugs, Thomas Garnier posted a simple patch changing the system-call code so that it would check addr_limit before returning to user space. If it ever finds an incorrect value, it causes a system panic — a severe response, but probably better than allowing an exploit to occur.
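The check itself is conceptually tiny, as this user-space model suggests. The names below are modeled on the kernel's, not copied from the actual patch, and where the kernel would call panic(), this sketch just returns a status.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the user-space address limit. */
#define USER_DS 0x7ffffffff000UL

static uintptr_t addr_limit = USER_DS;

/* Run on the system-call return path: 1 if the state is sane,
 * 0 where the kernel would panic because a set_fs(KERNEL_DS)
 * was never undone. */
static int verify_pre_usermode_state(void)
{
	return addr_limit == USER_DS;
}
```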

Nobody disagreed with the goal of this patch, but it ran into a problem that is familiar to security developers: its impact on performance. As Ingo Molnar pointed out, the patch adds several instructions to the system-call path, which is one of the most performance-sensitive parts of the kernel. Adding overhead to system calls will slow down everything the kernel does; when one considers how many Linux machines would be executing this code on every system call, one begins to think that its carbon footprint might rival that of a small country. That is not a cost to be paid lightly.

Molnar suggested adding some sort of static analysis to the kernel build system instead. The standard pattern of set_fs() calls should be amenable to some sort of static analysis, he said, but Kees Cook argued that the problem was not quite so simple and that the cost of the patch was worth paying. "Until we can eliminate set_fs(), we need to add this check", he said.

As it happens, some other developers were already considering removing set_fs(), which has, arguably, hung around for far longer than it really should have. Christoph Hellwig suggested removing all calls outside of the core filesystem and architecture code; Andy Lutomirski went one step further and said they should all go. Without set_fs(), the kernel would be more secure, and the code that checks user-space memory accesses could become that much simpler.

Removing set_fs() depends on replacing those calls with a better alternative, of course. Many set_fs() calls exist to enable I/O to kernel-space memory; it should be possible to replace the bulk of those using the iov_iter interface. Hellwig has already started doing this replacement.
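As a rough illustration of what that replacement looks like, the splice() pattern shown earlier could instead describe its kernel buffer with a kvec. This is kernel-side code shown as a sketch only: it will not build outside the kernel tree, and the exact iov_iter calling conventions have varied between releases.

```c
/* Instead of lifting addr_limit with set_fs(), describe the kernel
 * buffer explicitly and let the iov_iter machinery handle it. */
struct kvec kv = { .iov_base = buf, .iov_len = len };
struct iov_iter iter;

iov_iter_kvec(&iter, READ | ITER_KVEC, &kv, 1, len);
res = vfs_iter_read(file, &iter, &pos);
```

The key design difference is that the iterator carries its own "this memory is kernel-space" annotation, so no global state needs to be flipped and no error path can forget to flip it back.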

Another common pattern occurs in compatibility code where, for example, a structure passed to an ioctl() call from a 32-bit user-space process is converted to the 64-bit equivalent in kernel space, then passed to the regular ioctl() implementation. See do_compat_ioctl() in the media subsystem for an example. In such cases, it's just a matter of splitting that implementation into two pieces: one that fetches the argument from user space, and one that actually performs the desired action.
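The resulting shape can be sketched as follows; this is hypothetical driver code, with frob_args, do_frobnicate(), frob_args_from_compat(), and my_dev all invented for illustration.

```c
/* Hypothetical example: do_frobnicate() operates only on a
 * kernel-space copy of the argument, so both entry points can call
 * it directly and neither needs set_fs(). */
static long do_frobnicate(struct my_dev *dev, struct frob_args *args);

static long my_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	struct frob_args args;

	if (copy_from_user(&args, (void __user *)arg, sizeof(args)))
		return -EFAULT;
	return do_frobnicate(file->private_data, &args);
}

static long my_compat_ioctl(struct file *file, unsigned int cmd,
			    unsigned long arg)
{
	struct frob_args32 args32;
	struct frob_args args;

	if (copy_from_user(&args32, compat_ptr(arg), sizeof(args32)))
		return -EFAULT;
	frob_args_from_compat(&args, &args32);	/* widen the 32-bit fields */
	return do_frobnicate(file->private_data, &args);
}
```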

Other set_fs() calls will have to be dealt with in other ways. But it would appear that this ball is now rolling with a certain amount of momentum. Given the benefits of removing set_fs(), it would not be surprising to see much of this work merged for 4.13, with the task completed not long thereafter. It will be the end of a longstanding traditional kernel-code pattern, but it's doubtful that many developers will mourn its passing.

Comments (8 posted)