Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.


It has only been a few years since DNF replaced Yum as the default Fedora package-management tool; that was done for Fedora 22 in 2015, though DNF had been available for several earlier Fedora releases. Since that time, DNF development has proceeded; it started a move from Python/C to all C in 2016 and has made multiple releases over the years. From an outsider's perspective, no major changes seem necessary, which makes the announcement of DNF 3, and a move to C++, a bit surprising to some.

For many years, Yum was the package-management front-end for many RPM-based distributions, including Fedora and RHEL. But it suffered from poor performance and high memory use; part of that was attributed to its iterative dependency solver. DNF was meant to fix those problems. It uses the libsolv dependency resolution library developed by openSUSE, by way of the hawkey library.

Though it wasn't a perfect drop-in replacement for Yum, DNF did replace it. But, even though DNF performed better, often much better, than its predecessor, the project continued to focus on making it faster. Ultimately, that is a large part of the reasoning behind DNF 3.

Since the switch to DNF, it has steadily improved performance-wise, while maintaining the features and commands that users have been accustomed to since way back in the Yum days (just switching from "yum" to "dnf" on the command line). Under the covers, though, things were shifting around: hawkey got subsumed into libhif, which turned into libdnf. It is libdnf that is the main focus of the upcoming changes; DNF itself is envisioned as a small wrapper around the library.

In the announcement on the DNF blog, the focus on performance is evident. There is a graph comparing the in-development version of libdnf with the versions that shipped in Fedora 26 and 27. In all five of the queries tested, the new code is faster; in some cases massively so (from 77 seconds to 2.5, and from 40 to 2.4, for example). Beyond better performance, the DNF 3 effort is also targeting consistent behavior between various package-management components (between DNF and PackageKit, in particular), a better defined and maintained API, and compatibility ("as much as possible") with the existing C and Python APIs. For a number of reasons, the project believes that switching to C++ will help it achieve those goals.

DNF project leader Daniel Mach posted a note about the announcement to the Fedora devel mailing list on March 22. It will probably not come as a surprise that some of the reaction was about the language choice. Matěj Cepl said:

When switching the programming [language] than I would think there are some better C-successors than C++, namely Rust? Mad rush of giving up on 46 years old language and switching to one which is just 33 years old seems a bit bizarre to me.

Using Rust would require LLVM, though; that's a step too far for Neal Gompa:

I'm okay with not dealing with LLVM for my system package manager, thank you very much. I'd be more open to Rust if Rust also could be built with GCC, and thus supported across literally everything, but no one is investing in that effort.

Gompa was also concerned about the difficulty of programming in Rust versus C++. Martin Kolman thinks C++ makes a better choice, rather than simply following the most-recent trend: "it is not bad to be a bit conservative when core system components are concerned". Given that the project has already made its choice and started development down that path, C++ is basically a fait accompli at this point.

Marcin Juszkiewicz raised the issue of sharing repository metadata between users, which is something of a longtime sore point for some. If a regular user runs DNF to query about the availability or version of some package, DNF will go off and download the latest metadata into /tmp. To some, that's a big waste of time; to others, who might not have a speedy internet connection, it is more or less intolerable. Since there is already a copy of the metadata available (root's version), it should be used by everyone, he said.

The downside of that idea, of course, is that the metadata in /var/cache/dnf may be out of date; regular users should not be able to update the system copy, so there are worries about giving outdated responses to these queries. Tom Hughes asked if Juszkiewicz had a proposal on how to share the metadata "in a secure way". He also wondered what should happen if the metadata was out of date.

Juszkiewicz had a fairly straightforward thought about how a regular user would interact with the metadata: "Use. Not download. And inform (like always) how old that data is." As he and others in the thread pointed out, Debian's APT separates the metadata update from the query functionality, while DNF (like Yum before it) combines the two; if a DNF query finds out-of-date metadata, it seeks to remedy that before proceeding. Several thread participants seemed to prefer the APT scheme (or at least some way to share read-only access to the metadata). It is not at all clear what, if anything, will happen with that, but it is an annoyance, at least, for some.

Some other thoughts and enhancements were batted around a bit, but the overall impression is one of muted approval of the idea. Most probably just hope that the changes don't really affect them at all, other than seeing faster query responses. That is probably the DNF developers' hope as well; package managers are the kinds of programs that can be mostly ignored—so long as they work.


Kernel developers go to some lengths to mark read-only data so that it can be protected by the system's memory-management unit. Memory that cannot be changed cannot be altered by an attacker to corrupt the system. But the kernel's mechanisms for managing read-only memory do not work for memory that must be initialized after the initial system bootstrap has completed. A patch set from Igor Stoppa seeks to change that situation by creating a new API just for late-initialized read-only data.

The most straightforward way to create read-only data is, of course, the C const keyword. The compiler will annotate any data marked with const, and the linker will ensure that it is placed in memory that ends up being marked read-only. But const only works at build time. The post-init read-only data mechanism, adapted from the grsecurity patch set, takes things a step further by marking data that can be made read-only once the system's initialization process has completed. Data structures that must be set up during boot, but which need not be modified thereafter, can be protected in this way.

Once initialization is completed, though, the (easy) ability to create read-only data in the kernel goes away. At that point, any additional memory needed must be allocated dynamically, and such memory is, by its nature, writable. So, while a kernel subsystem may well allocate memory, fill it in, and never change it again, there is no mechanism in place to actually block further modifications to that memory.

The proposed new API could be such a mechanism; it is called the "protectable memory allocator", or "pmalloc". The core concept is that a subsystem allocates a "pool" for a set of objects that will all be rendered read-only at the same time. The individual objects can be allocated from the pool and initialized; then the whole thing is set in stone. Or, in terms of code, one starts with:

    #include <linux/pmalloc.h>

    struct pmalloc_pool *pool = pmalloc_create_pool();

The return value (on success) is a pointer to a pool from which objects can be allocated. Thereafter, objects can be allocated from the pool with any of:

    void *pmalloc(struct pmalloc_pool *pool, size_t size);
    void *pzalloc(struct pmalloc_pool *pool, size_t size);
    void *pmalloc_array(struct pmalloc_pool *pool, size_t n, size_t size);
    void *pcalloc(struct pmalloc_pool *pool, size_t n, size_t size);
    char *pstrdup(struct pmalloc_pool *pool, const char *s);

The basic allocation function is pmalloc(), which allocates a chunk of memory of the given size from the pool. Variants include pzalloc() (which zeroes the memory before returning it), pmalloc_array() (to allocate an array of objects), pcalloc() (which perhaps should be called pzalloc_array()), and pstrdup() (which allocates memory and copies a string into it).

When the process of allocating and initializing objects has run its course, the entire set of objects associated with the pool is made read-only with a call to:

    void pmalloc_protect_pool(struct pmalloc_pool *pool);

It's worth noting that it is still possible to allocate objects from the pool after a call to pmalloc_protect_pool(); the newly allocated objects will be writable until the next call. Calling pmalloc_protect_pool() frequently while objects are still being allocated will protect new objects more quickly, but it may also waste the unallocated space in each page that is write-protected. There is no way to "unprotect" memory once it has been made read-only; the protection of memory in pmalloc is meant to be a permanent thing.

While there are numerous variants of pmalloc() to obtain protectable memory, there is no pfree() function. The only way to release this memory is to get rid of the entire pool with:

    void pmalloc_destroy_pool(struct pmalloc_pool *pool);

This call releases all of the objects allocated from pool, so the caller should be sure that none of them are still in use.

Underneath this interface, pmalloc uses vmalloc() to obtain a range (some number of pages) of memory; that range is then managed to satisfy individual allocation requests. As a result, the pmalloc functions cannot be used in atomic context, but it is hard to imagine a situation where that would be necessary in any case. Marking the pool read-only is a matter of dipping into the internals of the memory-management layer to tweak the page-table entries for the pages holding the pool.

The advantage of using pmalloc is that it can protect objects from unwanted modifications after they have been initialized. There is one significant limitation, though. While the protection is applied to the page-table entries in the vmalloc() area containing the pool, the underlying memory remains writable in the main system memory map. So an attacker who can determine where an object sits in physical memory may be able to bypass the protection applied by pmalloc entirely. Pmalloc thus places another obstacle in an attacker's path, but is not an absolute protection against modification.

One thing that is missing from the current patch set is a set of in-kernel users of the new API. So it is not entirely clear where Stoppa intends this mechanism to be used. That omission will probably have to be rectified at some point; the kernel community is reluctant to merge new APIs without in-tree users to show how those APIs work in real-world use. Once that has been filled in, the community should have some time to debate the merits of this interface before the 4.18 development cycle begins.


Energy-aware scheduling — running a system's workload in a way that minimizes the amount of energy consumed — has been a topic of active discussion and development for some time; LWN first covered the issue at the beginning of 2012. Many approaches have been tried during the intervening years, but little in the way of generalized energy-aware scheduling work has made it into the mainline. Recently, a new patch set was posted by Dietmar Eggemann that only tries to address one aspect of the problem; perhaps the problem domain has now been simplified enough that this support can finally be merged.

In the end, the scheduler can most effectively reduce power consumption by keeping the system's CPUs in the lowest possible power states for the longest time — with "sleeping" being the state preferred over all of the others. There is a tradeoff, though, in that users tend to lack appreciation for saved power if their systems are not responsive; any energy-aware scheduling solution must also be aware of throughput and latency concerns. A failure to balance all of these objectives across the wide range of machines that run Linux has been the bane of many patches over the years.

There have been a number of clever ideas that have been attempted, of course. Small-task packing tries to group small, sporadic processes onto a small number of CPUs to prevent them from waking the others. Other patch sets have used a spreading technique in an attempt to evacuate CPUs with relatively low loads. There has been talk of a separate power scheduler whose job is to run each CPU at the optimal power level for the current workload. The energy cost model created a data structure to track the performance and energy cost of each processor state and used it to inform scheduling decisions. The SchedTune CPU-frequency governor allows some tasks to be designated as "important", with the less-important ones being relegated to low amounts of CPU power. Some of these ideas have influenced the mainline scheduler but, as a whole, they remain outside.

Saving energy is valuable in almost every setting from tiny embedded systems to supercomputer installations. But the pressure tends to be most acutely felt in the area of mobile systems; the less power a device uses, the longer it can run before exhausting its battery. It is thus not surprising that most of the energy-aware scheduling work has been driven by the mobile market. The Android Open Source Project's kernel includes a version of the energy-aware scheduler patches; those have been shipping on handsets for some time. Scheduling, as a result, is one of the areas where the Android and mainline kernels differ the most.

Eggemann's patch set is intended to reduce that difference by proposing a simplified version of the Android scheduler. To that end, it only addresses the problem for asymmetric systems — those with CPUs that have varying power characteristics, such as the ARM big.LITTLE processors. Since the "little" processors are much more energy-efficient (but much slower) than the "big" ones, the placement of processes in the system can have a significant effect on both energy consumption and performance. Improving task placement under mainline kernels on big.LITTLE systems is arguably the most urgent problem in the energy-aware scheduling area.

To get there, the patch set adds a simplified version of the energy-cost model used in the Android scheduler. It is defined entirely with these two structures:

    struct capacity_state {
        unsigned long cap;      /* compute capacity */
        unsigned long power;    /* power consumption at this compute capacity */
    };

    struct sched_energy_model {
        int nr_cap_states;
        struct capacity_state *cap_states;
    };

The units of both cap and power are not really defined, but they do not need to be as long as they are used consistently across the CPUs of the system. There is one capacity_state structure for each power state of each CPU, so the scheduler can immediately determine what the cost (or benefit) of changing a given CPU's state would be. Each CPU has a sched_energy_model structure containing the data for all of its available power states.

This information, as it turns out, is already available in some systems at least, since the thermal subsystem makes use of it to help keep the system from overheating. That is a useful attribute; it means that a scheduler with these patches could be run on existing hardware without the need to provide more information (through device-tree entries, for example).

The scheduler already performs load tracking, which allows it to estimate how much load each process will put on a CPU when it is run there. That load estimate is used along with the energy model to determine where a task should run when it wakes up. This is done by looking at each CPU in the scheduling domain where the process last ran and determining what the energy cost of placing the process on each CPU would be. Essentially, if the CPU would have to go to a higher power state to run the added load in a timely manner, the cost would be the additional energy needed to sustain that higher state. In the end, the CPU with the lowest added cost is the one that will run the new process.

The process wakeup path is rather performance-critical, so the above algorithm raises some red flags. Iterating over every CPU in the system (or even just a subset in a given domain) could become quite expensive in a system with a lot of CPUs. This algorithm is only enabled on asymmetric systems, which minimizes that cost because such systems (currently) have a maximum of eight CPUs. Those also are the systems that benefit most from this sort of energy-use calculation. Data-center systems with large numbers of identical CPUs would see little improvement from this approach, so it is not enabled there.

Even on asymmetric systems, though, this algorithm will not help if the system is already running near its capacity; in that case, the CPUs will already be running at a high power point and there is little value to be had from looking at power costs. If the scheduling domain where the process last ran is determined to be "overutilized", defined as running at 80% of its maximum capacity or higher, then the current wakeup path (which tries to find the most lightly loaded CPU) is used instead.

Some benchmarks posted with the patch set show some significant energy-use improvements with the patches applied — up to 33% in one case. There is a small cost in throughput (up to about 2% in one test, but usually much lower) that comes with that improvement. That is a cost that most mobile users are likely to be willing to pay for that kind of battery-life improvement.

Discussion of the patch set has mostly been focused on implementation details so far, and there has not yet been input from the core scheduler maintainers. So there is no way to really know whether this approach has a better chance of getting over the acceptance hurdle than its predecessors. Given that it is relatively simple and the costs are only paid on systems that benefit from this algorithm, though, one might expect that its chances would be relatively good. Acceptance would not unify the mainline and Android schedulers, but it would be a big step in the right direction.


The 4.16 development cycle is shaping up to be a relatively straightforward affair with little in the way of known problems and a probable release after nine weeks of work. In comparison to the wild ride that was 4.15, 4.16 looks positively calm. Even so, there is a lot that has happened this time around; read on for a look at who contributed to this release, with a brief digression into stable kernel updates.

As of this writing, 1,774 developers have contributed 13,439 non-merge changesets during the 4.16 development cycle. That work grew the kernel by about 195,000 lines overall. By recent standards, 4.16 is a relatively calm cycle, and certainly calmer than the 14,866-changeset 4.15 cycle. Still, that is quite a bit of work to integrate in nine weeks.

The most active developers in the 4.16 cycle were:

Most active 4.16 developers

By changesets:

    Arnd Bergmann          184   1.4%
    Chris Wilson           184   1.4%
    Colin Ian King         163   1.2%
    Mauro Carvalho Chehab  131   1.0%
    Jakub Kicinski         122   0.9%
    Russell King           114   0.8%
    Gilad Ben-Yossef       114   0.8%
    Hans de Goede          108   0.8%
    Al Viro                105   0.8%
    Markus Elfring         105   0.8%
    Christoph Hellwig      100   0.7%
    Eric Biggers            96   0.7%
    Christian König         94   0.7%
    Greg Kroah-Hartman      92   0.7%
    Ville Syrjälä           84   0.6%
    Masahiro Yamada         83   0.6%
    Andy Shevchenko         82   0.6%
    Geert Uytterhoeven      80   0.6%
    Darrick J. Wong         78   0.6%
    Thierry Reding          77   0.6%

By changed lines:

    Feifei Xu              68942  10.0%
    Andi Kleen             18156   2.6%
    Tomer Tayar            13758   2.0%
    Felix Fietkau          10056   1.5%
    Mauro Carvalho Chehab   8674   1.3%
    Michael Chan            7021   1.0%
    Gilad Ben-Yossef        7010   1.0%
    Hans de Goede           6849   1.0%
    Linus Walleij           6821   1.0%
    Greg Kroah-Hartman      6772   1.0%
    Thierry Reding          6761   1.0%
    Tony Lindgren           6533   0.9%
    Tero Kristo             6271   0.9%
    Jakub Kicinski          6261   0.9%
    Masahiro Yamada         6000   0.9%
    Sean Young              5148   0.7%
    Russell King            4988   0.7%
    Vinod Koul              4878   0.7%
    Miquel Raynal           4751   0.7%
    Frederic Barrat         4717   0.7%

Arnd Bergmann made improvements all over the tree, fixing year-2038 issues, compiler warnings, and more. Chris Wilson made a long list of changes to the Intel i915 graphics driver, Colin Ian King contributed many cleanup patches, Mauro Carvalho Chehab worked mostly in the media subsystem (of which he is the maintainer), and Jakub Kicinski worked extensively in the networking and BPF subsystems.

In the "lines changed" column, Feifei Xu topped the list by cleaning up some AMD graphics driver header files, removing 58,000 lines of code in the process. Andi Kleen updated perf events data for several Intel processors, Tomer Tayar made a number of changes to the QLogic Ethernet and SCSI drivers, and Felix Fietkau worked mainly on the new mt76 network driver.

Work on 4.16 was supported by 230 employers that we were able to identify, a fairly typical number for recent development cycles. The most active companies this time around were:

Most active 4.16 employers

By changesets:

    Intel                  1424  10.6%
    Red Hat                 971   7.2%
    (Unknown)               962   7.2%
    (None)                  895   6.7%
    AMD                     677   5.0%
    IBM                     566   4.2%
    Linaro                  524   3.9%
    Renesas Electronics     373   2.8%
    Mellanox                366   2.7%
    Google                  365   2.7%
    SUSE                    337   2.5%
    (Consultant)            333   2.5%
    ARM                     328   2.4%
    Oracle                  320   2.4%
    Huawei Technologies     295   2.2%
    Samsung                 272   2.0%
    Texas Instruments       233   1.7%
    Broadcom                201   1.5%
    Netronome Systems       192   1.4%
    Canonical               185   1.4%

By lines changed:

    AMD                   97644  14.2%
    Intel                 73566  10.7%
    (Unknown)             33700   4.9%
    Red Hat               33027   4.8%
    (None)                31155   4.5%
    IBM                   26329   3.8%
    Linaro                25245   3.7%
    (Consultant)          20772   3.0%
    Cavium                18173   2.6%
    Samsung               16587   2.4%
    ARM                   16368   2.4%
    Broadcom              13868   2.0%
    Texas Instruments     13597   2.0%
    Code Aurora Forum     13437   2.0%
    Oracle                13335   1.9%
    Bootlin               13038   1.9%
    Mellanox              12999   1.9%
    Google                12281   1.8%
    Huawei Technologies   11781   1.7%
    ST Microelectronics    9672   1.4%

As usual, there are few surprises here. While a lot of companies support work on the Linux kernel, the list of companies that contribute the most work remains pretty steady from one development cycle to the next.

As can be seen here, the 4.16 cycle in general was short on surprises; it can be seen as a sort of return to normal after the ups and downs of 4.15 brought about by the response to the Meltdown and Spectre vulnerabilities. At this point, most of the work to deal with those issues has been done, so the kernel community has gone back to work producing ordinary releases.

Some stable statistics

There have been a lot of relatively large stable kernel updates in recent times; it appears that the pace of fixes going into the stable trees has increased. Curious as to whether that was true or not, your editor crunched some numbers from the stable kernel repository on kernel.org, which contains the history of most of the stable kernel releases; the results looked like this:

[Charts: stable kernel update activity since 3.0]

A few things do stand out from those charts. The 3.x era saw quite a few kernels receive extended maintenance, often with various distributors maintaining kernels that only they shipped. Over the last couple of years, that pattern has evened out to one kernel release per year. The policy of identifying the long-term-stable releases ahead of time and getting distributors to base their work on those releases appears to be paying off.

The number of changesets applied to stable kernels does seem to have grown a bit over time. The 4.15 kernel has received nearly 1,100 changesets already, and it's only been out for a couple of months. 4.14, which is a long-term release, has received nearly 2,900 fixes since its release on November 12. Some of the numbers for older releases are impressive as well; 4.9 has received 6,600 fixes, while 3.2 has gotten nearly 8,800. That is a lot of changes going into "stable" kernels.

One interesting reason for all of these fixes is a more aggressive effort to identify fixes that should go into the stable trees, even if the developers of those fixes and the maintainers who merge them didn't identify them as such. That includes a semi-automated component that Greg Kroah-Hartman described this way:

Seriously, it's close to magic, there's a tool that Sasha [Levin] is using that takes "machine learning" to match patches that we have not applied in stable kernels to ones that we have, and try to catch those that we forgot to tag for the stable tree. Not all subsystems mark stable patches, so this is an attempt to catch those fixes that should be getting backported but are not either because the developer/maintainer forgot to mark it as such, or because they just never mark those types of patches.

If anybody has wondered why they have to plow through vast numbers of "AUTOSEL" messages in the linux-kernel list, this is why. The mailing-list traffic may be annoying to the few of us who still follow linux-kernel, but many of the fixes identified by this tool are being backported as far as 3.2 and made available to users. That said, it is worth noting that some developers are not entirely comfortable with the amount of backporting that is being done.

The 4.4-stable series has seen 124 releases as of this writing. Here is a summary of where the 7,575 fixes applied to 4.4 came from:

Source of 4.4-stable patches

Developers:

    Arnd Bergmann          202   2.7%
    Takashi Iwai           167   2.2%
    Greg Kroah-Hartman     163   2.2%
    Johan Hovold           144   1.9%
    Eric Dumazet           125   1.7%
    Alex Deucher            76   1.0%
    Dan Carpenter           73   1.0%
    Al Viro                 70   0.9%
    Thomas Gleixner         59   0.8%
    Eric Biggers            56   0.7%
    James Hogan             54   0.7%
    Andy Lutomirski         49   0.7%
    Herbert Xu              42   0.6%
    Nicholas Bellinger      42   0.6%
    Florian Westphal        41   0.5%
    Steven Rostedt          39   0.5%
    Jan Kara                38   0.5%
    Alan Stern              37   0.5%
    Tejun Heo               36   0.5%
    Trond Myklebust         35   0.5%

Employers:

    Red Hat                   788  10.5%
    (None)                    608   8.1%
    Intel                     557   7.4%
    Google                    504   6.7%
    SUSE                      478   6.4%
    (Unknown)                 461   6.2%
    Linaro                    354   4.7%
    IBM                       317   4.2%
    (Consultant)              217   2.9%
    Oracle                    205   2.7%
    Linux Foundation          197   2.6%
    AMD                       166   2.2%
    ARM                       141   1.9%
    Imagination Technologies  135   1.8%
    Mellanox                  101   1.4%
    Canonical                 100   1.3%
    Samsung                    91   1.2%
    Facebook                   85   1.1%
    Broadcom                   76   1.0%
    Linutronix                 74   1.0%

All of this activity in the stable trees makes it clear that the development of a kernel release doesn't stop when Linus Torvalds declares it ready and moves on. By the time a kernel release gets to users, it will likely have had thousands of fixes applied to it. Efforts within the community to get vendors to use the long-term stable kernels appear to be paying off, so more of those fixes are actually reaching the users that need them, which can only be a good thing.


Many people have seen music visualizations before, whether in a music player on their computer, at a live concert, or possibly on a home stereo system. Those visualizations may have been generated using the open-source music-visualization software library that is part of projectM. Software-based abstract visualizers first appeared along with early MP3 music players as a sort of nifty thing to watch along with listening to your MP3s. One of the most powerful and innovative of these was a plugin for Winamp known as MilkDrop, which was developed by a Nullsoft (and later NVIDIA) employee named Ryan Geiss. The plugin was extensible by using visualization equation scripts (also known as "presets").

Sometime later, a project to implement a cross-platform, MilkDrop-compatible, open-source (LGPL v2.1) music visualizer began: projectM. The main focus of the project is a library (libprojectM) to perform visualizations on audio data in realtime—using the same user-contributed script files as MilkDrop—along with reference implementations for various applications and platforms. The project, which began in 2003 and was first released in 2004, is of interest to many for its creative and unique visuals, its use by media-player projects, and its interesting design and features. After years of development and contributions, the project stalled, but now there are efforts to rejuvenate and modernize the code.

How it works

LibprojectM is written in C++ and has a simple interface. The host application is responsible for creating an OpenGL context for the library to draw in, and then feeds in PCM audio data. From the audio data, the library extracts bass, mid, and treble amplitudes using the standard fast Fourier transform (FFT) that all audio visualizers use; it also attempts to perform beat detection to make the visuals synchronize with the music better. Each frame, the host application simply asks projectM to render its current visualization; it does that by drawing into the current OpenGL context. Here is the basic idea:

    renderFrame() {
        glClear(...);
        projectM->pcm()->addPCMfloat(pcmData, 512);
        projectM->renderFrame();
        flipOpenGLBuffers(...);
    }

When projectM renders a frame, it does so by passing the features extracted from the audio data to equations drawn from the currently selected MilkDrop preset file. There are two sets of equations (described in detail here): one set is evaluated per-frame and describes the shape, rotation, size, and colors of the waveform being drawn from the FFT data; the other is a set of per-vertex equations for more complex transformations and deformations. These per-vertex equations are calculated for each point in a mesh (the points are the vertices), then interpolated to the current screen display size, allowing less-powerful computers to reduce the number of vertices and sacrifice some accuracy for speed. MilkDrop preset files can also contain DirectX GPU shader programs, allowing for more complex programs and faster computation.

There are a number of other features for projectM, in varying degrees of completeness, including the ability to render to an OpenGL texture, better support for GPU shaders, "native" presets written in C++, preset playlists, the ability to control the speed of transitions between presets, and text overlays with Freetype GL.

Work needed

While the core of libprojectM works well and performs its task dutifully, there is a great deal of work still needed to make it a modern and usable piece of software. The development stalled for a period of several years until renewed interest restarted efforts to continue improving it.

ProjectM's OpenGL drawing is done with what are known as legacy "immediate-mode" calls. This OpenGL interface is one where the application transfers vertex information for each frame via API calls to the GPU. Drawing a triangle looks something like this:

    glBegin(GL_TRIANGLES);          // feed in color and geometry to generate triangles
    glColor3f(1.0f, 0.0f, 0.0f);    // draw in red
    glVertex2f(0.0f, 0.0f);         // three vertices in object space
    glVertex2f(25.0f, 25.0f);
    glVertex2f(50.0f, 0.0f);
    glEnd();                        // finished drawing triangles

This is inferior to the newer API where vertices are uploaded to the GPU in the form of vertex buffer objects (VBOs); in contrast, the immediate-mode calls need to send the geometry every frame from the CPU with numerous API calls. The geometry of a VBO only needs to be sent to the GPU once if the geometry doesn't change. VBOs can be colored and deformed through the use of shaders, which are GPU programs that operate on vertices and pixel colors.

The modest number of immediate-mode calls that projectM makes need to be converted to VBO code. This will not only improve performance but it is also needed for compatibility with newer OpenGL implementations, namely OpenGL ES, which is the specification for embedded systems. Until the OpenGL calls are updated, it is unlikely an open-source projectM will ever be able to run on an embedded system, as many have desired. It's worth noting that this work was done by one of the main authors of projectM and a previous maintainer, but he has not chosen to share his modifications with the project.

MilkDrop was heavily Windows-based, implemented with DirectX, Win32 APIs, and assembler. ProjectM did a good job of replicating the functionality in a cross-platform manner, but one DirectX-specific piece remains: the shader code in the preset files. As mentioned previously, some presets contain GPU shader programs; because they were written for MilkDrop, they are in HLSL, the shader language for DirectX. Support for HLSL was provided in projectM by NVIDIA's Cg toolkit, but that has long been deprecated and is unsupported. Either manual or automatic conversion of the shaders (possibly using something along the lines of HLSL2GLSL for Unity) is needed, along with code to compile and upload the results. This would greatly increase performance and capabilities, enable the most advanced presets, and drop the dependency on an out-of-date, unsupported proprietary framework.
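As a rough illustration of the gap (this example is mine, not taken from a preset), a trivial HLSL pixel shader and its GLSL equivalent differ in types, parameter semantics, and built-in functions, which is why a translation layer or converter is required:

```
// HLSL (MilkDrop/DirectX style): semantics annotate inputs and outputs
float4 PS(float2 uv : TEXCOORD0) : COLOR
{
    return tex2D(sampler_main, uv) * float4(1.0, 0.5, 0.5, 1.0);
}

// GLSL (OpenGL) equivalent: inputs are declared as globals
uniform sampler2D sampler_main;
varying vec2 uv;
void main()
{
    gl_FragColor = texture2D(sampler_main, uv) * vec4(1.0, 0.5, 0.5, 1.0);
}
```

The differences are mostly mechanical (float4 vs. vec4, tex2D() vs. texture2D(), semantics vs. globals), which is what makes automatic translation plausible.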

Building with GNU Autotools

Work was done recently to replace the problematic CMake-based build system with GNU Autotools. This was done partly because of the limitations and complexity of CMake, coupled with its less-ubiquitous status; Autotools is more widely known. The core library and the SDL2-based example host application build without incident on macOS, Linux (with both GCC and Clang), and FreeBSD 11 using a well-known command:

$ ./configure && make

The other reference implementations that come with the project have not yet been updated to build with Autotools, so a small amount of work is needed there. It is unclear which implementations (if any) are actually in use. The switch to Autotools has caused some small disturbances for downstream package maintainers, who may lump the library together with the reference implementations; the difference in build systems can lead to confusion about the state of the project.

There were some compile-time feature flags exposed by CMake, such as render-to-texture and Freetype GL support; these need to be added to the Autotools-based build. Help is sought from anyone with GNU Autotools experience who wants to finish the migration for the various implementations and feature flags. Support for building on Windows would be useful as well.
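A compile-time flag of that sort would typically be exposed in configure.ac along the lines of the following sketch (the flag, variable, and preprocessor names here are illustrative, not the project's actual ones):

```
AC_ARG_ENABLE([freetype-gl],
    AS_HELP_STRING([--enable-freetype-gl],
                   [enable text overlays via Freetype GL]))
AS_IF([test "x$enable_freetype_gl" = "xyes"],
      [AC_DEFINE([USE_FTGL], [1], [Build with Freetype GL support])])
```

A user would then select the feature with "./configure --enable-freetype-gl", mirroring the options that the CMake build offered.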

Reference implementations

projectM's source repository contains many examples of host applications and libraries making use of libprojectM, including XMMS, JACK, PulseAudio, macOS iTunes, Qt, screensavers, and libsdl2. Many of these are in a questionable state; some have not been updated in approximately a decade. Work needs to be done to go through the various implementations and either update them or determine whether they should get the axe. The most recently updated was the macOS iTunes plugin, which enjoyed some recent popularity until an update caused it to stop functioning properly, with no cause determined as of yet.

The most useful and up-to-date reference implementation is based on libsdl2, which is a cross-platform media library used in many media applications and games; it is currently supported in large part by Valve. This program can read in audio from a capture device, which is a feature only recently added to libsdl2. With some improvements, such as proper configuration-file support (already implemented in libprojectM), more keyboard-input support, and some sort of basic user interface, the projectM SDL application would make a nice cross-platform standalone audio-visualization program.
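Opening a capture device with SDL2 looks roughly like the following sketch (the callback is a hypothetical function that would hand PCM samples to projectM; error handling is omitted):

```cpp
#include <SDL2/SDL.h>

// Hypothetical callback that would feed captured PCM to projectM.
static void audio_callback(void *userdata, Uint8 *stream, int len)
{
    /* pass samples to the visualizer ... */
}

SDL_AudioDeviceID open_capture(void)
{
    SDL_AudioSpec want = {}, have;
    want.freq     = 44100;
    want.format   = AUDIO_F32;
    want.channels = 2;
    want.samples  = 512;
    want.callback = audio_callback;

    // The second argument (iscapture = 1) selects a recording device;
    // capture support was only added in SDL 2.0.5.
    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 1, &want, &have, 0);
    SDL_PauseAudioDevice(dev, 0);  // start delivering audio to the callback
    return dev;
}
```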

Higher-level language bindings, such as for Python, could also be useful.

Community involvement

Of course, projectM is an open-source project like any other. It's been put together over the years by contributions from developers who like the project and find it useful, either for getting stoned and watching trippy visuals with their dubstep MP3s, for integrating into media players, or for projecting at live music shows. There is interest in the project, but more developers and contributors are needed to modernize and polish it. That will make it easier for other open-source projects to include a powerful and versatile music-visualization library.

Contributors are welcome to come file issues and send pull requests at the GitHub repository. There is a legacy SourceForge page that is still up, but it is not where development is happening. While there are plenty of useful features inside projectM, there is still lots of work to be done: making the code base usable by end users as well as downstream maintainers, porting it to new platforms such as embedded devices and the web, and keeping pace with changing libraries and APIs.

[Mischa Spiegelmock is the current maintainer and a small-time contributor to projectM.]

Comments (18 posted)

We may need Tor, "the onion router", more than we ever imagined. Authoritarian states are blocking more and more web sites and snooping on their populations online—even routine tracking of our online activities can reveal information that can be used to undermine democracy. Thus, there was strong interest in the "State of the Onion" panel at the 2018 LibrePlanet conference, where four contributors to the Tor project presented a progress update covering the past few years.

According to panelist Nathan Freitas of the Guardian project, many people are moving from virtual private networks (VPNs) to Tor. And in turn, the open research done by the Tor community is being used by VPN providers to improve their own security. Some background here may be useful: a lot has been heard over the past few years about VPNs. Worries about snooping have led businesses and individuals to install them, but they weren't really designed for anonymous Internet use. Their goal is not to prevent attackers from knowing that person A communicated with person or site B—which is crucial connection information that anonymous Web users are trying to hide—but just to encrypt the communications themselves. VPNs are also designed to be integrated into organizations' internal networks, more than for standalone use on the Internet.

User experience (UX) was a major topic on the panel, especially if the term is taken broadly. Isabela Bagueros, UX team lead at Tor, said the project looks into UX far beyond just the appearance or behavior of the browser. The team also takes network performance and community feedback into account. Thus, many topics discussed by the panel—such as porting Tor to Android devices and improving memory use—can fall under the heading of "user experience".

Bagueros explained that Tor is not like traditional Internet projects that can routinely collect information on user behavior. Tor has to diligently protect its users' anonymity and avoid collecting any data without consent. The project can, however, recruit users to voluntarily let it collect information on performance and related browsing experiences. Tor is currently seeking to hire a director for its user testing project and has another position open for a user advocate.

Improvements in the user interface include more consistent fonts and colors, and a clearer display of circuits—how a user's Web requests travel through the routers in Tor's network—along with tools for viewing details. A new style guide allows far-flung free-software developers to build new tools that stay consistent with the choices made by the designers of Tor's interface, Bagueros said. Documenting the style should in turn make development go faster, meaning more features in a timely manner. Steph Whited, communications director at Tor, also described a new guide to relays, which should help increase the size and reach of the Tor network.

Many popular Web sites that are frequent targets of blocking offer Tor access through the .onion domain. Bagueros said that Tor is encouraging these sites to prompt non-Tor visitors and let them know that .onion access is available.

Android support is becoming critical as people in developing nations seek safe access to the Web. Tor is important, for instance, for LGBTQ people in many Middle Eastern countries. It is also popular in Brazil and Indonesia, Freitas said, where many more people have access to mobile devices than to personal computers. The Android app for accessing Tor is currently called Orfox, but Freitas said it will soon be named simply "Tor Browser for Android", to reduce confusion. Android users can also choose to route particular apps through Tor. A #tor-mobile IRC channel is devoted to this project. Freitas reminded us that a user would have more secure anonymity by running the Tor browser on a free operating system such as GNU/Linux, but Tor on Android is better than no Tor at all.

Freitas said that people are even running their own routers on mobile devices. Tor puts extra resource burdens on these devices, of course, because of the constant network and memory use. This leads us to the comments by panelist Nick Mathewson (who is one of the founders of the Tor project) on network improvements.

Mathewson said that a recent distributed denial-of-service attack on Tor—either a malicious attack or possibly a poorly designed browser that went haywire—prompted the network developers to significantly improve Tor's efficiency and, in particular, to reduce its memory consumption. This should make it more usable on mobile devices as well as reduce its overall footprint. The list of routers returned to every Tor user is more compressed now, and is updated more frequently with smaller updates, which should also reduce the network burden for mobile devices.

When testing Tor on mobile devices, Mathewson said, developers learned that it consumed far too much power, causing Android to respond by putting Tor to sleep and re-awakening it as often as eleven times per second. The team has greatly reduced power usage since that finding.

Anonymity is improved by the new generation of onion-service names, which are more resistant to enumeration attacks. Previously, attackers could harvest the names of existing services; now the attackers have much greater difficulty finding out that the services exist at all. The new names are longer and harder to type and remember, but they are much more secure. Mathewson said that Tor developers are talking to other projects, such as Bitcoin, to learn how to make secure names that are more human-readable and memorable. Mathewson also said that Tor should be resistant to quantum computer attacks on its crypto by this time next year, an intriguing boast that I would love to hear more about. Finally, Mathewson said that a lot of development is moving to the Rust programming language, which is expected to greatly reduce buffer overflows and similar kinds of problems.

The panelists reported that China is blocking the IP addresses of relays that it sees being used as exit points to access Web resources. Tor is taking some steps to make it more expensive to block them.

On the communications side, Tor offers new web sites for support and for the community. Whited described some of the steps the project is taking to raise its visibility and connect more consistently with users and its fan base. An "Onion Everywhere" campaign is trying to increase the use of Tor. Tor is tweeting more often and posting to its blog at least once a week. The project is publicizing human interest stories about journalists and others who are using Tor to benefit the public interest. One recent app allows people to submit evidence to the International Criminal Court anonymously through Tor, for example.

A member of the audience who works with the distributed social network Mastodon suggested integrating it with Tor, which Mathewson said was an interesting idea but probably could not be a priority for the busy Tor network developers.

This panel illuminated responses that dedicated Tor developers and staff are making to the growing demand for safe, anonymous Web browsing. It certainly gave the impression that onion routing is a critical part of the contemporary Internet structure, to give everyone in the world access to information they have a right to have. I'm sure that attacks on Tor will increase, and that we'll hear more in the mainstream press about both the access provided by onion networks and the challenges they face.

Comments (15 posted)