Many of us share a vision for the way software, free or otherwise, is developed: a programmer writes “source code,” which is transformed through some mechanism into “object code.” As free software activists, we are used to thinking about our legal, development, and community processes and tooling in terms of this workflow. But what happens when software that used to be written manually by humans is instead developed generatively by other software? How does this affect software and user freedom?

Of course, in speaking of the above I am talking about artificial intelligence (AI), a topic and term which is both compelling and vague. At one point almost everything in the world of computers was considered to be “artificial intelligence,” including fundamental building blocks like compilers. This has led to what is sometimes called “the AI effect,” where everything is considered “artificial intelligence” until we know how to do it. This in turn has led to pushback not only against the term “AI” but even against its pursuit; why chase a concept which ceases to exist once uncovered?

I think this misses something important: regardless of the term's vagueness, the original visions for AI aimed for machines which could think for themselves, or even program themselves. This vision permitted the idea of “generative software,” where humans were not manually writing most of the logic of the system. But much of the funding and effort behind AI research tapered off through the “AI Winter” which settled in during the 1980s and 1990s. Since then, we've seen the majority of programming resources going towards other things: web development, graphical interfaces, business needs, games, and so on. Typically, development has involved humans manually writing the logic underpinning the system.

Recently this has been changing. There has been much news about Google's systems beating Atari games and Go champions, not through manually written strategies, but through neural networks trained to build something resembling human intuition. Likewise, many more companies are hiring for positions involving “machine learning” to reduce the amount of manual programming required.

In other words, the AI Winter has thawed. So where does this leave free software?

One question we might ask is, “do user freedom questions still apply?” Let's consider a scenario. (Thanks to Gerald Sussman for inspiring this example through conversations at the FSF's 30th anniversary party.) Imagine you are riding in a generatively programmed self-driving car, and the car unexpectedly swerves off the road into a ditch. Afterwards, you would like to ask the car, “why did you do what you did?” Via some mechanism, you could in theory “talk to” the machine's generated AI system and pose exactly that question. But will the car manufacturer permit you to do so?

Through this example, we can quickly realize that all four software freedoms still apply: the freedoms to run, study, redistribute, and distribute modified versions of an AI. It is also easy to see that not everyone might want you to have these rights; one can easily imagine a less scrupulous manufacturer saying, “I'm sorry, we can't let you talk to that AI... that's our AI.” (Thus one can easily see that even generative software should not have owners.)

So, software freedom applies, but how does it “work”? It may be hard to apply the methodologies we are used to when humans are not manually programming the software in use. (This led fellow free software activist Asheesh Laroia to observe that perhaps this demonstrates that “open source as methodology” was a distraction, and that software freedom was the real goal all along.) Still, one can imagine collaborative methodologies that do work on the basis of sharing some dataset; perhaps many users, programmers and non-programmers alike, could help train software generated via genetic programming.
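To make the genetic programming idea concrete, here is a minimal sketch in Python. Everything in it is illustrative (the operator set, population size, and mutation scheme are assumptions for the example, not any real project's code): user-contributed input/output cases stand in for a shared training dataset, and the arithmetic program that fits them is evolved rather than written by hand.

```python
import random

random.seed(0)  # reproducible toy run

# Programs are nested tuples like ("add", ("mul", "x", "x"), 1.0).
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
TERMINALS = ["x", 1.0, 2.0]

def random_expr(depth=3):
    """Generate a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    """Interpret an expression tree at input x."""
    if expr == "x":
        return float(x)
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(expr, cases):
    """Lower is better: sum of squared errors over shared cases."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in cases)

def mutate(expr, depth=2):
    """Replace a randomly chosen subtree with a fresh random one."""
    if random.random() < 0.3 or not isinstance(expr, tuple):
        return random_expr(depth)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def evolve(cases, pop_size=200, generations=40):
    """Keep the best quarter each generation; refill via mutation."""
    population = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda e: fitness(e, cases))
        survivors = population[: pop_size // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=lambda e: fitness(e, cases))

# "Users" contribute cases sampled from a hidden target (here x**2 + 1);
# the program that fits them is generated, not manually written.
cases = [(x, x * x + 1) for x in range(-5, 6)]
best = evolve(cases)
```

The point is not this toy's quality but the shape of the collaboration: what the community shares and improves together is the dataset of cases, while the program itself is an artifact grown from it.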

And what of our legal tools? Does copyright apply? Does copyleft apply? If not, are there other ways to protect the commons of software being developed as others attempt to lock it down?

There are multiple approaches to generative software, from machine learning to symbol-based expert reasoning systems to genetic programming. Some of these may be more appealing than others; systems which clearly express their symbolic reasoning may be preferable (and are more “accountable”). At this time, what is most important is to get more free software activists exploring this space.

Happy spelunking!