At SCALE 13x in Los Angeles, Christian Hergert presented a session about his recent work on Builder, which is often referred to as an integrated development environment (IDE) for the GNOME desktop. But, as Hergert explained, the game plan for Builder extends far beyond the typical confines of IDE functionality. Builder, as he sees it, is GNOME's opportunity to reach newcomers who are learning about programming for the very first time—then to teach them the software-development skills that will help them become contributors.

Hergert started off by explaining the roots of the Builder project—a tale that began with a brief history of the GNOME project. Over the past 15 or so years, he said, GNOME has always placed a lot of emphasis on providing a good user experience. Often that has put GNOME in a position where it uncovers pain points elsewhere in the stack—many users, for example, first discovered bugs in their video drivers when they tried early versions of GNOME 3. Similar problems were uncovered with PulseAudio configuration and network-connection management when GNOME set out to implement better user interfaces for those subsystems.

One result of all of this work in other parts of the stack, though, is that "the GNOME desktop environment" today means something much larger than the window-manager-and-dock that it meant in the early days. It includes a window compositor, media pipelines, input methods, filesystem abstractions, key management, tools for working with printers and devices, and APIs for geolocation and many other services. Thus, every year, the workload to maintain GNOME gets bigger—yet the GNOME community has remained approximately the same size for quite some time. Clearly, something more could be done to attract and cultivate new developers.

The other side of the story was Hergert's, rather than GNOME's. In 2012, he had published an open offer online, saying he would teach anyone the basics of coding in C if they just emailed him. Although he expected a handful of responses, he actually got thousands. In fact, he said, his inbox is still full of requests he has not had the time to answer. "And I'm a dinosaur," he added, "for using C. Imagine what the numbers would be if we were talking about Python or JavaScript."

Hergert tried to fulfill the students' requests for C lessons, he said, but ultimately failed. Initially he blamed himself for that but, upon more reflection, he stopped taking it personally and started thinking about the sorts of questions the students had asked. The tools available were not up to the challenge: important utilities like Autotools were arcane and difficult to learn. But what struck him the most was that these students were interested in learning about free software, and the inability to "get started" with programming in general was essentially driving them away from GNOME and, presumably, toward other platforms.

What GNOME needs, he decided, is some way to guide newcomers through the getting-started process while also introducing them to the platform. Android, for instance, has a "developer mode" that does more than simply switch on new privileges: it also provides tools and helps the user learn about the software.

Those are the goals of GNOME Builder, the project that Hergert initiated in mid-2014. Builder will include an IDE, but the IDE will be geared toward developing software that works on GNOME—hooking into the build system, project management tools, diagnostics, and much more. Moreover, programming tutorials will be built in, as will templates for working with GNOME libraries and a code-search tool that is linked to the GNOME source code.

The vision behind Builder is ambitious. Hergert quit his day job to start the project; he was able to support himself for about four months working on Builder full-time, and subsequently ran a crowdfunding campaign that will pay for at least six more months of development time. In addition to himself, he said there are about 50 active contributors.

Currently the project involves two distinct pieces: the Builder user interface and the LibIDE shared library. It is "pre-pre-alpha," he said, but is fairly usable with C. Although he ultimately plans to focus on other languages, Hergert said it was important to get C support working first in order to bootstrap the project. As of now, essential code editing, syntax highlighting, symbol resolution, and code search are working, and there is a sizable collection of boilerplate and template code for GNOME development that Builder can access. The initial keybindings and editor functionality implement a Vim mode, though Hergert said that an Emacs mode had landed just days earlier.

LibIDE is the focus of most of Hergert's development time. One of the more interesting features is live diagnostics: if the user types a syntax error, the UI can highlight it using squiggly underlines akin to spellchecker warnings. That functionality is possible because the library uses Clang as its back end. Hergert said he would really like to use GCC as the compiler back end, but the GCC project's decision not to implement a library-like API (for things like accessing the abstract syntax tree) makes that impossible.

LibIDE supports Autotools (including Automake) at present, and will add other build systems like Meson and CMake next. Similarly, although the library only supports Git version control now, other version-control systems like Bazaar are on the to-do list.

LibIDE is also scriptable with both JavaScript and Python. Hergert showed example scripts for automatically translating newlines on each save and for triggering a rebuild whenever a specific device (say, a Raspberry Pi) is attached over USB. The latter feature relates to Builder's support for cross-compiling—it is a specific goal to make Builder a useful IDE for writing programs for mobile devices and single-board computers. The scripting functionality can also be used to add new code-search providers, build extensions, and refactoring tools, he said.
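The LibIDE scripting API is still settling down, so rather than guess at its hook names, the sketch below shows only the newline-normalization logic that a save script of the kind Hergert demonstrated would run. It is plain Python; the function name is ours, not LibIDE's.

```python
def normalize_newlines(text, eol="\n"):
    """Convert any mix of CRLF, CR, and LF line endings to one style.

    A save hook could run logic like this on the buffer contents
    just before the file is written to disk.
    """
    # Handle Windows (\r\n) first so the lone-\r pass does not
    # split CRLF pairs into two line breaks.
    unified = text.replace("\r\n", "\n").replace("\r", "\n")
    return unified.replace("\n", eol)

# Mixed input with DOS, old-Mac, and Unix line endings:
mixed = "line one\r\nline two\rline three\n"
print(repr(normalize_newlines(mixed)))                # all "\n"
print(repr(normalize_newlines(mixed, eol="\r\n")))    # all "\r\n"
```

A real plugin would register this as a callback on the buffer's save signal; the normalization itself is the whole trick.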

He also discussed some of the GNOME integration work that is still under development. One example was writing a program that is meant to utilize a specific D-Bus service. The traditional approach might be to allow easy insertion of some boilerplate code, but that still leaves the user with the job of figuring out the right service to connect to and what syntactic sugar is required. Instead, LibIDE will provide a way to bring up a list of the currently running D-Bus services on the local machine; the user will only need to select the service of interest, and Builder will insert the properly customized code, rather than a blank boilerplate. Similarly, he hopes to have Builder automatically "do the right thing" for tasks like putting icons in the correct directory—tasks that, while supposedly simple, are often only dealt with well after development and during the packaging phase.
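As an illustration of the kind of customized code Builder might insert, here is a sketch of a tiny generator that renders the standard GIO `g_dbus_proxy_new_for_bus_sync()` C pattern for a service the user might have picked from the list. The generator function and the choice of the Notifications service are our own for illustration; Builder's actual output may differ.

```python
def gdbus_proxy_boilerplate(bus_name, object_path, interface):
    """Render C boilerplate for creating a GDBus proxy.

    In Builder, bus_name and friends would come from the user's
    selection in the list of running D-Bus services; here they
    are simply passed in.
    """
    return f'''\
proxy = g_dbus_proxy_new_for_bus_sync (G_BUS_TYPE_SESSION,
                                       G_DBUS_PROXY_FLAGS_NONE,
                                       NULL,  /* GDBusInterfaceInfo */
                                       "{bus_name}",
                                       "{object_path}",
                                       "{interface}",
                                       NULL,  /* GCancellable */
                                       &error);'''

snippet = gdbus_proxy_boilerplate("org.freedesktop.Notifications",
                                  "/org/freedesktop/Notifications",
                                  "org.freedesktop.Notifications")
print(snippet)
```

The point is that the emitted code is pre-filled with the correct well-known name, object path, and interface, rather than leaving blanks for the user to research.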

The session ended with a Q&A period. An audience member asked if writing the tutorials might be an opportunity to collaborate with an existing project like Codecademy. Hergert said that although he would like to do something like that, Codecademy is not a great fit, since it is not written with Linux systems (or GNOME in particular) in mind. Another question dealt with how Builder has uncovered pain points in the GNOME stack, like Hergert had mentioned at the beginning of his talk. He replied that there were some examples: he has decided that the normal GtkTextView widget does not work well for Builder's use case, and speculated that a "text grid" widget might be necessary to deal correctly with monospaced text navigation.

In conclusion, Hergert commented that much of what goes into making an IDE work is not visible to the user. The Builder UI is minimalist; most of the work takes place behind the scenes: "if we do our jobs right," he said, "you won't have to care about anything but the UI." Looking forward, Hergert said that LibIDE and Builder will probably not be ready for inclusion in the upcoming GNOME 3.16 release (currently expected in approximately five weeks), but he thought that GNOME 3.18 was a reasonable target.


At SCALE 13x in Los Angeles, Ruth Suehle spoke about the "maker" movement and its relationship to the open-source community—but she made it clear that, despite the affinity that the communities feel for each other, there are some stark differences between the two. The most troubling difference is that, particularly in recent years, the maker movement has drifted toward an "open by accident" model, without a strong commitment to freely sharing information. But open-source advocates can bring the maker movement back around, she said, by showing how they have addressed tricky problems like license compatibility and the challenge of making money while "giving everything away."

Suehle, who called herself a maker at heart, started off with a historical look at "making" in the physical world, from the advent of stone-age tools up through modern electronics. Sharing information is a through-line that permeates this history: early humans had to share information from person to person, she said. Imagine what the outcome would have been if one caveman had refused to discuss discovering fire, she suggested.

But in much more recent times, people decided to stop sharing their knowledge. The ancient Greek city-state of Sybaris granted a patent-like protection to cooks, safeguarding their recipes against imitators for a year. A bit later, Roman blacksmiths started putting literal "trade marks" on their wares. In the sixth century, the Irish missionary Saint Columba sparked one of the first conflicts over copyright when an abbot objected to Columba's practice of hand-copying books. The modern framework for patents originated with glassmakers in 1600s Venice—and rapidly spread to the rest of the world.

We now live in a world with contradictory messages about sharing, Suehle said. One of the first lessons children are taught is that sharing is important but, ironically, the adults who do the teaching no longer believe in the principle. In effect, they say "you should share your toys ... just as long as they're not my toys." This viewpoint, along with the rise of disposable consumer goods culture, led to the decline of fixing and repairing one's own property, she said.

The maker movement (at least, in the modern sense) started off as a revival of this older interest in fixing and modifying things. Suehle pointed out that the maker movement coincided with the prominence of "steampunk"—which just happens to be a throwback to an earlier era when technology was about hands-on work and tangible machinery.

Open by accident

Given its roots in the historical practice of sharing information, she said, it might seem like the maker movement should be enthusiastically committed to an "open by default" ethic. But that is not the way the maker movement is trending. Similarly, the open-hardware movement, while more formal about its principles than the decentralized maker movement, also seems to be drifting away from open-by-design ideas, with projects keeping certain parts of their work secret. Instead, she said, the movement seems to feature openness by accident, with people sharing their projects online solely because it is the "Internet age" and the Internet is the easiest way to publicize something.

By way of example, Suehle described her trip to the Open Hardware Summit in 2012. She went expecting to see lots of strong connections to the open-source movement, she said—but came away with deep concerns. Her write-up of the event for Opensource.com was headlined "Open Hardware Summit open to hybrid models," an assessment that she told the SCALE crowd was putting things optimistically.

In actuality, she found it deeply disconcerting how many high-profile speakers at the summit had downplayed or openly rejected the ideals of transparency and openness. She quoted keynote speaker Chris Anderson, who started off his talk by saying: "Everything I've learned as I built my own business is because people shared what they knew." But he followed that up a few minutes later with a different sentiment entirely, saying "I don't think we should be dogmatic. We need to consider other possibilities and approaches to open-based innovation."

In a more extreme example, she pointed out that MakerBot founder Bre Pettis had said in 2011 that "In the future, people will remember businesses that refused to share with their customers and wonder how they could be so backwards." But less than a year later, MakerBot took its previously open-hardware products closed. Pettis made that announcement at the summit:

For the Replicator 2, we will not share the way the physical machine is designed or our GUI because we don’t think carbon-copy cloning is acceptable and carbon-copy clones undermine our ability to pay people to do development.

Later, during his keynote at the event, Pettis referenced the community's reaction:

People said, 'You did open source hardware; this is totally allowed under the license. What did you expect?' It's true. They're right. This is the result of something we did, but that doesn't mean we have to like it.

The same story was found at Maker Faires, Suehle said. In early years, the events were dominated by booths from SparkFun and Radio Shack where visitors could learn to solder. Today, the exhibitors are predominantly there selling products—and, in many cases, products with (at best) tenuous connections to the maker movement, like Purina's latest line of cat feeders.

How open source can help

Suehle also noticed that essentially no one at these events was running Linux, which is telling. The maker community seems to be struggling today with many of the same problems that the open-source community solved ten years ago. Those problems include how to cope with project cloning, how to address legal issues, how to work with the user community, and how to make money.

The cloning issue, she said, is what MakerBot "freaked out" about, causing the company to take its Replicator 2 printer proprietary. But there are plenty of success stories among those companies that release only open-source products—Suehle's employer, Red Hat, being one, she said. And there are examples of successful open hardware closely tied to the open-source software world. The Raspberry Pi, she said, has been cloned and modified and duplicated many times; "if there's a fruit, somebody has made a 'Pi' board for it," she said. Yet that has not diluted the popularity or success of the Raspberry Pi Foundation's products.

Makers and open hardware projects have legal concerns distinct from open source, she said. While open-source software is driven by copyright licensing, factors other than copyright are involved when dealing with physical objects. The community has developed two separate open-hardware licenses: one from CERN and one from Tucson Amateur Packet Radio (TAPR). Both, interestingly enough, are named "Open Hardware License." Reconciling them may prove difficult, but that is the sort of problem that the open-source community has dealt with many times in the past.

Suehle pointed out that the open-source movement resolved many of its difficulties by working through them as a community, which the maker movement will probably do as well. Today much of the maker movement community is found in local and regional hackerspaces. The hackerspaces are often isolated from one another, but there are examples where the movement is working together in large-scale, national or international efforts, which is a promising sign. She gave the open medical-device community as a key example.

The next challenge for the maker movement will be to figure out viable business models that can make money, she said. Many of the movement's highest-profile successes have been crowdfunding campaigns. They can have benefits, such as building an interested user community before launch, but they are still far from an instant-success formula.

The good news, Suehle said in conclusion, was that the drift away from "open by default" thinking among a few key players in the maker movement by no means spells disaster. Ultimately, the maker movement is made up of millions of people, and the open-source community can help them re-center themselves. Makers are a community that likes to adapt and that thrives on innovation.

The open-source community knows both of those principles well—Suehle pointed out that open-source developers are, in fact, "makers" in their own right. The question is, what will the open-source community do to make things better in the maker movement, and to encourage the maker movement's virtuous cycle of innovation?


The newest update to the Krita digital painting application has been released. Version 2.9 introduces several new user-interface features, updates to the layers system, and a variety of tool and rendering improvements. The 2.9 development cycle was also the project's first to be centered around a crowdfunding campaign. In addition to raising funds, Krita's campaign allowed backers to vote on the priority of new feature work. The process was evidently successful enough that the Krita team is planning to use it during the next cycle as well.

The last major Krita release was version 2.8 in March of 2014, which we looked at at the time. The 2.9 builds for supported platforms are available on the downloads page. Linux users, however, may find that the packages are available through their distribution or through volunteer-run repositories. There is, for instance, a personal package archive (PPA) for Ubuntu-based systems.

The project started a crowdfunding campaign for the 2.9 cycle on Kickstarter in June, with a €15,000 goal that would be enough to fund one developer to work on twelve features to be chosen (by the donors) from a list of candidates detailed on the campaign page. A stretch goal would have supported a second paid developer targeting twelve additional features over the same period. The project beat its primary goal by a healthy margin (€19,955 in total), which meant the single-developer funding was secured. The project did, however, work on additional features during the development cycle. Backers voted on which features would be implemented, ultimately selecting an assortment of new transformation tools, improvements to the way Krita uses layers and masks, and fixes for some individual tools as the targeted feature set.

As of now, the project has successfully completed eleven of the twelve new features—in an email, Krita's Boudewijn Rempt said that the twelfth will arrive in the first point release following 2.9. It is always nice to see a free-software application have a successful fundraiser, of course, but the Krita campaign is also interesting because of the issues that the backers ranked as being of high importance. Specifically, many of them address transformations and layer manipulation, as opposed to (say) new brush engines. That suggests that Krita's core functionality—natural media simulation—is more-or-less meeting the needs of its users.

More than meets the eye

The transformation tools allow the user to warp and distort image content. The application has long had a standard-issue transform tool that could do skewing, stretching, and the like on a selected region. The new perspective transform is another option likely to seem familiar from other graphics applications; it lets the user distort the selection as if stretching it toward a vanishing point. A particularly nice touch is the red dots that help the user line up the transformation by showing where the vanishing points are. Users can also set up a "perspective assistant" using the ruler tool; the assistant is a grid overlay that provides lines running to the vanishing points so that it is easier to keep transformed objects properly aligned.
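Perspective transforms of this sort are conventionally expressed as a 3x3 homography matrix applied in homogeneous coordinates; the divide-by-w step is what makes parallel lines converge toward a vanishing point. The sketch below shows that math in plain Python; it is a generic illustration, not Krita's implementation.

```python
def apply_homography(h, x, y):
    """Apply a 3x3 perspective (homography) matrix to a 2D point.

    The point (x, y) is lifted to (x, y, 1), multiplied by h, and
    projected back by dividing by the resulting w coordinate.
    """
    xt = h[0][0] * x + h[0][1] * y + h[0][2]
    yt = h[1][0] * x + h[1][1] * y + h[1][2]
    w  = h[2][0] * x + h[2][1] * y + h[2][2]
    return (xt / w, yt / w)

# A nonzero bottom row shrinks content as x grows, as if the right
# edge of the selection recedes toward a vanishing point.
h = [[1,    0, 0],
     [0,    1, 0],
     [0.01, 0, 1]]
print(apply_homography(h, 0, 100))    # (0.0, 100.0) -- left edge unchanged
print(apply_homography(h, 100, 100))  # (50.0, 50.0) -- right edge pulled in
```

When the bottom row is (0, 0, 1), the same formula degenerates to the ordinary skew/stretch transforms Krita has long had.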

The "cage," "warp," and "liquify" transforms are a bit more interesting. The cage tool lets the user draw an arbitrarily shaped selection with straight line segments, then grab any number of vertices and move or rotate the drawing using the mouse. GIMP has a similar tool; the approach is most useful because the selection cage that the user draws serves to constrain where and how the distortions affect the image. A simple rectangle has only four corners to manipulate, so the content near those corners invariably gets more distorted than whatever is in the center.

The warp tool is similar, except that the user chooses control points inside the selection, rather than using the vertices on the outside. Liquify is a third variation: it lets the user perform warp-like transformations anywhere in an image, without having to make a selection first.

The warp and liquify transformations are particularly sensitive to rendering problems; they distort the original image, so pixelated or blocky artifacts can be even more distracting if the transformation happens to enlarge the affected part of the image. Another feature funded by the Kickstarter campaign was improving the quality of the transforms: the image data is now sampled at 4x resolution and there is a new antialiasing algorithm.
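The idea behind supersampling can be shown with a minimal sketch: compute the transform at a higher resolution, then box-filter blocks of samples down to the target size, so hard jagged edges become intermediate values. This is a generic 2x illustration, not Krita's actual 4x pipeline or its new antialiasing algorithm.

```python
def downsample_2x(pixels):
    """Box-filter a 2N x 2N grid of grayscale values down to N x N.

    Each output pixel is the average of a 2x2 block of input
    samples -- the simplest form of supersampled antialiasing.
    """
    n = len(pixels) // 2
    return [[(pixels[2*r][2*c] + pixels[2*r][2*c + 1] +
              pixels[2*r + 1][2*c] + pixels[2*r + 1][2*c + 1]) / 4.0
             for c in range(n)]
            for r in range(n)]

# A hard diagonal edge (0 = black, 255 = white) at high resolution:
hi_res = [[255, 255, 255, 255],
          [  0, 255, 255, 255],
          [  0,   0, 255, 255],
          [  0,   0,   0, 255]]
# The averaged result contains in-between gray values along the edge.
print(downsample_2x(hi_res))
```

Sampling at 4x instead of 2x simply averages 4x4 blocks, trading more computation for smoother edges.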

Layering and masking

Several changes have been made to how the user works with layers and masks in Krita. For example, the user can now automatically create a mask for a layer, generated directly from the layer's alpha channel. In other words, rather than trying to select all of the non-transparent pixels in a layer to mask them off, the same selection can be done with one click. Better still, the mask supports alpha transparency, so pixels with an intermediate alpha value get a semi-transparent mask.

Transparency masks are useful for protecting part of a layer from getting accidentally painted over. Krita has had support for the feature for a while, but a new feature in 2.9 is that these masks can be exported and loaded into different documents. That is primarily a feature that regular Krita users will appreciate, since it saves duplication of effort.

Transparency masks are made by painting a black-and-white layer; areas painted white are opaque (masking off the underlying image), while areas painted black are transparent and let the image beneath show through. The various values of gray in between constitute semi-transparent regions.
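The black/white/gray semantics boil down to multiplying the layer's own alpha by the mask value at each pixel. Here is a minimal Python sketch of that compositing rule (the function name and 0.0-1.0 value range are ours, for illustration; Krita's internals differ):

```python
def apply_transparency_mask(layer_alpha, mask):
    """Combine a layer's alpha channel with a grayscale mask.

    Mask values run from 0.0 (painted black: fully transparent)
    to 1.0 (painted white: fully opaque).  The effective alpha is
    the product, so gray mask values yield semi-transparency.
    """
    return [a * m for a, m in zip(layer_alpha, mask)]

alpha = [1.0, 1.0, 0.5, 0.0]   # the layer pixels' own opacity
mask  = [1.0, 0.0, 1.0, 1.0]   # white, black, white, white
print(apply_transparency_mask(alpha, mask))   # [1.0, 0.0, 0.5, 0.0]
```

The one-click alpha-derived mask described above amounts to initializing `mask` with a copy of `alpha` itself, which is why semi-transparent pixels come out with a semi-transparent mask.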

Finally, the transformation tools mentioned above can be applied to masks and layers, rather than operating directly on selected pixels. In essence, this means that the transformations can be saved as non-destructive operations that can be switched on and off by showing and hiding the mask or layer.

For anyone who is curious, the one new feature that did not land in time for the 2.9.0 release is a related idea. It is also one that will almost certainly be talked about by users who are converts to Krita from proprietary applications: support for "layer styles." A layer style is the ability to apply a "live" filter effect (blur, color adjustment, etc.) to a layer, so that the filter is applied to whatever content is visible from the layers below.

Photoshop has had a similar feature for a while and, rightly or wrongly, critics of free-software graphics applications are quite fond of pointing out specific differences like layer styles as evidence their applications are better. Whatever one thinks about the wisdom of implementing features first seen in proprietary applications, they are often the features that are requested the most.

Editing tools and workflow

Support for working with vector shapes has improved noticeably in the new release. Vector objects can now be rescaled to any size, regardless of the resolution of the canvas in the document. Previously, such scaling was constrained by the resolution of the document, which undermined a lot of the advantages of drawing with vectors to begin with.

Another new feature is support for gradients that follow the shape of an object. Previously, Krita had the usual palette of standard gradient types (linear, radial, etc.). The new gradient type allows for an even-looking gradient that fits into any polygon, selection, or even inside of text.

Krita 2.8 introduced initial support for the G'MIC image-filter system as a plugin. The 2.9 release finally supports the full G'MIC filter set. The user interface is the same as that of the G'MIC plugin for GIMP, which will certainly please those who switch back and forth between the two applications.

Another interesting change in Krita 2.9 allows the user to have the same file open in more than one window or tab. That means it is possible to see a zoomed-out perspective on the entire document while also working on a specific detail. In fact, the ability to have several images visible simultaneously is a new feature in its own right. The user can have, say, a reference image open in one part of the Krita window while working on another image elsewhere. In previous releases, the user could only switch back and forth between image tabs; this is more convenient.

As is usually the case with Krita, there are far more small changes and updates than there is space to discuss in any detail. There are new color selectors for previously unsupported color models, there is a high-dynamic range painting mode, and there is support for saving and sharing collections of resources (like brushes and patterns) in external files.

Up next, the project plans to initiate another crowdfunding campaign to implement another batch of features—presumably including several from the 2014 fundraiser that have not been addressed yet. But the details of that next development round have yet to be announced; for now there is much for artists to explore in the 2.9 release without also worrying about what might be added further down the line.
