On December 12, Stanford Law School's Center for Internet and Society (CIS) launched a project aimed at standardizing trademark usage guidelines for open-source and free-culture projects. Called CollabMark, the project has made its first public release: a boilerplate trademark-usage policy intended for other projects to adopt as their official guidelines. Just as importantly, CollabMark is soliciting feedback and public discussion on the wording of the boilerplate policy. This formal examination of trademarks is relatively new to the open-source community—although the underlying concepts are not, as they echo the many previous debates about licensing, copyright assignment, and other mechanisms that govern participation in an open project.

The Stanford CIS itself (not to be confused with Harvard's similarly named Berkman Center for Internet & Society) is a program office that serves both to educate law school students about technology policy and to produce (hopefully) useful legal resources for the public. It was founded in 2000 by Lawrence Lessig, and is perhaps best known for its Fair Use Project, which provides legal representation for content creators facing copyright-related lawsuits.

CollabMark is spearheaded by Yana Welinder and Stephen LaPorte, both of whom are employees at the Wikimedia Foundation. In October 2014, the two co-authored a paper on the usage (or lack thereof) of trademarks in collaborative projects—a category that encompasses not just open-source software projects, but "open culture" projects (e.g., Wikipedia) and other, more general collaborative communities (e.g., Freecycle groups) as well.

They concluded that collaborative projects are often faced with a dilemma: the project needs to protect its logos and word marks (such as its name) against misappropriation, but the formal protections offered by trademark law seem to be a mismatch with the "information wants to be free" mindset. Unless a project is large enough to have a formal nonprofit organization register and defend its trademarks, the project tends to take an ad-hoc approach to setting (and enforcing) usage guidelines that results in confusion and, occasionally, public crises.

In 2013, Wikipedia was debating how to allow its community to use the Wikipedia logo while still retaining some level of trademark protection. The concern, like that of many collaborative community projects, was that without adequate protection, competitors could brand their own products with Wikipedia's mark, which could either confuse the public or (in the worst-case scenario) prompt a court to determine that the Wikipedia marks had become "genericized" and were therefore unprotectable. Welinder and LaPorte proposed that Wikipedia address the problem by making the Wikipedia logo a "collective membership mark"—a relatively uncommon trademark designation that functions a bit like a certification mark: it indicates membership in the controlling organization, and allows the organization to define its own standards for who is considered a member in good standing and where the mark can be used.

Ultimately, Wikipedia rejected the collective membership mark suggestion as a bad fit for the project's needs, and Wikipedia's lawyers turned their attention to writing a usable trademark-usage policy instead. They avoided legalese where possible and included a "user-friendly summary" in addition to the formal language. In April, Welinder and Luis Villa spoke about this process at the Linux Foundation Collaboration Summit, during the legal panel session that we covered.

The CMP in detail

The newly launched CollabMark site builds on this Wikipedia experience. Its most significant content is the Collaborative Mark Policy (CMP), which is an adaptation of the Wikipedia trademark policy. It uses placeholder text for the names of the organization and individuals, descriptions of the project and the marks in question, and other specifics that a project would likely wish to tailor to itself (such as a mission statement).

The text of the CMP is hosted at GitHub, and the public is invited to make comments and suggestions as issues or pull requests.

The CMP consists of seven sections. Section 1 is definitions: what "marks" are covered by the policy (e.g., logos, mascots, and names), what organization or individual holds the trademarks, and which people and groups are regarded as part of the project's community. Section 2 sets out how the marks can be used: the correct spelling and capitalization of words, when a derivative of the logo can be created, and how to phrase attributions to the project (that is, "foo is a trademark of the bar project and is used with the permission of bar").

Section 3 establishes when the marks can be used without asking for permission. This includes using the marks in conjunction with community projects, at community events, for outreach and recruiting efforts, and in general, "fair-use" conversation. It also includes a subsection (§3.5) that explicitly defines and approves nominative usage, which means referring to the project in news reports, blog posts, and other non-advertising content. Finally, it establishes that the marks can be used to create items for personal use: making your own t-shirts and birthday cakes with the project logo on them is allowed as long as they are not for sale.

Section 4 defines the circumstances in which usage of the marks is allowed if the user requests and receives the project's permission, and how the user should request that permission. The circumstances include branding community meet-ups (like hackathons), registering project-related domain names, using the project's marks at a general-purpose conference or event, using the marks in publications that do not fall under the nominative usage clause (namely print, television, movies, and "online productions"), and making commercial merchandise. The community meet-up option requires only that the user send an email to the project to announce the meet-up; all other uses in section 4 require contacting the project with the details of the requested usage.

Section 5 defines uses that are prohibited outright. The categories include creating misleading mirror sites or mimicking the project's web sites, linking from one of the marks to an unrelated site, and giving the public the impression that a personal project is sponsored or endorsed by the community.

Section 6 explains how to report suspected trademark abuses to the project via email, and gives the project the right to revoke a trademark license if "we determine that a trademark use is inconsistent with our mission or could harm community members, our project, or the Trademark steward." Section 7 explains that the trademark policy can be revised, where such revisions will be announced, and notes that if a translation of the policy causes any inconsistencies, it is the original version of the policy that takes precedence.

Moving forward

So far, there have not been any issues or pull requests posted to the CMP on GitHub. It has, after all, only been a matter of days, and it is likely that the broader open-source community will require some time to digest the content of the CMP and react to it. Such requests for changes are quite possible; the CMP is not a pick-and-choose trademark-policy generator—it sets out quite a few specific conditions for trademark usage of word marks and logos that may not be aligned with the expectations of every project.

For example, §4.1.1 is the subsection that deals with hackathons. The CMP requires hackathon organizers to notify the project of their activities, even though the CMP defines those events as "common community uses." It is not entirely clear, though, why such hackathons do not fall under the same rule as the "community-focused events" in §3.2, which do not require sending any notification to the project.

Perhaps a simple wording change would eliminate that potential source of confusion, though. A more important example might be when a project wants to take a substantially different approach to something like creating derivatives of the project logo. §2.1.3 specifies that "remixes" of the project logo are only allowed within the project, while "the logos should not be modified without separate permission from the Trademark steward" outside the project.

That is certainly a clear policy, but reasonable people in another project might define a different set of allowable modifications to the project logo. There may also be important usage scenarios not addressed in the original CMP text. It originated at Wikipedia, which is a collaborative content project: collaborative development projects may wish to consider different ways that a trademarked logo or project name could be used. Similarly, some of the wording might not fit every project; the retention of Wikipedia-specific terms like "editors" could be considered strange, as could the inclusion of "fair use" (which is usually associated only with copyright discussions).

On the whole, though, it is certainly good to invite discussion—and to do so with a concrete trademark policy to consider. The issue of how the open-source community works with trademark law is not new; Karen Sandler from the Software Freedom Conservancy has spoken about it regularly at various community conferences (we covered her talks on the subject in 2010 and in 2012), but even large and established projects continue to encounter trademark-related legal conflicts. GNOME recently dealt with an unintentional trademark violation by Groupon, and in 2011 Bitcoin had a run-in with an outside party that knowingly attempted to trademark "Bitcoin".

It is possible that the CMP will evolve into a widely used policy that, by virtue of being common, is also well-understood. The Creative Commons licenses, for example, are disseminated broadly enough that they can be referred to by name (e.g., "CC-BY"), much as the GPL and BSD copyright licenses are, without having to pause and explain their distinctions. But not every attempt to publish a standard license that is useful to the whole community catches on; the Harmony Project contributor agreements have not seen much adoption by outside projects, and there is no real equivalent boilerplate license for patents, which tend to be addressed only as a subsection of software licenses.

But educating projects and communities is a wise first step. In addition to the CMP, the CollabMark site hosts several other resources, such as a "Protecting your mark" FAQ with pointers to more information. Determining the trademark usage guidelines that fit a project and its community is not a trivial task—much like choosing a software license, it is a decision that requires careful consideration. But considering carefully starts with having good information to discuss, and on that front, CollabMark is a welcome effort indeed.


One size does not fit all when it comes to presentation software. Although LibreOffice Impress is probably the most well-known free-software presentation application, there are a number of alternatives, each offering its own distinct experience for building a presentation—and which may be a better fit for the talk or speaker at hand. One of those alternatives, Sozi, recently underwent the transition from an Inkscape extension to a stand-alone application. Previews for the upcoming Sozi 14 release have now been made available for download, as has an online demo.

A different kind of presentation

Sozi takes an entirely different approach to presentation content than the familiar one used by Impress, PowerPoint, and similar slide-deck–style applications. In previous releases, a Sozi presentation was an SVG document: each element in the document had an id attribute, and "playing" the presentation was a matter of Sozi re-centering the display from one element in the document to the next. In other words, the entire presentation is a single (perhaps extremely large) image, and each successive frame merely pans or zooms over to focus on one small piece of the whole.
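The core mechanism can be sketched in a few lines of JavaScript; the data structures and function names below are illustrative, not Sozi's actual code. Each frame is just a rectangle in the SVG's coordinate space, and advancing the presentation means retargeting the viewBox onto the next rectangle:

```javascript
// Illustrative sketch: each "frame" is a rectangle in the big
// SVG's coordinate space; advancing the presentation retargets
// the viewBox (pan/zoom) onto the next rectangle.
const frames = [
  { id: "frame1", x: 0,   y: 0,   width: 800, height: 600 },
  { id: "frame2", x: 900, y: 400, width: 400, height: 300 },
];

// The SVG viewBox attribute value that focuses on one frame.
function viewBoxFor(f) {
  return [f.x, f.y, f.width, f.height].join(" ");
}

// Interpolating between two frames produces the animated
// pan/zoom transition the audience sees.
function tween(a, b, t) {
  const lerp = (p, q) => p + (q - p) * t;
  return { x: lerp(a.x, b.x), y: lerp(a.y, b.y),
           width: lerp(a.width, b.width), height: lerp(a.height, b.height) };
}

console.log(viewBoxFor(frames[0]));                        // "0 0 800 600"
console.log(viewBoxFor(tween(frames[0], frames[1], 0.5))); // "450 200 600 450"
```

A real player would set such a viewBox on the root svg element on every animation tick, which is essentially how panning and zooming over one large image produces the slide transitions.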

For the presenter, the process of presenting is the same—left-click or hit the appropriate arrow key to advance—but the audience gets a different experience: zooming out, rotating, then zooming back in to focus on the next slide; it is a bit kaleidoscopic. The concept is not for everyone, to be sure, but it is a unique take on how to arrange a presentation, and in the right hands can be quite fun for the audience as well.

The Sozi 13.11 release from late 2013, though, was the last version to implement this approach as an Inkscape extension. The newly announced Sozi 14 retains the same basic concept, but it completely separates the slide-arrangement and playback functionality from the creation of the SVG document.

In older releases, the user would build the presentation within Inkscape, then construct a series of invisible rectangles to serve as the slide elements: draw a box around the first thing you want to show and give it the id "frame1", then draw the next box and name it "frame2", etc. In the Sozi extension, you would select the frame elements in order, and the extension would generate the JavaScript required to hop between them and save it within the SVG file (e.g., binding mouse clicks and keyboard events to "forward" and "back" functions).
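That older workflow can be pictured with a tiny SVG document; the content and frame geometry below are invented for illustration:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="2000" height="1500">
  <!-- The presentation content: one large drawing. -->
  <text x="100" y="300" font-size="96">Welcome</text>
  <text x="950" y="500" font-size="32">A detail shown later</text>

  <!-- Invisible rectangles marking what each "slide" displays;
       the Sozi extension hopped between them by id. -->
  <rect id="frame1" x="0"   y="0"   width="800" height="600" fill="none"/>
  <rect id="frame2" x="900" y="400" width="400" height="300" fill="none"/>
</svg>
```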

But generation of that JavaScript really had nothing to do with the design and construction of the SVG; the steps were connected only because they both took place in Inkscape. In fact, the user could create the entire SVG in some other SVG editor and only open it in Inkscape to perform the Sozi linking.

Sozi 14 takes that exact approach. It has completely decoupled itself from Inkscape, and now serves solely as a standalone slide-markup tool. It saves its presentations in HTML format—with both the SVG file data and necessary JavaScript incorporated into the file in-line. Any browser that understands SVG and JavaScript should be able to open the file, and there are no external files or folders to worry about misplacing.

In the announcement, lead developer Guillaume Savaton notes that by retooling Sozi as a standalone application, he made the user interface cleaner and could reuse code from the presentation player in the presentation editor—thus providing a more accurate preview experience. It is also far easier to package the standalone tool for Windows and Mac systems. The new builds come in two forms: a desktop client (based on node-webkit) and a hosted web application, currently running as a demo at the Sozi site.

Presenting Sozi 14

With Sozi 14, the user creates an SVG document in some other application (and, naturally, Inkscape is a prime choice for such an editor), then opens it in Sozi. From that point on, developing a presentation involves only zooming in and re-centering Sozi's viewport onto the desired portions of the SVG and clicking the add-a-frame button (marked with the + sign).

The process is reminiscent of editing a video timeline, albeit in much simpler form. The Sozi window contains the viewport displaying the SVG document in the top-left portion of the window. To the right is the Frame pane, which lists the attributes of the current frame. Sozi automatically creates a generic id attribute for each frame à la frame5458, but the user can change the id to something more memorable. Each frame can also be assigned a separate timeout value, after which the presentation will automatically advance to the next frame, and the speed of the transition between frames can be adjusted.

In a strip along the bottom of the window is the timeline, which shows the full sequence of frames in the presentation. Clicking on a frame's tab jumps to that frame. Perhaps the only tricky aspect to working with Sozi 14 is the fact that it might not be immediately obvious how to compose a frame, since there is not much on the screen that resembles a tool.

In practice, though, what one does is click on a frame (or on the add-a-frame button), then use the mouse in the viewport, panning around and zooming in or out to frame the contents as desired. At the top of the timeline there is an easy-to-overlook radio button that switches the mouse mode between zoom, pan, and rotate functions.

There are a few other tools that provide assistance. For example, the Frame pane has a field labeled "Reference element Id," which shows the id attribute of the largest SVG element currently visible in the viewport. Assuming the "reference element" is the one that you care about, clicking the "Fit to element" button will automatically zoom and rotate to focus on just that element. It helps to choose meaningful element ids; Inkscape, like Sozi, automatically generates generic ids that are easy to forget.

Another feature that might surprise some new users is that there is an "Aspect ratio" field in the editor, but no mention of screen resolution. This is because Sozi uses SVG, which renders sharply at whatever resolution the display uses. That is an advantage in its own right, but Sozi takes the display independence even further: because Sozi frames are (in essence) targets in the document, any aspect ratio can be supported. If the user sets the aspect-ratio setting to a different value, Sozi simply shows a viewport at the requested dimensions; no editing of the file is required. Of course, an SVG file can also embed raster graphics or audio and video files (assuming the SVG editor used supports that feature); Sozi will show this content, but without the resolution independence of vector graphics.

Present and future

On the whole, Sozi 14 is remarkably simple to use. The real work, naturally, is in creating the content that goes into the SVG file to begin with. But after that, marking up the document as a presentation with Sozi borders on trivial. The application automatically saves changes as you work. Unlike the last release of Sozi (the output of which might confuse some browsers without support for JavaScript inside SVG), Sozi 14's HTML output can be opened and played in almost any modern browser.

That said, there are still aspects of presentation-building in Sozi that require adjustment on the user's part. For example, it is quite easy to accidentally hit the scroll wheel of the mouse and resize the image in whatever frame happens to be selected at the moment. Since Sozi automatically saves changes—and, at least for now, has no undo/redo—such a slip means inadvertently messing up the frame. The "Fit to element" button also rotates the canvas to match the orientation of the "reference element," which might not always be the expected behavior. It might have been more helpful to have separate "zoom to fit element" and "rotate to element" options.

Finally, the preview builds demonstrate support for multiple layers, but at present it is not clear how layer functionality is intended to work. Sozi detects layers in the SVG file and creates a separate timeline "track" for each one, but there does not seem to be much per-layer functionality. Each layer's visibility can be toggled on or off independently, but only for the presentation as a whole and not (for example) as an action to show or hide a layer for any particular frame.

Some of the kinks in the Sozi 14 preview will, no doubt, get ironed out before the final release (which Savaton hopes to make before the end of the year). Other quirks will probably seem less quirky once the documentation catches up to the software itself. In all likelihood, though, some users will not find Sozi to be their cup of tea at all—presentation styles are as individual as presenters, and what format seems the most natural often relates directly to the subject matter at hand.

But hopefully the Inkscape-free reincarnation of Sozi will entice speakers to take a fresh look even if they stayed away from past releases. No Inkscape experience is required in Sozi 14; an SVG document can be produced (or converted) through other means. But the advantages of Sozi remain: presentations can be viewed in almost any browser, file sizes are small (and compress well), and the output is resolution-independent SVG.


Good test automation is a blessing that saves developers from repetitive tasks, reduces bugs introduced by human error, and decreases testing costs in the long term. The Linux Test Project (LTP) is an established effort that aims to bring test automation to Linux kernel development.

In this article, I will briefly introduce LTP along with its history and structure. A second article will introduce the test library API. The motivation for writing them is to help kernel developers with the unpopular and sometimes neglected task of software testing. Increasing test coverage improves the development process, reducing the development effort and making software updates more predictable. This keeps developers happy by making more time available for the development of new interesting technologies and features.

A bit of history and the current state

LTP was started in 2000 as a joint open-source project by IBM, SGI, and OSDL and was later joined by other interested parties. In 2001 it contained about 100 simple system call tests and a few test suites collected from other sources. As of today, it's maintained by SUSE, Red Hat, Fujitsu, and Oracle and gets contributions from a number of other companies and hobbyists.

The goal of the project has always been "to validate the reliability, robustness, and stability of Linux". As that motto suggests, LTP focuses on functional, regression, and stress testing for the Linux kernel and related features. Neither running benchmarks nor analyzing benchmark results is supported, and there is no plan to add that support to LTP. Readers interested in benchmarks are advised to look into MMTests, developed by Mel Gorman.

A big problem for LTP is that the project's goal is a bit too broad, which manifests itself in two ways. The first is that LTP is relatively large (roughly 4,000 C source files and around 500 shell scripts). Because of the size of the project, the content has historically varied in quality and quantity, and developers have complained about the unreliability of some of the tests. In recent years, significant effort has been put into cleaning up that heritage, which dates back to the days of the Unix wars: IBM and SGI ported some of the code that became LTP from their commercial Unixes and released it under the GPL. Developers who tried LTP in the past and were unhappy with the experience are strongly encouraged to download a recent version and reevaluate.

The second problem is completeness. LTP covers a fair number of system calls, ioctls, and sysfs and procfs interfaces but, given that the only documentation for some kernel interfaces is pieces of source code scattered around various subsystems, even estimating the coverage is a difficult task. Unfortunately, even the documentation we do have is sometimes incomplete, misleading, or wrong.

To give at least some impression of the coverage, which is quite possibly misleading, we can look at the overall number of test cases. The latest stable tarball, released in August, contains 1047 system call test cases, 1605 POSIX conformance tests in a well-maintained fork of the Open Posix Test Suite, a realtime test suite, various I/O stress tests (roughly 400), and network-related test cases, along with nearly a hundred test cases covering control groups (cgroups), various cgroup controllers, and namespaces.

LTP design goals

LTP is designed to be dead simple; the primary design goals are:

Each test is an executable.

Each test is as self-contained as possible.

Each test covers a well-defined assertion or a small group of similar assertions.

Each test runs automatically. (There is no need for manual setup or input during the test run.)

Overall test status is passed as an exit value.

Additional information is printed to stdout.

Global parameters are passed via environment variables.

From the technical standpoint the languages of choice are C and portable shell. LTP adopted the Linux kernel coding style and the development process centers around patch review on the mailing list.

Getting and installing LTP

All released tarballs are stored on SourceForge. These are time-based releases, made four times per year. Before the release of a tarball, the main repository is frozen for anything other than fixes for a week or two while the latest code is tested on several distributions.

Then there is a Git repository on GitHub that is updated nearly daily and, depending on how far it is from the previous release, it contains a few tens of new test cases and hundreds of fixes. Therefore, the latest Git is more suitable for testing upstream kernels than is the released tarball that may be a few months old. The Git code may be broken sometimes though, especially on older, but still maintained, distributions when the kernel is missing some of the functionality that the newly introduced test cases are testing.

The installation process is pretty straightforward. The build configuration is done with an autotools configure script and the build is managed with make. LTP, by default, installs its files into /opt/ltp/, where you will also find scripts to run the tests. To compile and install LTP from a Git snapshot, you should do:

$ git clone https://github.com/linux-test-project/ltp.git
$ cd ltp
$ make autotools
$ ./configure
$ make -j$(getconf _NPROCESSORS_ONLN)
$ sudo make install

How to run LTP

Single test cases can be executed directly just by executing the binaries. A few of them will need $CWD in $PATH or additional parameters. If you are looking for documentation on a particular test case, the best place to look is the comment at the start of the test case's source code.

To run a set of test cases and to get a log file of the output you will need to use a test driver. By default, the test driver runs the default test scenario, which is a set of runtest files to execute. All runtest files that are part of the default scenario should contain reasonably stable tests. You will likely want to run only a particular subset of the runtest file(s) depending on the focus of the testing.
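For reference, a runtest file is just a plain-text list, one test case per line, consisting of a tag followed by the command (and any arguments) to run; the entries below are illustrative:

```
abort01    abort01
waitpid01  waitpid01
madvise02  madvise02
```

The test driver executes each command in turn and records the result under the corresponding tag.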

Although the test driver works well, there is still room for improvement; one example would be integrating the Open POSIX Test Suite, which, at the moment, can only be executed separately.

The main run script is installed by default at /opt/ltp/runltp. This script is a wrapper around the ltp-pan test driver that runs test cases according to the runtest files. The runltp script has many optional parameters; those used frequently include -f filename, which selects a single runtest file; -s regexp, which runs only the test cases whose names match the regular expression; -d /tmpdir, which selects a temporary directory for the test cases; and -g filename.html, which causes runltp to produce HTML output in the named file.

The script is expected to be executed as root, and execution times range from minutes to hours, depending on the set of tests to be executed and the speed of the system under test. After the test run, the results can be found under the result directory; there is also a list of failed test cases under the output directory.

Historically, LTP contained test cases that were expected to fail. This is no longer true, with the exception of three pthread_rwlock Open POSIX test cases. If any other LTP test case fails, it is either a bug in the system or a bug in the test; in either case, it needs to be reported and fixed.

Who uses LTP

Here at SUSE, we mostly use the latest stable tarball as part of the enterprise kernel validation for releases, as well as for maintenance updates. Most of the time, that testing finds subtle changes in the interface between the kernel and user space, which end up either as test-case fixes or as legitimate kernel bugs. In my experience, kernel bugs are less common, although, as more and more test cases are fixed, the percentage of legitimate bugs grows.

To find out about the rest of the Linux world, I started a survey on the LTP mailing list, which was quite successful—see for yourself the responses I got. Several respondents indicated that LTP was used as part of automated kernel testing, especially on non-x86 architectures. Others use subsets of the tests as a kind of smoke test, typically limited to a short run duration (a few hours at most). In addition, LTP has been mentioned as one of the tools used when Linux was ported to the K1 architecture.

Conclusion

Another use of LTP worth mentioning is its recent integration with the LKP+ project (also known as the 0-day kernel testing infrastructure). That testing framework can catch bugs, and determine which kernel commits are responsible, even before the commits reach a kernel release. Beyond that, a number of upstream kernel commits in 2014 mentioned LTP in their commit logs.

Although it wasn't easy, LTP has come a long way to get to where it is today and, as you can see, it has already been a useful tool for testing. Hopefully this article explained where we were and where we are now; that should get you started on running the tests. The next article will introduce the test library and will help with writing test cases.
