I spent a lot of time in 2003 and 2004 thinking about object orientation. Specifically, I wanted to find a better solution to duck typing, fragile base classes, deep inheritance hierarchies, cross-cutting concerns, and code reuse.

A talk by Dr. Andrew Black led me to a paper which described actual results of what I'd been designing in my head. After some discussion with the Perl 6 design team, we developed the concept of Perl 6 Roles.

In those days there was no Moose. I and several other people wrote proof-of-concept modules to demonstrate how to use roles in Perl 5. Now Moose is the state of the art in object systems, and it's definitely the way to use objects in modern Perl.

... with one exception.

Never Wrong By Design

My colleague and co-author Curtis "Ovid" Poe has written about his concerns over the "silent" overriding of role methods by declared class methods. (If you haven't already read my explanation of roles, you're lost. You just won't know it until the next paragraph.)

Ovid's code has several roles, which each provide several named methods. This is rather the point of roles: they're named collections of useful behavior and attributes. They don't stand on their own. You build up classes from collections of roles. The only relationship between roles is when you apply a role to a class (or another role, but if you haven't used roles before, ignore this parenthetical note).

Ovid declared a class and applied several roles to that class. That's normal. Then he declared several methods within that class. That's also normal. Note that, at this point, the available methods of the class come from multiple places. Some of them come from the class declaration. Others come from the applied roles. If the class inherits from other classes, other methods may come from its parent classes.

This is all normal. This is all by design. The Perl 6 Roles and Parametric Types specification says:

A class's explicit method definition hides any role definition of the same name. A role method in turn hides any methods inherited from other classes.

The method resolution is:

1. Methods defined explicitly in the class.
2. Methods defined in roles composed into the class.
3. Methods defined in superclasses.
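A minimal plain-Perl sketch of that precedence, without Moose (all package and method names here are invented for illustration), "composes" a role by copying its subs into the class only when the class lacks its own definition:

```perl
use strict;
use warnings;

package Greetable;           # a "role": a named bag of behavior
sub greet { 'Hello from the role' }
sub name  { 'role default' }

package Base;                # a superclass
sub new      { bless {}, shift }
sub greet    { 'Hello from the superclass' }
sub farewell { 'Goodbye from the superclass' }

package Worker;
our @ISA = ('Base');
sub name { 'Worker' }        # the class's own method

# naive role application: copy each role method into the class
# unless the class already defines that method itself
for my $method (qw( greet name )) {
    no strict 'refs';
    *{"Worker::$method"} = \&{"Greetable::$method"}
        unless defined &{"Worker::$method"};
}

package main;
my $w = Worker->new;
print $w->name,     "\n";   # Worker                      (class hides role)
print $w->greet,    "\n";   # Hello from the role         (role hides superclass)
print $w->farewell, "\n";   # Goodbye from the superclass (plain inheritance)
```

Real role composition does much more (conflict detection, required methods), but the precedence order is exactly this.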

Role composition is somewhat special, at least when compared to inheritance. Because composition is declarative, the compiler can resolve dispatch to methods at compilation time. That is, if you have two roles which declare methods of the same name, the compiler will produce an error message: you must resolve the conflict explicitly, by writing your own redispatch mechanism, excluding one variant method, or providing your own implementation of the method.
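In Moose terms (assuming Moose is installed; the role and class names are invented), two roles with a same-named method refuse to compose until the class resolves the conflict, here by providing its own implementation:

```perl
package Walks;
use Moose::Role;
sub move { 'walking' }

package Swims;
use Moose::Role;
sub move { 'swimming' }

package Duck;
use Moose;

# Without a local move(), this composition dies: Moose turns the
# conflict into a required method the class must satisfy.
with 'Walks', 'Swims';

# Providing the method locally resolves the conflict explicitly.
sub move { 'waddling' }

package main;
print Duck->new->move, "\n";   # waddling
```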

That last possibility gives away the punchline, if you read it closely.

Ovid's code composes several roles into a class which declares its own methods. At some point, someone added or renamed a method in a role, or added or renamed a method in the class, such that a class method silently took priority over the role method he expected.

At that point, he argued (successfully, it seems) that Moose should give a warning when role application behaves as specified in Perl 6.

When Warnings Attack

I don't want to argue the utility of the design, and I'm not going to discuss why adding a warning is wrong (it is, but I won't argue it directly).

It's more important to discuss good and bad examples of warnings and error messages.

Consider the (sadly optional) error checking when perl says "You used strict, but you never declared this variable!" This is useful because it tells you something about your scoping, hints that you may have made a typo in a name, and tells you where the problem was.

Consider a hypothetical warning that you've used a declared variable only once; perhaps it's a parameter to a function which you increment, or decrement, or read -- but in all of its scope, you access it only once. Is this warning useful? Perhaps it indicates vestigial code that you can safely delete. Perhaps not, though: what if someone has passed a tied or overloaded object which does something useful when you read, increment, or decrement it? That's likely bad design, with all of its implications for weird action at a distance, but the compiler can't know that statically. Is the potential value of warning about vestigial code which a decent compiler can optimize away worth the potential cost of warning in the wrong circumstances?

Consider the "use of uninitialized value" warning, enhanced in Perl 5.10 with the name of the variable containing an uninitialized value. The warning suddenly became an order of magnitude more useful, as now you can check your assumptions without performing lengthy debugging sessions to determine which variable combined with which set of input produces the unexpected condition.

Of course, I know of several codebases which disable this warning. Sometimes explicitly initializing several strings to empty (not undef, but '') is too troublesome -- Scalar::Util's reftype() function is one of my least favorite offenders here. Then again, at a previous job where I worked with Ovid, I spent several days of refactoring time removing all potential uninitialized value warnings from a piece of code so that we could see the actual problems in our code.
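reftype() is an offender because it returns undef for anything that isn't a reference; a common workaround (Scalar::Util is a core module; the variable names here are invented) defaults the result with the defined-or operator from Perl 5.10:

```perl
use strict;
use warnings;
use Scalar::Util 'reftype';

my $thing = 'just a plain string';

# reftype() returns undef for non-references, so comparing its result
# directly would trigger "use of uninitialized value" under warnings
my $type = reftype($thing) // '';

print "not a hash reference\n" unless $type eq 'HASH';
```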

Is that warning valuable? Sometimes.

Consider the warning against using symbolic references in Perl 5. It's optional, but unless you're Damian Conway, Abigail, or Tom Christiansen, you'll hear a lot of complaints if you don't use the strict pragma. For the most part, this is good advice -- it's all too easy to write code that misbehaves in difficult-to-debug ways without a little help from the compiler, especially when you're a novice programmer.

However, Perl 5's metaprogramming support usually requires symbolic references (but thank goodness for Moose, which alleviates this). In other words, the compiler can determine whether you intended the behavior only from the presence or absence of a declaration such as use strict; or no strict 'refs';.
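A typical metaprogramming example -- generating accessors by name (the Point package and its fields are invented) -- assigns through a symbolic reference, and so disables only the refs stricture, only in the smallest enclosing block:

```perl
package Point;
use strict;
use warnings;

sub new { bless { x => 0, y => 0 }, shift }

# generate an accessor for each field; the glob assignment uses a
# symbolic reference, so disable only 'refs', only within this block
for my $field (qw( x y )) {
    no strict 'refs';
    *{"Point::$field"} = sub {
        my $self = shift;
        $self->{$field} = shift if @_;
        return $self->{$field};
    };
}

package main;
my $p = Point->new;
$p->x(42);
print $p->x, "\n";   # 42
```

Moose's attribute declarations generate the equivalent code for you, which is why it alleviates the need to write this by hand.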

The question is whether the warning is useful enough for novices, and the warned-against behavior rare enough even for experts, that the small cost of disabling the warning lexically justifies it. Besides that, it's not enabled by default. The feature -- symbolic references -- works as designed and specified, but in general it's useful only to people who know how to disable the warning.

(One flaw in my argument here is that symbolic references are also available to everyone not yet clueful enough to ask perl to detect potentially dangerous behavior -- but turn the idea around and ask why, in 2009, this should still be.)

Role On New Warnings

When analyzing the benefits and drawbacks of new warnings, I believe it's important to evaluate at least three criteria implied earlier:

How valuable and accurate is the warning?

How often is the warned-against behavior actually dangerous?

How onerous is working around the warning?

In the first case, perl cannot determine the source of a collision between a method declared in a role and a method declared in a class. Is it a typo? A renaming gone bad? A false cognate? A true cognate? Deliberate behavior?

One proposal is not even to try, as if "Hey, something went wrong!" were a useful warning message. If that were the case, adding the name of the variable containing an undefined value to the "Use of uninitialized value" warning would have been of low utility.

In the second case, perl can only guess whether a locally declared method exists for disambiguating colliding methods from multiple composed roles or does something else entirely. The names of roles are as meaningless to the compiler as are the names of methods. It doesn't care. The intent of code is useless to a Universal Turing Machine Simulator. It's only useful to people -- so useful that only crazy people with something to prove write directly to UTMs.

The real question is "Why is there a collision?" Did someone misread or misunderstand the documentation for a role? Did someone make a typo? Did someone upgrade part of the system? None of this context is available to the compiler. It can only guess -- and for it to guess correctly, someone needs a statistical model of the world which demonstrates that people rarely override composed methods with locally declared methods.

You must actively debug the problem to find the source of the conflict. Which methods did each role provide? Where did all of the methods come from? What changed recently? Compilers are notoriously bad at history and change management. Fortunately, there's a perfectly good field of study which deals with change management; it's called change management.

The third case is the least important. Then again, it might be the most important. How often is this a problem? Typos in variable names are often a problem, and the compiler can detect them trivially. Metaprogramming is rare enough that enabling symbolic references in a small scope is rarely onerous (though to be fair, it's all too easy to disable them in a greater scope than you anticipated, if you assign an anonymous function to a typeglob and don't re-enable strictures within the function).

The best reason against this warning is that it treats roles as mixins instead of composable types, but that's a very different discussion. A class which composes several roles to take on the types of those roles instead of to reuse code -- that is, allomorphism -- may produce many, many of these warnings. Another proposal is to allow explicit exclusion of lists of methods when composing a role, as if explicit declaration of methods in two places (first when composing the role and second when declaring the methods in the class) were an improvement.
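Moose already offers that second proposal as a composition parameter (a sketch assuming Moose is installed; role, class, and method names are invented):

```perl
package Logger;
use Moose::Role;
sub record { 'role record' }
sub debug  { 'role debug' }

package App;
use Moose;

# Exclude the role's record() explicitly at composition time ...
with 'Logger' => { -excludes => ['record'] };

# ... and declare the replacement explicitly in the class: the same
# method named in two places.
sub record { 'class record' }

package main;
print App->new->record, "\n";   # class record
print App->new->debug,  "\n";   # role debug
```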

Optional or Not

One facet of this discussion rarely seen is whether a warning should be optional or mandatory. The cost/benefit tradeoffs change dramatically when you flip this switch.

Optional typo and symbolic reference checking in Perl 5 has helped a lot of bad code spread -- the proliferation of global variables alone is a net drawback. It's also cost a lot of people a lot of debugging time. Perhaps it's helped more people write one-liners, but when was the last time you spent hours debugging a one-liner?

Compare that to optional Perl::Critic warnings. If your project decides that a method length of more than 24 lines is a code smell, you can determine that yourself. If you decide that a specified, implemented, and tested feature of Perl 5 is troublesome (tie), you can disallow it locally yourself.
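Both choices become local configuration; a sketch of a .perlcriticrc (ProhibitExcessComplexity and ProhibitTies are policies bundled with Perl::Critic; the specific limits and severities are an invented project choice):

```
# .perlcriticrc -- per-project policy, checked in alongside the code

# flag overly complicated subroutines (a stand-in for "too long")
[Subroutines::ProhibitExcessComplexity]
max_mccabe = 20

# disallow tie entirely in this codebase
[Miscellanea::ProhibitTies]
severity = 5
```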

You have the information. You know your history. You can improve the heuristic.