If you use DBIx::Class in a production setting and just happen to have a substantial test suite - this post is for you.

TL;DR: Recent development in DBIC required the introduction of a subsystem that turned out to be much more complex than initially envisioned. While it *seems* that all the kinks have been worked out, the failure modes are so un-graceful (dis-graceful?) that the latest trial is in need of extra testing before it can be deemed ready for production.

Therefore if you are in a position to validate that everything behaves as expected, without the risk of taking your production to the fjords (did I mention substantial test suites?) - please help those in less favorable situations and test the thing before it goes live.

You can install the trial by any of the following methods:

HARNESS_OPTIONS=j4 cpan R/RI/RIBASUSHI/DBIx-Class-0.082700_06.tar.gz

or

HARNESS_OPTIONS=j4 cpanm -v DBIx::Class@0.082700_06

or by grabbing the tarball and doing it old school

http://cpan.metacpan.org/authors/id/R/RI/RIBASUSHI/DBIx-Class-0.082700_06.tar.gz

While the current version is deemed safe, I am being extra cautious because of recent history. So what exactly happened (and what went wrong)?

The core of the problem started in 8d005ad9, when a bug report uncovered a massive deficiency in how we pre-process the SQLA conditions passed to, say, search(). Once the initial problem was solved, I started looking around the codebase and realized the same code (with different issues) existed in two more places. So some consolidation took place, and while the codepath *is* complex and gnarly, it was very much worth it. The following Changelog entries all hinge on this very same refactor: 13, 30, 35, 41, 44. Plus a whole bunch of "emergent fixes" resulting from the primitives working better and the metadata walkthrough being more precise.

And for a while everything seemed fine, until we got this puppy.

This was bad. It turned out that even though I had written several dozen brand-new tests, run the suite under multiple permutations, and tested all my dependents - the simplest of failure modes were still missed. The result was a subtle change of conditions (sometimes an OR would become an AND and vice versa). So yes - bad.

Since then the number of tests has doubled if not tripled, and a lot of changes were made to the condition processor to account for everything I could possibly think of. The problem, of course, is that I cannot possibly think of everything.

And this is where crowdsourcing this validation task comes in. The task is simplified by the fact that the entire thing must remain transparent. That is - any detected anomaly is automatically a bug. So if you do notice something odd, all you have to do is get two sets of traces (by setting DBIC_TRACE=1=<filename>) and compare them in your favorite differ. Once an offender is detected, you will need to determine which search() argument is being mangled, and deliver it to me either publicly via RT or privately by email (if it contains sensitive stuff). Once I know about a problem, fixing it is trivial. Knowing what one doesn't know is the hard part.
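A rough sketch of that comparison workflow (the trace file paths and the prove invocation below are illustrative examples, not prescribed - use whatever runs your own suite):

```shell
# Capture one trace per DBIC version by pointing DBIC_TRACE at a file:
#   DBIC_TRACE=1=/tmp/trace.stable prove -lr t/   # under your current stable DBIC
#   DBIC_TRACE=1=/tmp/trace.trial  prove -lr t/   # again, after installing the trial
# Then diff the two traces - any difference is automatically a bug.
# Simulated below with two tiny stand-in trace files, so the diff step is visible:
printf 'SELECT me.id FROM users me WHERE a = ? OR b = ?: 1, 2\n' > /tmp/trace.stable
printf 'SELECT me.id FROM users me WHERE a = ? AND b = ?: 1, 2\n' > /tmp/trace.trial
diff -u /tmp/trace.stable /tmp/trace.trial || true   # diff exits non-zero when traces differ
```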

If no issues are found (or if there is nobody to test things), the current trial will become the next stable at or around Sept 11th. Thanks in advance for all the help!

Cheers and happy search()ing ;)

