Everybody’s been saying lots of things about the Google.be case, especially that the Belgian newspapers should have used robots.txt to tell Google what not to index. And that the fact they did not use robots.txt clearly shows that all they were interested in was getting money from Google…

Well, friends, I’m no lawyer or legal expert of any kind, but I’m French… and that lets me read and “almost” understand the terms of the ruling… I guess…

I think the ruling makes it pretty clear what the Belgian newspapers want, and I think this has been misunderstood:

The papers welcome Google to index and display their news as part of Google News! (or at least they don’t care)

The papers’ particular online business model is that news is free, but access to the archives requires payment. Example here.

Once an article falls out of the news category and into the archives category, it should not be freely accessible any more.

Google, via its world (in)famous Google Cache, often keeps the content available forever, or at least for a very long time after it has gone off the official site’s free area.

I guess that’s it: what the Belgian papers really want is a way to get their content out of Google News once it is no longer news.

Now, I’m no robots.txt or Googlebot expert either, but from what I understand there was no convenient way for the papers to tell Google that it is okay to index some content for, say, two months, but not to keep it in cache after that period.
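To illustrate (a sketch, with a hypothetical path — I’m not claiming this is what the papers should have done): the tools available were all-or-nothing. A robots.txt rule can block the archive section outright, and the per-page `noarchive` robots meta tag can stop Google from showing a cached copy at all — but neither expresses “free now, paid later”:

```
# robots.txt — blocks Googlebot from the (hypothetical) archive section entirely,
# so articles moved there would never be crawled or indexed at all
User-agent: Googlebot
Disallow: /archives/
```

The page-level alternative, `<meta name="googlebot" content="noarchive">`, disables the cached copy from day one — even while the article is still free news — which is exactly the opposite trade-off. Neither option does what the papers apparently wanted: index and cache during the free period, then expire.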

Google made some general comments on the case on their blog, but:

They are not allowed to comment specifically on the ruling, so it’s not that useful;

They failed to show up at the trial, which is quite unbelievable… but it would make it almost believable that they failed to understand the real issue being raised… :roll:

Note: again, I’m no legal expert. Just trying to make a little sense of all this noise…

Be social: digg this! ;)