Google Loses Belgian Copyright Case

Local newspapers in Belgium inexplicably don’t want to be linked by Google and are using copyright law rather than a robots.txt file to enforce their wishes.

Bloomberg (“Google Loses Copyright Appeal Over Internet Links to Belgian Newspapers”):

Google Inc. (GOOG) lost an attempt to overturn a Belgian ruling that blocked it from publishing links to local newspapers on its online news service.

The Court of Appeal in Brussels on May 5 upheld a 2007 lower court ruling that forced Google to remove links and snippets of articles from French- and German-language Belgian newspapers from Google.com and Google.be. Google, the owner of the world’s most-used search engine, faced a 25,000-euro ($36,300) daily fine for any delay in implementing the judgment.

Copiepresse, the group that filed the suit on behalf of the newspapers, said the snippets generated revenue for the search engines and that publishers should be paid for the content. The publications have a second suit pending in which they seek as much as 49.1 million euros for the period in which their content was visible on Google News.

“This case sets a precedent,” said Flip Petillion, a Brussels-based partner with Crowell & Moring LLP, who wasn’t involved with the case. “Google has every interest in taking the debate to the highest level, they have no choice” other than to appeal, he said.

Google said it remains committed to further collaborate with publishers in finding “new ways for them to make money from online news.” Google has the option to appeal the ruling to the Cour de Cassation, Belgium’s highest court.

“We believe Google News to be fully compliant with copyright law and we’ll review the decision to decide our next course of action,” Mountain View, California-based Google said in an e-mailed statement. “We believe that referencing information with short headlines and direct links to the source — as it is practiced by search engines, Google News and just about everyone on the web — is not only legal but also encourages web users to read newspapers online.”

This makes no sense on a variety of levels. First, most website owners would consider being left out of Google’s search results disastrous; indeed, a whole cottage industry has sprung up around search engine optimization, devoted to tricks for ranking as high as possible in those results. Second, as noted in the opener, it’s extraordinarily easy to prevent a site from being crawled by Google.
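
For reference, a couple of lines in a robots.txt file at the site’s root is all it takes. A minimal sketch, assuming a publisher simply wants Google’s crawlers to stay away (the domain is a placeholder and the directives are illustrative, not Copiepresse’s actual file; Googlebot-News and Googlebot are Google’s documented crawler names):

    # robots.txt served from the site root, e.g. http://www.example.be/robots.txt
    # Keep Google News' crawler out entirely:
    User-agent: Googlebot-News
    Disallow: /

    # Keep Google's main web-search crawler out as well:
    User-agent: Googlebot
    Disallow: /

robots.txt is a convention rather than a binding contract, but the major crawlers, Google’s included, generally honor it; a site that wants a harder guarantee can put its content behind a login instead.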

At a broader level, though, it’s bizarre that each country has internationally enforceable copyright laws for the Web. Surely, this is an area that begs for a universal standard?

via MediaGazer

FILED UNDER: Law and the Courts, Science & Technology
About James Joyner
James Joyner is a Professor of Security Studies. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. john personna says:

    I’d worry about an international standard just because US entertainment corps have been so good at driving restrictive international law. The public domain is a casualty.

  2. Murray says:

    What makes no sense to me is that people like this local newspaper have to go to court to get a ruling because Google doesn’t understand “no, we don’t want you to link to our site”.

  3. john personna says:

I don’t think that’s what it’s about, Murray. It’s about Belgian newspapers wanting Google to support their publication/profit model.

    The bizarre thing is that if they have a public site, not blocked by user ids or subscriber passwords, then it is a public site. We can all read it.

    Now, they say that some subset of us should not link to what we read.

That’s been tried before, but I think it’s deeply wrong. You can’t offer public access and then deny people the right to refer to that public access. That’s all a “link to our site” is.

  4. Murray says:

    @john

    “The bizarre thing is that if they have a public site, not blocked by user ids or subscriber passwords, then it is a public site. We can all read it.”

    They don’t want to keep anyone from reading their content, they want to prevent aggregators from making revenue with their content.

    The only right or wrong to consider here is Belgian legislation and the copyright holders’ rights under that legislation.

  5. PJ says:

    Local newspapers in Belgium inexplicably don’t want to be linked by Google and are using copyright law rather than a robots.txt file to enforce their wishes.

Neither Google nor any other company is obligated to respect the robots.txt file.

Google may or may not respect it now, but they may, if enough newspapers add robots.txt files, stop doing so in the future.

    If you want to make really sure that Google aren’t allowed to index or link to your news, then taking them to court is really the only way.

  6. john personna says:

    They don’t want to keep anyone from reading their content, they want to prevent aggregators from making revenue with their content.

    As I read it, Google did not aggregate content. They say above:

“We believe that referencing information with short headlines and direct links to the source — as it is practiced by search engines, Google News and just about everyone on the web — is not only legal but also encourages web users to read newspapers online.”

  7. john personna says:

    If you want to make really sure that Google aren’t allowed to index or link to your news, then taking them to court is really the only way.

    I’m concerned that this is not a Google issue. It is a public information issue.

    These publishers want “public and not public.” The web is certainly not engineered for such legal doublespeak.

    On the other hand, they may certainly password their site and make it non-public, as many newspapers do.

  8. john personna says:

(Another way to say it is that these publishers want an implied license for any reader, under which they may refuse any reader the right to link back, at their discretion.)

  9. Murray says:

    It’s not about what Google believes is legal but what IS legal.

    Besides, collecting headlines on your site without any editorial work IS aggregation.

  10. john personna says:

So what would you have, Murray?

Would you have people like me, who happily link to stories by title, always search out some sort of site/reader license on each site I visit? And only then link to, say,

    Google Loses Belgian Copyright Case

    only when I’ve been given an explicit grant of permission?

    Do you want to totally f-up the web as a public information resource?

  11. john personna says:

Again, there are clear mechanisms in place. There is robots.txt, and, as I say, more suited to these guys’ desire to have a non-public site, there are password protection mechanisms.

  12. Murray says:

    @john
“So what would you have, Murray?”

    It’s not about what I want but about what the copyright holders want and the protections they have under Belgian law. End of story.

    “Do you want to totally f-up the web as a public information resource?”
    The web is not a public information resource but a collection of mostly private ones.

  13. Google should just stop indexing Belgian newspapers until the law changes. See how they like being invisible on the web.

  14. Southern Hoosier says:

    At a broader level, though, it’s bizarre that each country has internationally enforceable copyright laws for the Web. Surely, this is an area that begs for a universal standard?

    I can see a UN commission with China, N. Korea, and Libya setting an international standard.

  15. Gustopher says:

    Perhaps the international standard could be the robots.txt file.

    I expect Google will appeal this, and I would hope that they win.

  16. john personna says:

Murray, maybe you just don’t understand the lucky break we got as the Internet came online.

    Copyright, if you claim it, says no one may copy.

    But every page fetch is a copy.

    Thus it is a logical and legal contradiction to post anything on a public web server and claim “copyright … all rights reserved.”

    If we’d been sticklers the public web would have been limited to truly public domain content.

This is also the reason Ted Nelson got caught up on Xanadu as hypertext … that system had horrific complexity to support copyright and micropayment.

Belgium can stick it. If they want copyright, take it off public access.