Enclosing the Digital Commons
By RICHARD POYNDER
There’s
growing concern that aggressive use of intellectual property is threatening the
development of the Internet. Many are drawing parallels to the “enclosures”
in pre-industrial Europe. Through this system, commonly held or unoccupied land was taken into private ownership, leading to considerable abuse. But is there really a threat? If so,
why? And what are the implications?
When SBC
Communications began sending out letters earlier this year to Web site owners
demanding royalties on its patent covering a widely used Web-navigation technique, a
wave of outrage rippled through the Internet.
“If your Web
site uses frames, and there’s a navigation frame on one side with links that
load content into the main frame, you’re violating their silly patent, and
they can come after you for licensing fees,” programmer Mike Gunderloy
thundered indignantly on the Larkware.com blog. Describing it as a junk patent,
he concluded, “This is so staggeringly stupid that it’s difficult to know
where to start.”
The SBC patent is
just the latest in a long line of controversial Internet-related patents. These
include the Amazon.com
1-Click patent, the Web-based shopping cart patent now owned by divine, the GIF
file algorithm patent held by Unisys, and, of course, the patent that British
Telecommunications (BT) infamously claimed gave it ownership of the hyperlink.
Among those
targeted by SBC was Museum Tour, a small Web retailer of educational toys. In
January, site founder Marilynne Eichinger received a letter demanding royalties
that, depending on the licensee’s annual revenues, range from $527 to $16.6
million a year.
Eichinger told The
New York Times: “The thing that’s disturbing me is that all of a sudden
so many groups seem to be going out and doing this. What’s happening? Why are
they all coming out of the woodwork?”
What’s
happening, says John Collins, a partner at Marks & Clerk, a London-based
patent and trademark legal firm, is that the traditional Web ethos of share and
share alike has come into increasing conflict with the commercial imperatives of
the offline world. “There is a tension between the view that says that the
Internet should be free, and one that holds that anyone who enhances the Web
should be able to patent their work and obtain revenue from it,” he explains.
In fact, many see
nothing remarkable in SBC’s actions. After all, as the company itself pointed
out in a statement, “Active protection of patent rights is a common practice
among patent holders worldwide.”
The Web Is Different
Others, however,
see it as a thoroughly undesirable development. They argue that if offline
concepts of intellectual property are imported into the online environment, the
benefits of the Web will be significantly curtailed.
This is a view
articulated, for instance, by Lawrence Lessig. In his book The Future of Ideas, Lessig maintains that the Internet developed so
rapidly and has proven so revolutionary precisely because it was an
“end-to-end open and neutral platform.”
By spurning
proprietary interests for the common good, Lessig argues, Web pioneers created
an unprecedented environment for experimentation and creativity. It enabled a
unique “innovation commons” based not on “strong, perfect control by
proprietary vendors, but open and free protocols, as well as open and free
software that ran on top of those protocols.” In seeking to take ownership of
this platform, he concludes, corporations are in danger of killing the goose
that laid the golden egg.
Lessig is not a
lone voice. Tim Berners-Lee, the Web’s inventor, argues in his book Weaving
the Web that while “the lure of getting a cut of some fundamental part of
the new infrastructure is strong,” patents are inevitably a negative force in
cyberspace. They present, he says, “a great stumbling block for Web
development [because] developers are stalling their efforts in a given direction
whenever they hear rumors that some company may have a patent that may involve
the technology.”
But do
Internet-related patents really pose such a threat? Despite all the heat
generated by the Amazon.com 1-Click patent, for instance, Apple is the only
company known to have licensed it, which perhaps demonstrates that the
feature is hardly essential to Web commerce. Moreover, when confronted by
blocking patents, people usually find ways of working around the claims. Or they
adopt a completely different approach to achieve the same end, as many did in
response to the Unisys GIF patent (which coincidentally expires this June).
And when disputes do reach the courts, patent holders frequently find that some or all of their claims are declared invalid and the case thrown out, as happened when BT sued Prodigy over its claimed hyperlink patent.
Roy Simmer, who
runs the PATSCAN patent search service at the
University of British Columbia Library, believes Internet-related patents pose little threat. “I have done a
considerable amount of work investigating and searching e-commerce patents,”
he says. “The consensus is that the majority of these are probably invalid.”
Concern
nevertheless continues to mount about patents, particularly their potential to
impede the adoption of Internet standards. For this reason, there was a barrage
of criticism in 2001 when the World Wide Web Consortium (W3C) Web standards body
published a draft proposal aimed at introducing a new “reasonable and non-discriminatory” (RAND) licensing policy. This would have allowed those providing technology for Web
standards to claim patent rights and charge royalties, thus raising fears that
patent owners would be able to hold the Web community for ransom.
Eventually, the
W3C backed down. This March, it published a new draft policy in which anyone
participating in the creation of a W3C standard would be required to agree to
license any patented technology on a royalty-free basis. However, critics have
been quick to point out that the new proposal includes a loophole that could, in
some circumstances, allow patents to be embedded in standards in a way that
would conflict with the W3C’s royalty-free ambitions.
Others argue that
since patent litigation is hugely expensive, those targeted by patent owners
often simply pay up rather than mount a legal challenge. This is what Thompson
and Morgan, a small plant-seed specialist based in Ipswich, U.K., did when it received a letter from divine in January demanding royalties for its patent. “We consulted with our U.S. attorney and decided it was cheaper to pay up and negotiate a deal with them than to go down the litigation route,” explains Keith Lewis, Thompson and Morgan’s financial director.
According to
critics, paying royalties on Web patents whose validity is uncertain is
little more than digital highway robbery.
For the moment, then, the jury is out on the long-term impact of patents on the Internet.
Certainly, many continue to complain that the Web is in the process of being
privatized, or as some like to describe it, “enclosed.”
Not Patents Alone
This fear of
enclosure is focused not on patents alone, but on the whole gamut of
intellectual property. For instance, critics point to the frequent and bitter
disputes that are arising over domain names. They claim that companies are using trademark law, often unfairly, to bludgeon third parties into giving up their registered domain names.
However, copyright
has become the most controversial issue. For some, it represents the greatest
threat to the digital commons.
For instance, as
online advertising has failed to deliver expected revenues, Web site owners have
begun introducing subscriptions, thereby walling off their content. While not
disputing their right to charge for content, many worry that doing so will
diminish the value of the Web,
particularly where it goes hand in hand with increasingly aggressive use of
copyright and trademark laws to outlaw “deep linking” (the practice of
routing Web users directly to specific articles on third-party sites rather than
to the home page).
Critics argue that
these developments threaten to devalue the Web’s most fundamental building
block, the hyperlink, and therefore destroy much of the benefit of a networked
environment. Rather than seeking to replicate outdated offline models, they
suggest that content providers should place a greater emphasis on developing new
ones that are more appropriate to the digital economy.
Is this increasing
assertion of copyright a serious problem? John Collins doesn’t think so. All
we are witnessing, he says, is the development of a two-tier Internet. “More
and more people are beginning to accept that where before they were getting
things for free, they are now going to have to start paying. Consequently, I
anticipate there will be a hard core of free information and then a lot of
premium services with premium features—including copyrighted material and
patented technology able to offer wonderful new capabilities.”
Certainly,
there’s no evidence that the flow of free content on the Web is drying up.
This is demonstrated by the explosion of blogs, if nothing else.
Adding fuel to the
fire, the debate has become linked to the wider, related controversy over
changes to copyright laws and the growing ability of content providers to
“lock up” their digital content using digital rights management (DRM)
technologies.
First, critics
complain that recent changes to copyright law have extended copyright’s period of monopoly far too long. Under the Sonny Bono Copyright Term Extension Act, this period is now the lifetime of the author plus 70 years. “That,” says Neil Wallis, an IP lawyer at London law firm Macfarlanes, “is a very, very long time. After all, if I wrote something today and died in 2050, it would still be in copyright in 2120.”
Second, as a
result of new laws like the Digital Millennium Copyright Act and the EU
Copyright Directive, it’s now illegal to circumvent DRM copy-protection
technology, even when that circumvention takes place in order to exercise
traditional fair use rights.
While conceding
that copyright owners have legitimate reasons to fear Napster-style piracy,
critics argue that the response of content providers and legislators has been to
tip the balance too far in favor of the copyright owner to the significant
disadvantage of the consumer.
In fact, says
Anthony Murphy, director of copyright at the U.K. Patent Office, there are now
widespread fears that the combination of DRM technology and new copyright laws
will enable rightsholders to “use electronic padlocks to effectively achieve
perpetual copyright.”
The danger, argues Gilles Caron, director of the Bibliothèque Paul-Émile-Boulet at the Université du Québec à Chicoutimi in Canada, is that a great deal of copyrighted material may never
be released into the public domain. “All of humanity’s heritage is now
entering the marketplace,” he says. “If a business can buy the rights from a
descendant of an author and have these rights extended in perpetuity, it creates
a market where there was none before and opens the gates to the appropriation of
the legacy of humanity to the benefit [only] of those who have money to buy
it.”
Making matters worse, critics point out, content that should never rightfully be subject to the restrictions of copyright is being privatized and exploited for commercial ends.
Enclosing the Public Domain
Librarians, of
course, are no strangers to the issues raised by copyright. For this reason,
they have been at the forefront of attempts to publicize the threat that it now
poses to public-domain data. For some years, for instance, they have complained
about the way in which articles and papers generated by publicly funded research
are being appropriated by publishers like Reed Elsevier.
Since researchers
are required to assign copyright to the publisher in order to have their papers
published, librarians say research that’s funded from the public purse is
inappropriately becoming the exclusive property of publishers. Moreover, they
add, as access to refereed journals is increasingly provided by means of
electronic subscriptions, permanent print holdings are being replaced by
temporary electronic access alone. This raises the possibility that subscribers
will have to pay each time an article is accessed.
“When a library
buys an electronic copy of a journal or periodical, they often do not own the
rights to the archives,” says Jessamyn West, who runs the librarian.net blog. “So if you buy BigJournal electronically,
you do not often have access to BigJournal’s back issues when you stop paying
them. If you bought the paper copies, you’d still own them.”
Elsevier Science
chairman Derk Haank, however, disputes this. “Customers choosing not to
subscribe anymore still own the content that they subscribed to,” he insists.
“In such circumstances, they have two options: We can either supply it to them
on CD-ROM in any format they want, or they can have us keep it for them, and we
will continue to make it available at a nominal fee.”
Moreover, adds
Haank, it’s inaccurate to suggest that publishers like Elsevier are
privatizing research or seeking to monopolize access to it. In fact, in addition
to having Elsevier publish their articles, he says, “We want authors to put
their papers on their Web site and for universities to create institutional
archives of researchers’ articles, since the more widely available research
is, the better.”
He adds: “The
only thing we cannot allow is for this to be done in a way that would enable our
competitors to create an exact copy of our products. So if, say, someone created
an archive that included metadata capable of allowing a third party to create a
virtual copy of Elsevier’s Brain Research at the press of a button, then clearly we would feel we had thrown
the baby out with the bath water.”
Beware the Metadata
However, the issue
of metadata looks set to become as controversial as DRM’s copy-protection
features. There’s growing concern that it’s increasing the opportunity for
commercial publishers to appropriate public-domain data.
Speaking on
condition of anonymity, a Thomson insider points out that in today’s
electronic environment, raw data represents an ever-smaller component of the
value inherent in an information product. Consequently, content providers are
adopting a new model of information distribution. It assumes that more and more
of the value resides not in the data itself, but in the access and navigation
tools for locating it, and in the metadata that define document-to-document
relationships.
“So it would be
possible for a commercial product to contain ‘free’ information,” he
explains, “but the way the customer accesses this free content would be via
value-add navigation tools (search engines, tables of contents, topical
hierarchy, citators) or internal cross-references from copyrighted documents.”
As such, the
publisher would not be charging for the information so much as for the tools to
access it. Since this would mean maintaining sophisticated document-relationship
maps and indexes as separate metadata repositories, he notes that “publishers
would jealously guard the navigation tools and metadata repositories but not try
to own all the content.”
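To make that model concrete, here is a minimal, purely illustrative Python sketch of the separation the insider describes: free documents stored as-is, with the commercial value held in a separate metadata layer of editorial notes and document-to-document links. The class names, fields, and document identifiers are hypothetical and are not drawn from any publisher’s actual system.

```python
# Purely illustrative sketch (all names hypothetical): the underlying documents
# are public domain, but the publisher's value lives in a separate metadata
# layer recording document-to-document relationships and editorial enhancements.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Document:
    """A public-domain text, such as a court opinion; the text itself is free."""
    doc_id: str
    text: str


@dataclass
class MetadataRepository:
    """Proprietary layer: editorial headnotes plus a citation graph between documents."""
    headnotes: Dict[str, str] = field(default_factory=dict)        # doc_id -> editorial summary
    citations: Dict[str, List[str]] = field(default_factory=dict)  # doc_id -> cited doc_ids

    def add_relationship(self, citing: str, cited: str) -> None:
        """Record that one document cites or interprets another."""
        self.citations.setdefault(citing, []).append(cited)

    def related(self, doc_id: str) -> List[str]:
        """Navigation tool: documents connected to a given one via the citation graph."""
        return self.citations.get(doc_id, [])


# The raw opinions are freely reproducible...
opinions = {
    "case-001": Document("case-001", "Opinion text of case 001 ..."),
    "case-002": Document("case-002", "Opinion text of case 002 ..."),
}

# ...but finding your way between them depends on the publisher's metadata.
repo = MetadataRepository()
repo.headnotes["case-002"] = "Distinguishes case-001 on the question of fair use."
repo.add_relationship("case-002", "case-001")

print(repo.related("case-002"))  # ['case-001']
```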
The problem with
this model, argue critics, is that the free information at the heart of such
products may not be available elsewhere in digital format. In fact, as print is
phased out, it may not be available in any other format either. Where it’s available elsewhere digitally, it could be so hard to locate that it’s, for all intents and purposes, inaccessible. This would give commercial providers a
semi-monopoly on distribution—an outcome all the more likely given the intense
media consolidation taking place in the publishing and media industries.
This is already
evident in the legal information market, where the two big providers, LexisNexis
and Westlaw, have as good as “cornered the market for legal research
materials,” says Melissa Barr, legal resources specialist at Cuyahoga County (Ohio) Public Library.
In a recent
article in Searcher (http://www.infotoday.com/searcher/jan03/barr.htm),
Barr describes how these companies take court opinions as well as other
public-domain information and add editorial enhancements to them. This is done
to help attorneys and judges interpret points made in the cases and locate
similar cases. Because these enhancements are subject to copyright, however, the
free data at the heart of the resulting products (which is exempt from copyright
law) is effectively appropriated. The outcome is a paid-for proprietary product.
While some of this free data is also available on court Web sites, coverage is
limited. As a result, says Barr, “thousands of older cases are not available
to those who cannot pay.”
Additionally,
since the court-hosted data is dispersed across dozens of different sites and
requires knowledge of a wide range of (sometimes inferior) search engines, all
but the experienced searcher are effectively disenfranchised from accessing
current data too. This situation is all the more worrying, says Barr, because
often “public libraries are not allowed to offer access—free or fee—to
[these companies’] legal subscription databases.”
“The courts and
the court’s words belong to us,” she concludes. “In more ways than one,
the American people have already paid for the case law produced by our courts.
Commercial vendors must not be allowed to hijack our law or dictate who may have
access to it.”
If such concerns
prove justified, it may be that although high-profile patents like those granted
to SBC, BT, and Amazon.com will continue to attract a lot of media attention, a greater threat to the
digital commons will come from a less obvious process of enclosure. This threat
might emanate not so much from patenting basic Web features and processes, but
from the ability of large content companies to use copyright laws in order to
appropriate public-domain data and then enjoy a virtual monopoly on its
distribution. To address Haank’s fear that competitors will replicate the
resulting products, however, content providers can be expected to also seek
patents on their search and discovery tools.
At the same time,
concerns will surely grow that ever-more-powerful copy-protection techniques
will be used to keep proprietary copyrighted material out of the public domain.
In short, the death of the commons.
But is this
inevitable? Next month, in part two, I’ll look at the growing initiatives
aimed at protecting and reclaiming the digital commons.
This article has been reprinted in its entirety from the May 2003 issue of Information Today with the permission of Information Today, Inc., 143 Old Marlton Pike, Medford, NJ 08055. 609/654-6266, http://www.infotoday.com.