Theodor Holm Nelson, Project Xanadu
Sausalito, California

I was recently surprised to see my name and my original micropayment concept discussed by Walter Isaacson in the hallowed columns of TIME, and even more surprised to hear him mention my name on the Jon Stewart show.  In both venues, Isaacson brought forward ideas-- micropayment and its benefits-- that have been waiting a long time.

His article has stirred controversy in many places, including the New York Times.  But those who have commented on Isaacson's work have seen only one side of the picture, imagining micropayment, say, as paying $2 for a subscription or 10¢ for an article online.  As today's kids would say, THAT'S SO NOT MICRO!

I'd like to put the Micro back in micropayment, and bring back the rest of the idea.

First, I must clarify one issue.  The way Isaacson phrases it, I thought of hypertext in 1960 just in order to make micropayment possible: "Hypertext-- an embedded Web link that refers you to another page or site-- had been invented by Ted Nelson in the early 1960s with the goal of enabling micropayments for content."

This has it inside-out.  I came up with the idea of micropayment to make hypertext possible.  (Note that I coined both these words, hypertext and micropayment.)  But the hypertext I envisioned was very different from what we see now.  In 1960 there was no such thing as an "embedded Web link", since there was no Web, and my designs were very different.

Now, some people say the World Wide Web was my idea, but I make no such claim.  My idea was better.

The World Wide Web, a hypertext system seemingly so radical, is based on and locked to many computer traditions: a document simulates a rectangular piece of paper; a document is entirely represented in one file; a document consists of a long string of characters (and if any characters are quoted from elsewhere, their previous identity is lost); a document's formatting and links are mangled into the text content ("embedded markup"); a document's structure is hierarchical; and a document can only have one-way links (since the document is in one file, its links can only point outward).

All these conventions, I believe, are mistaken.  I can only hint here at how deeply they limit us.

Designing hypertext in 1960, I was bound by no such computer traditions, since they didn't exist.  Return with us now to those thrilling days of yesteryear, and think what might be possible if we begin from scratch, with no World Wide Web to lock us in.

Whoosh!  Here we are back in 1960.  Loosen your mind.  The computer screens are blank (in fact there are almost no computer screens, except in government hands).  And there are no computer traditions.  With no computer traditions, documents don't have to be rectangular (we can imagine computer documents as free-form, even changing, shapes).  We can show documents connected side by side for intercomparison, with their connections visible.  We can see quotations connected to their origins.  And much more.


These first 1960 musings about on-screen documents led to several big hypotheses:

1.  The computer screen may completely replace paper, and we have to plan for that possibility.  We can provide new documents and facilities never before seen (as above).  But that's just the beginning.

2.  We have to design a whole new literature for the screen-- transposing and redesigning all the things we do already in the world of paper.  We will need mechanisms corresponding to all aspects of paper literature: writing, annotation, publication, presentation, scholarship, archiving, preservation, librarianship.  All these must be rethought and remapped to this new world.  Some may become simpler, some may not.

3.  Even with so many requirements, this needs to be a simple universe, with few and simple principles.  Furthermore, the design must be generalized to apply not just to text, but to audio, video and movies as well.

4.  The copyright laws will not change, and must be recognized in the system design.

This means ownership and sale of content.  There will be rightsholders (authors, publishers) and content purchasers.  Somehow royalty payments to each author/publisher must be automatic, to simplify and lubricate transactions.  (As a writer and movie-maker, I knew that artists had to be rewarded.)

5.  And who will pay and how?  As in all aspects of life, economics will be fundamental; there must be a fluid system of commerce.

The deep-document system designed from these ideas is still an ongoing project; we won't get into its various names here.

The overall design of this system took far longer than I hoped (some two decades, working with several brilliant colleagues).  So far it has been politically difficult to implement, and it has been upstaged by the World Wide Web.  However, people now recognize that many necessary things are impossible on the Web-- clean side-by-side intercomparison, side-by-side annotation, two-way links, clean payment for content.

(I would like to say much more, but this note is about micropayment.)


While mechanisms of content sale are important, first let us consider the UNIT of content sale.

Some think, for instance, that the user should buy a subscription to an on-line journal for a year or a month.  That is an unacceptably big commitment in today's noncommittal world.

Others think content should be sold by the article.  But that, too, I would argue, is too much.  What if you discover, in the first paragraph, that it's not what you wanted?  And today's hyperactive users want to skim and jump.  They should not be burdened with paying for the parts they don't get to.  (Indeed, skimming and skipping have always been important secrets of being well-read.)

Should the unit of sale be the paragraph?  No.  Paragraphs come in very different sizes.  The sentence?  Ditto.

Should the unit of sale be some fixed number of characters, like a hundred or a thousand?  No again.  There is no need to fix any arbitrary unit.

Let me propose a simpler and more sweeping idea.  Sell content by the arbitrary piece-- charging for whatever length of portion the user sends for.  (Fully analyzed, this actually means selling by the character.)

Is this crazy?  It is no more difficult than selling other units, and solves a number of problems.

Here's how it should work.

(Note that the sale method must be smooth and non-intrusive; this is an interface issue.  Steve Jobs has shown with iTunes that people will buy content if it's easy, smooth and cool.)

Publishers place source content on special content servers.  The source content can be anything from finished pieces to manuscripts and raw notes.  The source contents do not change, so the addresses of content do not change.  Let's call these source units "content scrolls".

A publisher sets a price per character on a given content scroll, or makes it free ('free' means setting a price of zero).
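To make this concrete, here is one possible sketch of a content scroll-- a minimal Python illustration under my own assumptions, not a finished design: the class and field names are invented, and I express prices in millionths of a cent per character.  The key properties are that the source content never changes (so its address never changes) and that any arbitrary portion has a cost of size times unit price.

```python
from dataclasses import dataclass

@dataclass(frozen=True)        # frozen: source content never changes,
class ContentScroll:           # so the addresses of content never change
    address: str               # stable identifier, e.g. "xu://scroll/42"
    text: str                  # anything from finished pieces to raw notes
    price_per_char: int        # millionths of a cent per character; 0 = free

    def portion(self, start: int, end: int):
        """Any arbitrary piece, plus its cost: size times unit price."""
        piece = self.text[start:end]
        return piece, len(piece) * self.price_per_char

scroll = ContentScroll("xu://scroll/42", "To be, or not to be", 3)
piece, cost = scroll.portion(3, 8)
assert (piece, cost) == ("be, o", 15)   # 5 characters at 3 each
```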

When you click to get a document, first there comes an empty frame and a list of the content portions.  (So far no payment.)  Now your viewer program sends for each portion separately (just as today's browser brings in pictures from all over to make one busy illustrated page).  Each portion is delivered as soon as payment is assured.

Each portion is sold from where it sits in its original content scroll.  Each downloaded portion, no matter how small, is paid for according to size (the number of characters) and the price per character.

There need be no minimum download, since accounting can be to the millionth of a cent (now we're talking MICRO!).
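One way such micro-accounting might work-- a sketch assuming balances are kept as integer counts of millionths of a cent (the names and unit are illustrative), so that even one-character purchases add up exactly, with no rounding error and no minimum:

```python
# Keep every balance as an integer number of millionths of a cent, so
# arbitrarily small charges accumulate exactly (floats would drift).
MILLIONTHS_PER_CENT = 1_000_000

def span_charge(chars: int, price_millionths_per_char: int) -> int:
    """Charge for a portion, in millionths of a cent-- no minimum needed."""
    return chars * price_millionths_per_char

# A reader skims 10,000 one-character portions priced at 3 millionths
# of a cent per character:
total = sum(span_charge(1, 3) for _ in range(10_000))
assert total == 30_000                       # exactly 0.03 cents
print(total / MILLIONTHS_PER_CENT, "cents")
```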

Of course you don't get or see a portion unless you pay for it.  But you can skip downloading any portion, and thus not pay for it, since you can see its price, origin and size (if you want) before deciding.

(You should also be able to set a threshold saying "if I click and the cost is under X, just buy it.")

The portions retain their identity-- source addresses-- so that if a reader has already bought a portion, it's just pulled from where it already resides in the reader's cache memory.
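A hypothetical viewer-side loop might look like this (every name here is invented for illustration): a portion's identity is its source address plus span, so the cache is checked before any payment; purchases under the user-set threshold go through without asking, and larger ones are modeled as declined.

```python
class Scroll:
    """Minimal stand-in for a publisher's immutable source scroll."""
    def __init__(self, text, price_per_char):   # price in millionths of a cent
        self.text, self.price_per_char = text, price_per_char
    def portion(self, start, end):
        piece = self.text[start:end]
        return piece, len(piece) * self.price_per_char

servers = {"xu://scroll/7": Scroll("Call me Ishmael.", 2)}
cache = {}       # keyed by (source address, start, end): portion identity
ledger = []      # payments actually made

def fetch(address, start, end, auto_buy_under=100):
    key = (address, start, end)
    if key in cache:                       # already bought: no new payment
        return cache[key]
    text, cost = servers[address].portion(start, end)
    if cost >= auto_buy_under:             # over threshold: ask the user
        return None                        # (modeled as declining here)
    ledger.append(cost)                    # payment assured, then delivery
    cache[key] = text
    return text

first  = fetch("xu://scroll/7", 0, 7)      # buys "Call me" for 14
second = fetch("xu://scroll/7", 0, 7)      # cache hit: nothing new paid
assert first == second == "Call me" and ledger == [14]
assert fetch("xu://scroll/7", 0, 16, auto_buy_under=30) is None  # declined
```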

Having the source address also means the user may follow any quotation to its origin with just a click.

Of course a document so distributed may be one big portion-- a whole consecutive source article-- or built up from many portions of different sources.

"But this won't work on the Web!" you say.  We'll get to that.


Though copyright is now a huge public issue, people usually discuss only one aspect (unauthorized grabbing of content), and that in polarized, Manichean terms: downloading thieves (or liberators) versus copyright defenders (or trolls).

A key issue not being discussed is re-use of content-- a fundamental aspect of conventional publishing.

In the paper world it's tough.  If you've ever tried to publish an anthology, produce a documentary film, or publish an article with long quotations in it, you know the enormous complication of licensing the re-use of copyrighted material.

Publishers are accustomed to making copyright deals with other publishers, in advance, for a certain press (or production) run.  This is a bureaucratic and confrontational hassle-- negotiation, contracts, payment, publication in a pre-negotiated quantity (or venue)-- a draggy system of licensing that requires tight prediction of press runs and sales.

Now comes the beauty part.  The whole negotiation issue can be finessed within the system described.  Each publisher is exactly rewarded for re-used or quoted material, but no negotiation among publishers is required; the re-used material simply comes from the servers of the original publishers.  (The Transcopyright permission method gives this a legal basis.)
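As a toy illustration of this "no negotiation" flow (the addresses, publishers, texts and prices are all made up): a composite document is just a list of references into publishers' scrolls, and as a reader buys its portions, each original publisher is credited automatically, with no contracts among publishers.

```python
from collections import defaultdict

scrolls = {  # address -> (publisher, text, price per char, millionths of a cent)
    "xu://timeinc/9": ("Time Inc.", "Micropayment is back.", 5),
    "xu://xanadu/1":  ("Project Xanadu", "Everything is deeply intertwingled.", 2),
}

composite = [                    # an anthology quoting two publishers
    ("xu://xanadu/1", 0, 10),    # "Everything"
    ("xu://timeinc/9", 0, 12),   # "Micropayment"
]

royalties = defaultdict(int)     # publisher -> millionths of a cent earned
for address, start, end in composite:
    publisher, text, price = scrolls[address]
    royalties[publisher] += (end - start) * price   # automatic, no contracts

assert dict(royalties) == {"Project Xanadu": 20, "Time Inc.": 60}
```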


Forty years ago, even twenty years ago, these ideas sounded insane to people.  But I have grown accustomed to the grudging vindication of my ideas one by one.  People sneered when I said there would be a big market for personal computers.  People laughed at the notion of world-wide hypertext.

Now perhaps people are ready to see other facets of the original idea, including micropayment, but will it be the real deal?

What I am proposing is a different document format-- a compositing format for serious document work and intercomparison, as well as for salable content.

The deep document structures I propose, with their different linkages, views and payment, have been prevented by the present methods of the World Wide Web-- by its viewer standard, the Web browser, which is built on those computer traditions.  I've argued about this with Tim Berners-Lee (whom I like and respect), but he is locked to his traditions.  He created the original Web simplification of hypertext and now controls the Web through the browser standard.  It will not change.

However, recently there have been breakthroughs in viewing methods that bypass the Web rules-- YouTube, and the view for veiled content offered by Amazon and Google.  These work in the Web frame but outside the conventional Web page.  This may be the best approach to finally getting serious documents.  It makes the system "shovel-ready", in today's fashionable term.
Progress involves back-and-forward steps.  Old thinking often takes a while to come alive in new minds.  Back to the future.