An interview with the power behind Robot Wisdom, Jorn Barger.
Conducted via email by John S. Rhodes (27-Sept-99)
What is Robot Wisdom? What is your role?
I spent the 1970s looking for a way to study human psychology without betraying either the scientific method _or_ the human spirit. Around 1978 I conceived a paradoxical-sounding model-- building computer simulations based on literary descriptions of human behavior-- and coined the term 'Robot Wisdom' to encapsulate that paradox.
I continue to pursue that goal, but the sticking point now is an elegant core-definition of human nature-- the simplest possible universal human story.
Even before there was a Web, tens of thousands of us lived on the Net, via mailing lists and newsgroups. Which meant that the Net was what we thought about all day, when we had the freedom-- the debates, the flamewars, the new friends, the opportunities for self-expression.
I was a very, very late adopter of the Web, not switching from lynx (text-only, Unix-based) to Netscape until late 1997. But by that point the Web had grown into a vast impenetrable treasure cave, generally in pitch blackness. I desperately wanted someone to 'turn on the lights' so I could see what was where, what treasures were there for my enjoyment.
So I determined to take on that task for a while-- to devote full time to lighting up the dark corners, building my "Net.literate" portal, and keeping up a running commentary in my weblog.
You can't gauge Internet behavior without operating on Internet time!
I can post a human-factors poll on my weblog and have 200 answers within 24 hours. I can watch my server-logs and see the next day how effective a new link was, how attractive a new page was.
I can do experiments as fast as I can think of them. No lab required, no advertising for volunteers, no months of waiting for journal publication.
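[Editor's note: here's a minimal sketch, in Python, of the kind of server-log experiment Jorn describes-- counting how many visitors reached a newly linked page, and where they came from. The log file name, the page path, and the Apache 'combined' log format are all assumptions, not details from the interview.]

    # Count hits on a hypothetical test page, and the referrers that
    # sent them, from an Apache combined-format access log.
    import re
    from collections import Counter

    LOG_FILE = "access.log"    # assumed log location
    TARGET = "/newpage.html"   # assumed page under test

    # host ident user [date] "METHOD path ..." status size "referer" "agent"
    LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] '
                      r'"(?:GET|POST|HEAD) (\S+)[^"]*" \d+ \S+ "([^"]*)"')

    hits = 0
    referrers = Counter()
    with open(LOG_FILE) as log:
        for entry in log:
            m = LINE.match(entry)
            if m and m.group(1) == TARGET:
                hits += 1
                referrers[m.group(2)] += 1

    print(f"{TARGET}: {hits} hits")
    for ref, n in referrers.most_common(5):
        print(f"  {n:4d} from {ref or '(direct)'}")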
But if I were just watching from the sidelines, my polls wouldn't begin to attract the required interest. They only work because I'm fully engaged in the r/evolution.
Hypertext design theory has never recovered from its origins in pre-Net technologies, because netlag changed everything. Dividing up a document into lots of short pages makes great sense if they're all on your local drive, but the real Web isn't anywhere close to that, and probably never will be. Yet Jakob Nielsen goes on saying surfers would rather not scroll!
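[Editor's note: the netlag point is easy to put in numbers. A back-of-envelope sketch, with every figure invented for illustration (a one-second round trip and a 30 KB article are plausible dial-up-era values):]

    # Every extra page fetch pays a fixed round-trip delay before
    # any text arrives, so splitting an article multiplies the wait.
    ROUND_TRIP = 1.0      # seconds of latency per request (assumed)
    BANDWIDTH = 3000      # bytes per second on a slow modem (assumed)
    ARTICLE = 30000       # total article size in bytes (assumed)

    def read_time(pages):
        """Seconds to fetch the whole article split into `pages` parts."""
        return pages * ROUND_TRIP + ARTICLE / BANDWIDTH

    for pages in (1, 5, 10):
        print(f"{pages:2d} page(s): {read_time(pages):5.1f} s")
    # 1 long page: 11.0 s; 10 short pages: 20.0 s -- pagination nearly
    # doubles the wait, and the reader gains nothing for it.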
Selling links is a different sort of fluke-- Deja.com and AltaVista and many, many other popular sites are viewed by marketing droids as opportunities to distract people, so they load pages up with banner ads and pathetic links to sponsoring sites. Which *subtracts value* for the user, instead of adding value. Very clueless!
XML is a very complex story to summarize, but it fails miserably as AI, as HF (human factors), and esthetically as well. XML is fine for database work, but documents in natural language really aren't anything like databases.
The notion that text _styles_ reflect text semantics is 99% false-- the styles are mostly ways of grouping diverse text elements, and varying their levels of emphasis. They're totally context-dependent, so forcing them to be linked to semantic tags is infinitely inefficient.
Nor do you solve any important AI problems by tagging text-elements in the body of the document-- it would work much better to summarize the content in a META header, but we still don't have the universal indexing scheme required to do this usefully. So in this, XML resembles a 1960s-era cross-your-fingers-and-hope-for-the-best NLP project.
Nor, from an HF standpoint, can you realistically upgrade existing HTML by creating a non-compatible standard. Nor is XML going to be convenient for anyone but specialists to use 'properly'.
The W3C standards group has no real expertise in any of these areas, so their untested recommendations should be viewed with extreme skepticism. (Just look at their unreadable website!)
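[Editor's note: Jorn's META-header alternative above can be sketched concretely. The META names 'description' and 'keywords' are standard HTML; the subject terms below are invented examples, not his indexing scheme.]

    # Summarize a document once, in its <head>, instead of scattering
    # semantic tags through the body text.
    def meta_header(title, summary, topics):
        return ("<head>\n"
                f"  <title>{title}</title>\n"
                f'  <meta name="description" content="{summary}">\n'
                f'  <meta name="keywords" content="{", ".join(topics)}">\n'
                "</head>")

    print(meta_header(
        "Robot Wisdom Weblog",
        "Daily links on AI, James Joyce, and hypertext theory.",
        ["weblogs", "artificial intelligence", "James Joyce"]))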
First, learning the Web is an urgent investment for _anyone's_ future, because it's an extremely complex activity to do well, yet trillions of dollars are being spent on it, at this point quite blindly.
At the simplest level, pointing and clicking is something a chimp can do, and basic HTML is something a schoolchild can do.
But mastering the Web even as a surfer is vastly complex-- what are the best sources, when do they publish, how can you track them, how can you minimize the inefficiencies of badly designed sites, how can you manage your growing list of bookmarks?
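[Editor's note: one of those surfing chores-- tracking when favorite sources publish something new-- can be sketched in a few lines of Python. The bookmark list and state-file name are hypothetical; only the standard library is used.]

    # Flag bookmarked pages that changed since the last run, by
    # comparing a hash of each page against a saved state file.
    import hashlib, json, os, urllib.request

    BOOKMARKS = [                  # hypothetical reading list
        "http://www.robotwisdom.com/",
        "http://www.useit.com/alertbox/",
    ]
    STATE_FILE = "seen.json"       # hashes saved from the last run

    seen = json.load(open(STATE_FILE)) if os.path.exists(STATE_FILE) else {}

    for url in BOOKMARKS:
        try:
            page = urllib.request.urlopen(url, timeout=10).read()
        except OSError as err:
            print(f"?? {url} ({err})")
            continue
        digest = hashlib.md5(page).hexdigest()
        if seen.get(url) != digest:
            print(f"NEW {url}")    # changed since the last visit
            seen[url] = digest

    with open(STATE_FILE, "w") as f:
        json.dump(seen, f)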
And mastering the Web as a publisher is a topic no one has scratched the surface of yet-- new strategies are discovered daily and if you're not running as fast as you can, you're already falling far behind.
Selling enough books to live on is all I'm really after. But my ideas are so unfamiliar I've had to 'build my brand' somewhat first. And the weblog has been wonderfully effective for this, because it forces me to focus almost entirely on more familiar topics instead of just Joyce and AI and hypertext theory.
The book version of my website should be out by the end of next year, self-published. It will be about one-fourth AI, one-fourth James Joyce, one-fourth Internet theory, plus various odds and ends. I'm keeping a list of email addresses to notify when it's done.
[Editor's note: Send Jorn an email with the words 'hardcopy list' as the subject or in the body to find out when his book is published.]
I hope that Web designers will remember not to break up articles over many pages, and that everyone else feels a little less guilty about spending big chunks of their days websurfing!