This page describes some of the work I did in the early days of the World Wide Web (1993-1995). In many of these cases the emphasis was on providing services to make existing data collections available to web users.

The Robots Exclusion Protocol

I developed the `robots.txt` "Robots Exclusion Protocol", which is currently used by over 500 million websites. See "Robots.txt is 25 years old" for the history.
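As a quick illustration of what the protocol does (the rules and user-agent name below are made up for the example), a robots.txt file is a plain-text list of `User-agent` and `Disallow` directives served from a site's root, which well-behaved crawlers consult before fetching pages. Python's standard `urllib.robotparser` can evaluate one:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, showing the two core directives
# of the protocol: User-agent and Disallow.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Anything under /private/ is off-limits to all robots;
# everything else may be fetched.
print(parser.can_fetch("ExampleBot", "/private/data.html"))  # False
print(parser.can_fetch("ExampleBot", "/index.html"))         # True
```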

The LWP libwww-perl library

Back in 1994/1995 I restructured Roy Fielding's libwww-perl as a Perl 5 object-oriented library, and collaborated on it with Gisle Aas (see history) before he took over ongoing development and maintenance. The library is still around (and popular), and there is a book.


HTTP 1.1

I contributed to the development of the HTTP 1.1 specification, and am quite chuffed to see my name in RFC 2616. But I can't actually recall what I did to warrant that; I don't think it was anything major.


ALIWEB

ALIWEB was a search engine for the Web, based on automated meta-data collection.

This system was presented at the First International Conference on the World-Wide Web, Geneva, 1994. Here are the paper (aliweb.pdf) and the ACM citation.

Note that I have nothing to do with the site that currently operates under the aliweb name. It appears some marketing company has taken the old aliweb code and data, and is using it as a site for advertising purposes. Their search results are worthless. They claim to have trademarked "aliweb", but I have been unable to confirm this in trademark searches. My recommendation is that you avoid them.


ArchiePlex

Probably the most popular of the services listed here, ArchiePlex was a Web interface to Archie, a search engine for FTP sites that pre-dates the Web.

This service was implemented as a script that used an existing command-line client to query external databases. Several sites around the globe offered this service to their users.
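The pattern was simple: a gateway script shelled out to the existing command-line client and wrapped its plain-text output in HTML for the browser. A rough sketch of that pattern in Python (the `archie` command, its `-h` flag, and the sample listing are my assumptions for illustration, not the original code):

```python
import subprocess
from html import escape

def archie_search(keyword, host="archie.example.org"):
    # Shell out to the existing command-line client; "-h" selecting
    # the Archie server to query is an assumed flag for illustration.
    result = subprocess.run(
        ["archie", "-h", host, keyword],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def to_html(listing):
    # Wrap the plain-text result listing in a preformatted HTML page,
    # escaping it so it is safe to send to the browser.
    body = "\n".join(escape(line) for line in listing.splitlines())
    return "<html><body><pre>\n" + body + "\n</pre></body></html>"

# Render a canned listing; a real gateway would call archie_search()
# and print the result as its CGI response.
print(to_html("ftp.example.org  /pub/mac/games.sit"))
```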


CUSI

CUSI was a simple tool that allowed you to search different search engines in quick succession, without having to re-type your keywords.

Since it was based on a simple script that redirected the browser to the search engine, this service was made available on a number of different web sites.

The Macintosh Catalog

The Macintosh Catalog was a Web search interface to the University of Michigan Macintosh public domain archive.

The RFC and Internet Drafts search engines

The RFC and Internet Drafts search engines provided a searchable Web interface to the IETF's RFC and Internet Drafts databases.

The Perl FAQ

This service was based on a script that processed the Perl FAQ (as plain text) from comp.lang.perl.misc, and transformed it into a collection of linked HTML pages.
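The transformation can be sketched as: split the plain text on question headings, write one HTML page per question, and generate an index page linking them together. The `Q:` heading format and sample text below are assumptions for illustration; the actual comp.lang.perl.misc posting was formatted differently:

```python
import re
from html import escape

# A tiny sample in the spirit of the plain-text FAQ (invented here).
FAQ_TEXT = """\
Q: What is Perl?
Perl is a practical extraction and report language.

Q: Where do I get Perl?
From your nearest CPAN mirror.
"""

def faq_to_pages(text):
    # Split the text on "Q:" headings, emit one HTML page per
    # question, and build an index page linking to them all.
    entries = re.split(r"(?m)^Q: ", text)[1:]
    pages, links = {}, []
    for i, entry in enumerate(entries, start=1):
        question, _, answer = entry.partition("\n")
        name = f"q{i}.html"
        pages[name] = (f"<html><body><h1>{escape(question)}</h1>"
                       f"<p>{escape(answer.strip())}</p></body></html>")
        links.append(f'<li><a href="{name}">{escape(question)}</a></li>')
    pages["index.html"] = ("<html><body><ul>\n"
                           + "\n".join(links)
                           + "\n</ul></body></html>")
    return pages

pages = faq_to_pages(FAQ_TEXT)
print(sorted(pages))  # ['index.html', 'q1.html', 'q2.html']
```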