Professional Bio

Eric Armstrong

This document has the following sections: TimeLine, Professional Background, "Extra-Curricular" Projects, and Personal Interests.

See Also: Publications and Ideas.

TimeLine

Professional Background

In the '70s, I attended Ohio State University. I minored in Philosophy, Political Theory, Systems Theory, Mathematics, and Psychology, rolling them all into a degree under the heading: "Techniques of Analysis: Methods for Defining and Resolving Problems". (My middle name is "eclectic".)

The systems theory work occurred in a variety of departments, since many departments seemed to have one course devoted to a systems-theoretic view of their subject matter. It was in those courses that I was exposed to the concepts of feedback loops and the unpredictability of large, complex systems. (Forrester was particularly clear in this regard, as I recall.)

I was exposed to computers during my work in the Political Science department (where I mainly concentrated on Political Theory). It was then that I saw the computer's potential for simulating complex systems in order to help us anticipate otherwise unforeseeable reactions to policy interventions and figure a way out of the kinds of messes that were already on the horizon -- problems which are even more acute today.

So the reason I got into computers in the first place was to use them as a tool to augment human reasoning.

After doing graduate work in computer science, I experimented with "artificial intelligence", in the form of a game program that did a tree search. It implemented alpha/beta pruning and original evaluation heuristics. It performed quite well in international competition: it was the first program to extend the search horizon for forced moves (Hans ____ at Carnegie Mellon later wrote a paper on the subject). It was also the only program to "play like a human" after securing an initial advantage. (Most programs at the time worked hard to get a good position, but were incapable of effectively capitalizing on it afterwards.)
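
In rough outline, the search worked along the lines of the sketch below. (This is a latter-day reconstruction of the general technique, not the original code; the Position interface, its evaluation, and its forced-move test are placeholders.)

    import java.util.List;

    // A sketch of alpha/beta search with a "forced move" extension: when the
    // reply is forced, the move costs no depth, so forcing sequences are
    // searched to their conclusion instead of being cut off at the horizon.
    interface Position {
        List<Position> legalMoves();   // successor positions for the side to move
        int evaluate();                // static score, from the mover's point of view
        boolean isForced();            // e.g., only one legal reply (a placeholder test)
    }

    class AlphaBeta {
        // Negamax formulation: each side maximizes its own score.
        static int search(Position p, int depth, int alpha, int beta) {
            List<Position> moves = p.legalMoves();
            if (moves.isEmpty() || (depth <= 0 && !p.isForced())) {
                return p.evaluate();   // end of game, or a quiet position at the horizon
            }
            // The extension: forced moves do not reduce the remaining depth.
            // (A real program would cap how far this can run.)
            int nextDepth = p.isForced() ? depth : depth - 1;
            for (Position next : moves) {
                int score = -search(next, nextDepth, -beta, -alpha);
                if (score >= beta) {
                    return beta;       // the opponent will avoid this line: prune it
                }
                alpha = Math.max(alpha, score);
            }
            return alpha;
        }
    }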

As a result of the forced move searches and its unique heuristics, the program managed to completely "wipe out" its opponent on two occasions -- the only tournament wipeouts ever recorded in competition. (One of them was against David Levy's entry. David, famous for his $10,000 bet that no computer would beat him at chess by 1980, was running a program written by one of his company's programmers -- a program that happened to be using the same algorithm that early versions of my program had used -- and that later versions had been designed to beat.)

At bottom, however, the program was still fundamentally a "brute-force" program -- which meant it was not "reasoning" in any form that we could hope to learn from or use to improve ourselves. Despite its significant accomplishments, therefore, it was still profoundly "unsatisfying", because it had not achieved its primary goal of augmenting *human* intelligence. (I had originally hoped to use the playing strength it developed to produce an infinitely patient teacher -- but that vehicle could not be built on a brute-force chassis.)

It was around this time that I began to value the concept of "man/machine symbiosis". It struck me that one way to arrive at good symbiotic mechanisms was to set up some sort of war game in which one computer acted as the playing ground, and the other "players" (each of which might consist of a man, a machine, or a combination) competed against each other. The right combination of strategic thinking by the human and detailed execution by the machine would probably produce the winner, and the game would provide a means of identifying which method of interaction worked best.

After doing the artificial intelligence work, I wound up at Data General. While there, I wrote a multi-tasking library in assembler and an execution-time profiler for programs. The last year of development on the AI program had been devoted mostly to performance optimization. That experience, coupled with the system optimization I had been doing out in the field, eventually led me to join their Major Opportunities division, where we gave presentations, ran benchmarks, and undertook any technical task necessary to close large business deals.

Because of my interest in computer languages (at last count, I had learned some 20 of them), I wound up in a 3-week project in which we had to produce a complete set of demos for a new computer system, for which the operating system was not yet ready! We used Forth, and I became close friends with the 4 other fellows who participated in that project.

As the deadline loomed, we were there pretty much around the clock, so we got to know each other quite well. I still vividly recall the final moments before the exhibition. The marketeers were out on stage giving their slideshows. When they finished, the panel behind them would swing up like a garage door, exposing all the new computers and all the demos we had written. While they were speaking, we were sitting in that little room, frantically putting the finishing touches on the code! The last of us (me!) finished with about 10 minutes to spare. Then the door swung up, the customers came in, and we proceeded to give the demonstrations.

During this time I also wrote a small voice mail application in C and a terminal emulator for the client side of an early client/server prototype. The company sent me to Toronto for a week to learn Prolog. When I got back, I prototyped a small "Expert" Expert System. It kept track of who was knowledgeable in which subject areas, as well as the relationships between those areas, so that the "closest reasonable" expert on a given topic could be found.
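
The idea, roughly, is sketched below -- in Java rather than the original Prolog, with made-up data structures and a simple breadth-first notion of "closest" standing in for whatever rules the prototype actually used:

    import java.util.*;

    // The subject areas form a graph of related topics; each subject may have
    // people listed as knowledgeable in it. To find the "closest reasonable"
    // expert, walk outward from the requested topic, one relationship at a
    // time, until some nearby subject has an expert on record.
    class ExpertFinder {
        private final Map<String, Set<String>> relatedSubjects = new HashMap<>();
        private final Map<String, Set<String>> expertsBySubject = new HashMap<>();

        void relate(String a, String b) {                 // subjects are related both ways
            relatedSubjects.computeIfAbsent(a, k -> new HashSet<>()).add(b);
            relatedSubjects.computeIfAbsent(b, k -> new HashSet<>()).add(a);
        }

        void addExpert(String person, String subject) {
            expertsBySubject.computeIfAbsent(subject, k -> new HashSet<>()).add(person);
        }

        // Breadth-first search: the first subject reached that has experts wins.
        Set<String> closestExperts(String topic) {
            Deque<String> queue = new ArrayDeque<>(List.of(topic));
            Set<String> seen = new HashSet<>(List.of(topic));
            while (!queue.isEmpty()) {
                String subject = queue.poll();
                Set<String> experts = expertsBySubject.get(subject);
                if (experts != null && !experts.isEmpty()) {
                    return experts;
                }
                for (String related : relatedSubjects.getOrDefault(subject, Set.of())) {
                    if (seen.add(related)) {
                        queue.add(related);
                    }
                }
            }
            return Set.of();                              // nobody even close
        }
    }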

While at Data General, I began looking at directory trees as hierarchical data structures, and realized I could use those same constructs to organize the large volume of action items for my software projects. While working on one module, I would typically have ideas regarding several other modules -- the hierarchical structure gave me an easily manipulated outline I could use to record those action items. Then, when I started working on a module, all the ideas that had occurred to me while working on other modules were right at hand.
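
At bottom, the structure is nothing more than a tree of entries. The Java sketch below is purely illustrative (none of the actual prototypes looked like this):

    import java.util.ArrayList;
    import java.util.List;

    // An outline entry: a line of text plus any number of nested entries.
    // Action items for a module get filed under that module's entry, where
    // they will be waiting when work on the module begins.
    class OutlineNode {
        final String text;
        final List<OutlineNode> children = new ArrayList<>();

        OutlineNode(String text) { this.text = text; }

        OutlineNode add(String childText) {               // returns the child, so entries nest
            OutlineNode child = new OutlineNode(childText);
            children.add(child);
            return child;
        }

        void print(String indent) {                       // the familiar indented outline view
            System.out.println(indent + text);
            for (OutlineNode child : children) {
                child.print(indent + "    ");
            }
        }
    }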

I did one prototype using directory names and command scripts, and later a second prototype using Basic and atomic file manipulations. Eventually, I started a company to develop outliner programs. (The goal, as with the bootstrap project, was to build productivity software, use it to become even more productive, and take advantage of the positive feedback loop!) Our eventual result, StreamLine, was a technical triumph but a marketing disaster. It was a good general-purpose tool, but it did not solve any one problem so completely as to be compelling. I learned an important lesson from that.

I vividly remember my experiences trying to communicate the value of an outliner. I would explain it to people, and they would reply, "But I can do that with a text processor." It was almost impossible to make them realize just how much more easily they could do it in an outliner. Eventually, I'd get some of them to try it. Then they would say, "This is great! I don't know how I got along without it!"

It's humorous in retrospect. But it was also frustrating, because the very next person I spoke to went through the same reactions. And the next one. And the one after that. I could never seem to get the idea into their heads! (Eventually, I got good at doing that, but it took 2 years before I figured out how to get their attention in 30 seconds, pique their interest in a couple of minutes, and get them to sit for a demo.)

I did some consulting work at Sun during this time period, and wound up reengineering a voice mail system for a subsidiary of Data General in Dallas, Texas. That experience taught me the value of keeping a design journal, and brought vividly to light the importance of being able to answer "Why?" when staring at page after page of "mystery code".

After moving to California, I worked at Oracle, where I became a writer. The patient editors there poured so many tons of red ink on my early manuscripts that you could barely find the text. But they had found it was easier to train a programmer interested in writing than to teach a writer who didn't understand programming. The collaboration worked, and I came out of it a writer.

I then had a stint authoring help systems. Again, the programming experience was helpful, and I began to get good at communicating information in a hypertext medium.

By this time, I was looking forward to getting back to programming, so when the opportunity came up to author a book on Java, I jumped at it. The result was The JBuilder 2 Bible, which taught Java from the ground up, using an IDE. It was designed as a community college textbook, as well as a self-study guide. It got *great* reviews from people learning the language -- the people it was aimed at. Unfortunately, the editors' decision to call it a "bible" caused hard-core techies to pick it up and come away dissatisfied. (I begged them to change the title, but they had the final say.) I also wrote several articles for JavaWorld during this time period.

At the moment, I'm contracting at Sun, where I authored the XML programming tutorial at java.sun.com/xml, and where I also wrote several applications to automate the document production and localization processes. (The tutorial has also received great reviews. Several people said that it is the best thing they have seen on the subject to date.)

"Extra-Curricular" Projects

Other projects I'm working on (only one at any given time):

Personal Interests

 
