
You Have Died of Dysentery:

The Making of the Oregon Trail

Chapter 1: The Evolving Technology of the Time

When we began work on a brand new version of The Oregon Trail in 1984, the world of educational computer software was in the midst of extremely rapid change. In fact, it is only because of these changes that MECC undertook the project. In a matter of a few years, educational computer software had moved from a complete focus on mainframe computers to a near-complete focus on microcomputers, such as the Apple II and the IBM PC. This revolution in the hardware of computing was soon followed by major changes in the design of educational computer software. It was in the midst of this tumult that we started our project.

Computers in the 1970s – A Personal Story

Throughout most of the 1970s, work on educational computer software was done primarily on mainframe computers, and usually at universities. The term “mainframe” referred to any large, extremely expensive computer – usually costing more than a million dollars. The name arose because the central components of such a computer often resided in a large metal cabinet standing on the floor of a computer center. The biggest players in the mainframe industry at the time were IBM and Control Data. When I attended graduate school at the University of North Carolina at Chapel Hill, I did most of my work on an IBM System/360 – the most famous and successful mainframe computer on the market. Later, in graduate school at the University of Texas at Austin, I did most of my work on a pair of Control Data machines. Like most of the other people who used these same computers, I never actually saw the machines. These hugely expensive systems were locked in underground computer centers, operated and maintained by a team of professional technicians. I interacted with the computers through remote methods – punch cards, teletypes, and CRT (cathode ray tube) terminals.

Most mainframe computers operated on a time-sharing basis. This meant that the computer was set up to allow many users to run programs at the same time – thereby “sharing time” on the computer. There might be hundreds of simultaneous users, some submitting decks of punch cards and others sitting at teletypes or CRTs. Each user might be running a different program on the computer, or several users might be running different copies of the same program. In reality, the computer could only run one program at a time, and therefore the computer’s time had to be shared in little slices. Every user would get a tiny slice of time – usually less than a second – for their program to process a bit, and then the mainframe would give someone else a turn. But because of the fast and efficient way in which it doled out the slices of time, the computer usually responded to each input within a few seconds.
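For readers who have never used a time-sharing system, the round-robin idea can be sketched in a few lines of modern Python. This is a conceptual toy only – not the scheduling algorithm of any actual mainframe – but it shows how many users can appear to run “at the same time” on a single processor.

```python
# A toy illustration of round-robin time slicing. Each user repeatedly gets a
# tiny slice of processing time until their job is finished. (Conceptual only;
# real mainframe schedulers were far more sophisticated.)
from collections import deque

def run_timeshared(jobs, slice_units=1):
    """jobs: dict mapping user name -> units of work remaining."""
    queue = deque(jobs.items())
    while queue:
        user, remaining = queue.popleft()
        work = min(slice_units, remaining)    # give this user one small slice
        remaining -= work
        print(f"{user} runs for {work} unit(s); {remaining} unit(s) left")
        if remaining > 0:
            queue.append((user, remaining))   # back of the line for another turn

run_timeshared({"alice": 3, "bob": 1, "carol": 2})
```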

The most common way of communicating with the computer was to submit a deck of punched cards. I would go to a room where the keypunch machines were located, and wait my turn to sit at one of the keypunches. When my turn came, I inserted my deck of blank keypunch cards and began typing. With each letter or digit I typed, some holes would be punched into the current card. Each card represented one line of a computer program. If my computer program was 200 lines long, then I would need to prepare a deck of 200 punched cards. I would then go to a window at the computer center to submit my card deck to run on the mainframe. Sometime later, I could pick up my deck of cards and the resulting printout at another window. If I was really lucky, then my program ran without an error, and I would actually see the results. But, because I was writing my own computer programs, I would often submit the program dozens of times before it was perfected and free of bugs. Therefore, more often than not, the printout contained only error messages. I would need to make corrections to my program – by punching more cards, replacing the bad lines of code with new lines of code – and then try again.

However, there were other ways of interacting with mainframe computers. Back in 1971, when I was still in high school, I had the good fortune to attend a summer program in Georgia that allowed me to interact with three different mainframe computers – located on three different college campuses – by means of a teletype machine. Like a keypunch, the teletype machine had a keyboard. But instead of punching cards, every line of program code that I wrote was transmitted by phone line to the chosen computer – located more than 100 miles away. As long as I remained online, the computer would remember all the program lines that I had typed in. At any time I could try running the program in its current state, which allowed me to find errors in the program. At the end of my session, before shutting down, I could save my program onto a narrow strip of paper, called punch tape. The holes in the tape served the same role as the holes in a stack of punch cards, representing letters and digits. The next time I logged in, I could load the program from the punch tape and resume my work.

In grad school, I had to do most of my work using punch cards – a much slower method of communication than the teletype I had used in high school. But during my last year of grad school, in 1979, I was finally given access to a CRT – which was a bit like a TV, except that it could only display text. It could show 24 lines of text at a time, and I could run programs in “real time” and see the results on the screen, instead of having to wait for a printout from the computer center. I was not allowed to connect to the mainframe with the CRT, but I could connect to a DEC PDP-11 (a minicomputer), where I could actually store some files. Furthermore, the DEC system included a line editor – a very crude type of word processing program – which allowed me to write my master’s thesis online.

Of course, not everyone who used a mainframe computer in the 1970s wrote their own software. For example, there was a popular program for statistical analysis called SPSS. To use this program, I still had to submit a deck of cards, but the cards contained my data instead of the program – one card for each line of data. At the front of the deck I would include a few cards that directed the computer to load the SPSS program, and that told SPSS what format my data was in, and what kinds of statistical analyses I wanted to perform on the data. I frequently used SPSS to analyze my own data, and because of my expertise in using the program, I also performed statistical analyses for other grad students.

My principal interest in graduate school was how I might use computers for educational purposes. But this was years before any university offered a degree in computer-based instructional technology. I was exploring a field whose invention was just getting underway. It was unclear whether I should major in computer science, education, or the subject areas that I wanted to teach. For my first year in grad school I majored in computer science, but the program was tailored for a career in business, not in educational media. So I changed majors – and graduate schools – and I cobbled together an unofficial interdisciplinary program that combined computer science, education, and my chosen subject area – plant ecology. I had to choose one of these three subjects as my official field of study, and so I joined the Botany Department. Of the three possible choices, this was the one that gave me the most flexibility to craft my own program.

In graduate school I became increasingly aware of the concepts of CAI and CBT – computer-aided instruction and computer-based training. Both CAI and CBT included three principal kinds of computer-based activities – instruction, practice, and evaluation. A fully developed implementation would divide a course of instruction into many separate modules, and the student would need to progress through all of the modules. For each module, the student might be exposed to all of the following stages (a simple sketch of this flow, in code, follows the list):

  1. A pre-test, to see if the student already knows the material in this module and can skip the module entirely.
  2. A tutorial which provides instruction to the student. In practice this was often just a short text, like a chapter in a book – but theoretically it could be more than that.
  3. An opportunity to practice the concepts, sometimes called “drill and practice”. For example, in a math lesson, the student would be given a series of math problems. Ideally, any time a mistake is made, the student is given helpful feedback.
  4. A test, to see if the student has mastered the material and can graduate to the next module in the sequence.
  5. Remediation if the student failed the test. Ideally, this is not just a repetition of the tutorial, but instead is personalized to address the specific difficulties that the student experienced with the test. After remediation, the student resumes the practice stage, and then tries the test again. However, the computer is able to present a different version of the test each time that the test is taken.

While all of this was fascinating, it did not represent the entire world of possibilities for the educational use of computers – and it certainly did not represent my own vision of how computers could best contribute to learning.

In the Botany Department, I soon realized that research scientists were using computers to create models of ecological systems. These models ranged from the small and tightly focused to the broad and sometimes elaborate. For example, on the small end of the spectrum was a study of the effect of egg clutch size in birds – the number of eggs that are laid in the nest at the same time – investigating whether greater variability in the clutch size might be advantageous to the species when the food supply is also variable. On the grand scale was a study of all the food energy passing through an entire ecosystem, categorized into trophic levels – plants, herbivores, carnivores, and decomposers. Scientists found that such computer models could be very helpful for research purposes, allowing them to gain new insights, or in some cases to generate useful predictions.
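As a rough illustration of what a small, tightly focused model of this kind might look like, here is a toy sketch in modern Python. Every rule and number in it is invented for illustration; it is not the original clutch-size study, and its output should not be read as a biological conclusion.

```python
# A toy model in the spirit of the clutch-size example above: compare a fixed
# clutch size against a variable one when the food supply varies from year to
# year. All rules and numbers here are invented for illustration only.
import random

def surviving_chicks(clutch_size, food):
    """Assume each chick needs one unit of food; extra chicks do not survive."""
    return min(clutch_size, food)

def average_survival(clutch_choices, years=100_000):
    total = 0
    for _ in range(years):
        food = random.choice([2, 3, 4, 5, 6])     # variable food supply
        clutch = random.choice(clutch_choices)    # strategy being tested
        total += surviving_chicks(clutch, food)
    return total / years

print("fixed clutch of 4:    ", average_survival([4]))
print("variable clutch (2-6):", average_survival([2, 3, 4, 5, 6]))
```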

I saw these models and thought that they could be useful for instructional purposes, to help students understand all sorts of ecological concepts and relationships. Instead of lecturing about these complex interconnections, I would let the students play with computer models and see the results for themselves. However, most of the research models were too complex for teaching basic ecology to undergraduate students. So I began to investigate how I might create simple, easy-to-use computer simulations specifically targeted to undergraduate students – especially to students that were not science majors. After creating several such computer programs, I wrote my master’s thesis on the topic – an interdisciplinary effort combining the fields of natural science, education, and computer science – and I was awarded my master’s degree from the University of Texas at Austin in December 1979.

Meanwhile, also in the 1970s, educators at other universities and institutions were preparing little simulations of their own – in political science, economics, history, and other fields – for use with students. These efforts represented a very different philosophy than the traditional CAI approach – giving the student the freedom to experiment, and to learn from that experimentation, rather than sitting passively through a lecture or reading a chapter of text.

As I was finishing my master’s degree, I was given a full-time 6-month temporary position (funded by a National Science Foundation grant) designing and programming computer-based educational materials for undergraduate science classes at the University of Texas. Half of my job was to create simulation models, and the other half was to design and program a computer-based system for authoring and administering quizzes and tests. For this job I was working on another minicomputer – much smaller than the PDP-11 – called a Data General Nova. All of my work on this computer was done via a CRT. Likewise, the users for whom I created the software all used CRTs. This system not only let me store data; it also allowed me to set up shared data storage for multiple users, where I could control each user’s access. One interface I created allowed the instructor to create multiple-choice test questions, and to view students’ test results. Another interface allowed students to take the tests, to see how well they did, and to review their past results. In short, I created different views for different kinds of users into a common set of data – where each user saw a specific subset of the data, organized in an optimal way for that particular kind of user.
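The core idea – different views for different kinds of users over one shared set of data – can be sketched in a few lines of modern Python. The class and method names below are hypothetical, invented purely to illustrate the split between the instructor’s view and the student’s view.

```python
# A minimal sketch of one shared store of quiz data with two different "views".
# The names and details are hypothetical; only the division of roles matters.

class QuizStore:
    def __init__(self):
        self.questions = []              # shared by everyone
        self.results = {}                # student name -> list of scores

class InstructorView:
    def __init__(self, store):
        self.store = store
    def add_question(self, text, choices, answer):
        self.store.questions.append((text, choices, answer))
    def all_results(self):
        return dict(self.store.results)  # the instructor sees every student

class StudentView:
    def __init__(self, store, name):
        self.store, self.name = store, name
    def take_quiz(self, answers):
        key = [answer for _, _, answer in self.store.questions]
        score = sum(a == k for a, k in zip(answers, key)) / len(key)
        self.store.results.setdefault(self.name, []).append(score)
        return score
    def my_results(self):
        return list(self.store.results.get(self.name, []))  # own scores only

store = QuizStore()
teacher = InstructorView(store)
teacher.add_question("2 + 2 = ?", ["3", "4", "5"], "4")
student = StudentView(store, "pat")
student.take_quiz(["4"])
print(teacher.all_results(), student.my_results())
```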

During this period I was also exposed to computer games. Even though CRTs only displayed text, several early computer games were available by the late 1970s. In my limited spare time I would sometimes play ADVENT or ZORK, two early text-based adventure games. Eventually, I even wrote a small computer game myself. But unlike the adventure games of the time, which were single-player games, mine was a two-player game, with each player sitting at a different CRT. I was fascinated with the concept that two players might have different but incomplete views into a common set of data – and that this would serve as the basis of the game. After all, this is the exact basis of traditional card games – played without a computer. My feeling was that this concept could be applied to all sorts of games, even action games. My little game – intended only as an experiment – was called “Jump”, in which two players at different CRTs hop around an invisible grid, each trying to figure out from various clues where the other player is located. The first player to hop on top of the other player is the winner. I could have programmed it as a turn-based strategy game. Instead, I eliminated the requirement that the players take turns, allowing each player to hop as rapidly as he could manage. The game, even though very simple, was quite fun to play. But oddly, I seemed to lose to my opponent far more often than I won!
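The core rules of a game like “Jump” can be sketched in a few lines of modern Python. The grid size and the distance-based clue below are guesses invented for illustration – the original clue mechanism is not described here – and the sketch ignores the real-time, two-terminal aspect of the game.

```python
# A tiny sketch of a "Jump"-like game: two players on a hidden grid, each
# hopping around and trying to land on the other. The clue shown to a player
# (a rough distance) is a hypothetical stand-in for the original game's clues.
import random

GRID = 10

def hop(position, dx, dy):
    """Move one player by (dx, dy), staying inside the grid."""
    x, y = position
    return (max(0, min(GRID - 1, x + dx)), max(0, min(GRID - 1, y + dy)))

def clue(me, other):
    """A hypothetical clue: the 'city block' distance to the other player."""
    return abs(me[0] - other[0]) + abs(me[1] - other[1])

player1 = (random.randrange(GRID), random.randrange(GRID))
player2 = (random.randrange(GRID), random.randrange(GRID))

player1 = hop(player1, 1, 0)               # player 1 hops one square right
print("player 1's clue:", clue(player1, player2))
if player1 == player2:
    print("player 1 lands on player 2 and wins!")
```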

1977 to 1983: Seven Years That Turned the World Upside Down

In the seven-year period from the beginning of 1977 to the end of 1983, the world of computers turned upside down. I was in the midst of it, and I could see that rapid changes were occurring, but the significance of these changes was not fully apparent to me until sometime later.

At the beginning of 1977 I was still in my first year of grad school, and my major at that time was computer science. To me, the word “computer” meant a mainframe computer, like the IBM System/360 that I was using for most of my class assignments. I could see certain changes afoot, but I had no inkling that these changes would soon destroy the world of mainframes and usher in the world of personal computers. However, I could see that calculators were changing the way that we all worked. In 1970 we all still used slide rules or log tables to perform complex arithmetic, but now, at the beginning of 1977, it was quite common to own a pocket calculator. I also knew a little bit about the invention of microprocessors – such as the 8008 in 1972, the 8080 in 1974, and the 6502 in 1975 – but I was not yet sure what kinds of devices might eventually make use of these chips. I was quite aware of early (and crude) electronic games such as Pong, but I had no idea where this might lead. And I was aware of the surging minicomputer industry – producing computers such as the DEC PDP-11 and the Data General Nova – but I did not see it as a serious threat to mainframes.

At the very end of 1976 or the very beginning of 1977 – I don’t remember which – my classmates and I in grad school were given an assignment to program a KIM-1 board. This device was an integrated circuit board onto which were attached several chips – the main chip being a 6502 microprocessor, along with some RAM chips to provide memory. The only input device was a small hexadecimal keypad that looked like a calculator. The only output was a 6-digit hexadecimal LED display, directly above the keypad. In terms of the output, the KIM-1 wasn’t even as sophisticated or as useful as a handheld calculator. The device looked like this:

Unless you attached the KIM-1 to another device, the only way to use it was to type in a computer program on the keypad – using hexadecimal machine language. I couldn’t even use assembly language, much less a higher-level language like PL/I, FORTRAN, or Pascal. Therefore it was a long and tedious process to program the KIM-1. However, in my class assignments, I had already done machine language programming on a variety of mainframe computers – both modern and historical – so the actual programming was not much different. The difference was that the entire computer was sitting on a single board. The 6502 microprocessor had taken the place of the entire CPU assembly on any of the other computers I had programmed. The RAM chips had taken the place of the entire core memory assembly on any of the other computers. This miniaturization was absolutely amazing.

And yet I still did not get it. To me, the KIM-1 was just a geek toy without any real value – not a window into the future. I wanted to be able to program in a higher-level language. I wanted to be able to run applications, such as SPSS or a line editor. I wanted to be able to store my programs between sessions on a hard drive, instead of having to type in my program from scratch every time I powered up. I wanted a regular alphabetic keyboard instead of a hexadecimal keypad. And more than anything, I wanted a good system for multiline alphanumeric output, such as a CRT screen. The KIM-1 had none of these, so I just couldn’t see the point of it. I could see that microprocessors and RAM chips were important advances, but all I could imagine was using these chips in a compact version of a mainframe. In short, even though I had a good imagination for certain things – such as using simulations for instructional purposes – I had no imagination at that time for the future of computer hardware.

I didn’t have to wait long for other people to start building the future. During the next seven years the world of computers turned upside down as inventors created increasingly complex devices based on microprocessors. The list below names a few of the important devices that were introduced between 1977 and 1983. In parentheses is the name of the microprocessor chip family – which may differ from the exact model name of the chip:

1977 – Apple II microcomputer (6502); Radio Shack TRS-80 microcomputer (Z80); Commodore PET microcomputer (6502); Atari 2600 home video game console (6502)

1978 – Space Invaders arcade machine (8080)

1979 – Apple II Plus microcomputer (6502); Atari 400 and 800 microcomputers (6502); Texas Instruments TI-99/4 microcomputer (TMS9900); Galaxian arcade machine (Z80); Asteroids arcade machine (6502)

1980 – Commodore VIC-20 microcomputer (6502); Pac-Man arcade machine (Z80); Battlezone arcade machine (6502)

1981 – IBM PC microcomputer (8086); BBC Micro microcomputer (6502); Osborne 1 portable microcomputer (Z80); Defender arcade machine (6502); Donkey Kong arcade machine (Z80)

1982 – Commodore 64 microcomputer (6502); Kaypro portable microcomputer (Z80); Compaq Portable microcomputer (8086); Pole Position arcade machine (Z80); Tron arcade machine (Z80)

1983 – Apple IIe microcomputer (6502); Nintendo Entertainment System home video game console (6502); Mario Brothers arcade machine (Z80)

NOTE: The first Apple Macintosh went on sale in January 1984, just a few weeks after the end of this 7-year period.

By the end of 1983, the world of computers looked completely different than it had just seven years earlier. The market for mainframe computers was in freefall as large and medium-sized companies realized that it was much cheaper to buy and maintain minicomputers. IBM’s entry into the microcomputer market in 1981 legitimized personal computers in the eyes of many doubters – and now small businesses could use PCs to do word processing, run spreadsheets, and track their accounts. A home market for personal computers had taken off, and now anyone could own their very own computer to do serious work or to play games. Microprocessor-based arcade machines had become hugely popular, and games like Pac-Man were a massive cultural phenomenon. And a market for home video game consoles had also taken off, first dominated by Atari, and then by Nintendo. The world had completely changed, and was continuing to change at a rapid pace.

The Evolving State of Apple II Software

The Apple II first appeared on the market in 1977, replaced in 1979 by a slightly improved version called the Apple II Plus (or II+), and replaced again in 1983 by a still-better version called the Apple IIe. The Apple II Plus, compared to the original Apple II, included a more powerful version of BASIC called Applesoft BASIC, and it supported the use of floppy disk drives. It also included an improved “hi-res” graphics mode, which eventually led to major changes in the way that people designed and created Apple II software – although it took several years for these changes to happen.

Before the creation of the first microcomputers, there was obviously no home market for computer software. Even after 1977 – when the Apple II, the TRS-80, and the Commodore PET all became available – it took a few years before commercial computer software for these computers began to appear in stores. In the meantime, hobbyists who had purchased these machines scrambled to find software. In some cases, they wrote their own software. Sometimes they traded with other hobbyists. But the most popular approach was to buy magazines that included type-your-own programs listed on the pages. At the same retail stores that sold microcomputers, you could find all kinds of magazines targeted to owners of various brands of computers, all filled with programs that you could type in, save, and run. Not surprisingly, a great number of these programs were games.

When commercial software for microcomputers began to appear, the main emphasis was on business software. In 1979, the concept of the electronic spreadsheet was born when Dan Bricklin created a program for the Apple II called VisiCalc. This was considered to be the first “killer app” for a microcomputer – the first application that was so compelling that it justified the purchase of the computer. Once people began to purchase microcomputers to run VisiCalc, they also became interested in word processing software – such as Electric Pencil, WordStar, and WordPerfect. Buying such software was a much cheaper option than buying a dedicated word processing system – such as those sold by IBM and Wang. When the IBM PC was introduced in 1981, it soon became more popular than the Apple II for business applications.

From 1977 to 1981, most educational software for the Apple II was exceedingly simple. There were several reasons for this:

  1. A commercial market for educational software had not yet developed. Most of the existing educational software was being distributed free of charge, created by individuals or public institutions, rather than for-profit companies. (For example, MECC was still a public agency in 1981 – a unit of the Minnesota state government.) If you visited a store that sold microcomputers in 1981, then you would find some business software for sale – such as VisiCalc and several word processing programs – and you might even find some games, but no educational software.
  2. Most of the people creating educational computer software were amateurs – teachers, students, and hobbyists – programming in their free time. Professional software designers and engineers had not yet entered the market. (In late 1980 I tried very hard to find a full-time professional position designing and programming educational computer software. I visited universities all over the eastern half of the U.S., and at each school I was told that such a position did not exist anywhere. Therefore I felt quite lucky to be hired by MECC in 1981 as the first full-time professional Apple II developer in the organization.)
  3. As of 1981, almost all of the educational software for the Apple II was written entirely in BASIC, either Integer BASIC or Applesoft BASIC. Thus the software was limited to what could be done in BASIC – which was indeed quite basic.

But in 1982 some tiny private companies began to create and sell educational software for the Apple II, and this software began to appear in retail stores. This trickle soon became a flood, and by 1984 a huge number of companies had entered the market, including some giant media companies. The entire competitive landscape changed completely, and the new software titles became far more sophisticated and professional in appearance. By 1984 it was no longer acceptable to sell software that had been created in 1980, or to sell software that looked like the products from 1980.

The big difference between 1980 and 1984 was that companies were no longer held back by the limitations of Applesoft BASIC. Instead, by 1984 all commercial-quality educational software was created by mixing Applesoft BASIC with assembly language programming – or in a few cases, creating the entire product in assembly language. Assembly language programming was far more difficult and time-consuming than programming in BASIC, but it provided full access to all the capabilities of the computer. Any product that was created entirely in BASIC suffered from two major limitations – obvious to anyone who used the software:

  1. If an Apple II product was written entirely in BASIC, then the screens were usually presented in text mode. Therefore most screens had no graphics whatsoever, and the text was presented entirely in upper case. The Apple IIe, introduced in 1983, finally supported lower case text. But for many years there remained a great number of people and schools that owned older versions of the Apple II. To meet the needs of this existing user base, software had to work on the older machines, which precluded the use of features that were only available on the Apple IIe.
  2. If a program written entirely in BASIC used the “hi-res” graphics mode, then it had to use “shape tables” to draw pictures on the screen. These graphics tended to be crude, drawn as simple white lines on a black background. Furthermore, shape tables were very slow to draw, causing any attempt at animation to be extremely slow and crude. (The 1980 Apple II version of OREGON was written entirely in BASIC, and therefore the graphics in the program used shape tables.)

The concept behind a shape table was a bit peculiar. Imagine threading a sequence of small white beads onto a long black thread, then placing the string of beads on a black tablecloth. Now arrange the string so that the sequence of beads takes on a shape. A shape table was just like that string of beads – a sequence of dots (usually white) on a black screen. The first dot could be placed anywhere on the screen. The second dot needed to be in one of the 8 adjacent positions to the first dot (up, down, left, right, or one of the 4 diagonal directions). The third dot needed to be in one of the 8 adjacent positions to the second dot, and so on.

Example of a shape table

It was a laborious and time-consuming process to create a shape table, and the resulting graphics tended to look rather crude. Furthermore, whenever a program containing shape tables was run, it could take several seconds for a complicated shape to draw on the screen. This made it difficult to do animations unless the shapes were quite small.
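In modern terms, a shape table can be pictured as a list of one-pixel steps from a starting point. The short Python sketch below follows the bead-on-a-thread description above; it illustrates the concept only, and does not reproduce the actual Apple II shape-table byte format.

```python
# A conceptual illustration of a shape table: a shape stored as a sequence of
# one-pixel steps, each dot adjacent to the previous one. (This mimics the idea
# described above, not the real Apple II binary format.)

STEPS = {"U": (0, -1), "D": (0, 1), "L": (-1, 0), "R": (1, 0),
         "UR": (1, -1), "DR": (1, 1), "UL": (-1, -1), "DL": (-1, 1)}

def draw_shape(start, moves, width=20, height=10):
    screen = [["." for _ in range(width)] for _ in range(height)]
    x, y = start
    screen[y][x] = "#"                    # the first dot can go anywhere
    for move in moves:
        dx, dy = STEPS[move]
        x, y = x + dx, y + dy             # each new dot adjoins the previous one
        screen[y][x] = "#"
    print("\n".join("".join(row) for row in screen))

# A small triangle, encoded as a path of adjacent dots.
draw_shape((3, 7), ["UR", "UR", "UR", "DR", "DR", "DR", "L", "L", "L", "L", "L"])
```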

Almost every educational Apple II program released from 1977 to 1981 suffered from the two limitations listed above. But by 1984, virtually all new Apple II programs made use of two new approaches, made possible by the use of assembly language tools:

  1. Instead of printing text in text mode, all text was printed on the hi-res screen. If a company licensed or built a good hi-res text engine, then it could print text anywhere on the screen, positioned to the exact pixel. Furthermore, the text could be mixed with graphics. The text could be in any size, in any font style. And best of all, the text could include lower case letters, even on older machines that did not support lower case letters in text mode.
  2. Instead of using shape tables to display graphics, all graphics were displayed using “block images”. A block image is a pre-stored rectangular image of any size, in color. Every pixel within the rectangular block is assigned a color. Block images were created using a “paint program”, and then stored on the Apple II disk. A good image engine could then display such an image anywhere on the hi-res screen. A typical screen in a 1984 product might include several color images along with hi-res text.
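A minimal sketch of the block-image idea, in modern Python rather than 6502 assembly, might look like the following. The “screen” here is just a character grid standing in for the Apple II hi-res buffer, and the tiny “wagon” image is invented for illustration.

```python
# A conceptual sketch of a block image: a pre-stored rectangle of pixels that
# can be copied ("blitted") to any position on the screen in one step, rather
# than traced dot by dot like a shape table. (Illustrative only; a real Apple II
# image engine worked directly with the hi-res screen memory layout.)

def blit(screen, image, left, top):
    """Copy every pixel of a rectangular image onto the screen buffer."""
    for row_index, row in enumerate(image):
        for col_index, pixel in enumerate(row):
            screen[top + row_index][left + col_index] = pixel

WIDTH, HEIGHT = 16, 6
screen = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]

wagon = [                  # a tiny 4x3 "image", one character per pixel
    list("WWWW"),
    list("WWWW"),
    list("O..O"),
]

blit(screen, wagon, left=2, top=1)         # the same image can be drawn
blit(screen, wagon, left=9, top=2)         # at any position on the screen
print("\n".join("".join(row) for row in screen))
```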

Writing a program entirely in assembly language was far more difficult and time-consuming than writing one in BASIC. But Applesoft allowed you to mix the two languages, through a technique called “& hooks”. You could write most of the program in Applesoft, and then use an “& hook” to call an assembly language subroutine whenever you needed to display a paragraph of text or an image on the screen. It wasn’t even necessary for the Applesoft programmer to know any assembly language. Once a programmer had access to a good set of these “binary” subroutines, all he needed to know was the proper syntax to send commands to the subroutines.

So by 1984, block graphics and hi-res text allowed Apple II programs to have a much more sophisticated look than they did in 1980. Therefore the best Apple II software in 1984 looked completely different than the best Apple II software in 1980. (See Chapter 3 for a comparison of MECC products in 1980 and 1984.) But the differences went far beyond the appearance of the screens. The industry had completely transformed in those four years. In 1980 all the software was still being created by part-time amateur designers and programmers. In 1984 most of the best new software was coming from commercial ventures that employed teams of professional designers and programmers. Therefore all aspects of the software – not just the graphics – were becoming more elaborate and sophisticated. The bar was quickly getting raised higher and higher, and the expectations of customers (especially in the home market) were rapidly rising as a result.

*** End of Chapter 1 ***
