In April of 1964 IBM announced the availability of a new line of computers – the IBM
360. This was a pretty major announcement in many ways, but the key was that
this was to be a “family” of compatible processors which could all run the same
programs. Prior to this you would have one processor to run “business” programs
(such as the IBM 1401), and another to run your “scientific” programs (such as
the IBM 7090). The IBM 360 had an instruction set that enabled both. Also, the
various processors were of many different sizes so that as a company grew (or
its use of computers grew), you could continue to run the same programs without
having to rewrite them. The first IBM 360s were shipped in 1965.
By the time I worked at Uniroyal in the summer of 1968, they had three processors
in their data center – a model 30 with 64K, a model 40 with 128K, and a model
50 with 256K (a max-size model 50 was capable of supporting 512K). Note that “K”
is 1,024 bytes (roughly a thousand), so for those of you who are used to measurements
of mega-bytes (a million), giga-bytes (a billion), or tera-bytes (a trillion),
these may seem like impossibly small machines compared to what we now have. But
the rule of thumb for memory back then was that a million bytes cost a million
dollars, so that the memory alone on their model 50 was a quarter of a million
dollars (in 1960s dollars), or the annual salary of 20 college graduates. The
processing speeds of these three machines were about 30 kips (that’s 30,000
instructions per second), 75 kips, and 150 kips. [For comparison, the latest
Intel i5 processors run at a clock speed of 3.5 GHz, or over 100,000 times the
speed of the model 30.]
The smaller machines would run only a single main program of about 40K (in what was
known as the background partition), plus a small foreground partition of just a
few K (the remaining memory was used for the operating system). This foreground
partition was used for things like PUPPIT (see section on printing). The larger
machine could run up to four programs of about 50-60K plus two foreground
partitions and the larger operating system needed for that configuration.
Memory Size and Overlays
Unbelievable as it may seem today, that amount of memory was pretty impressive at the time.
The prior generation of computers, the IBM 1400 series, started at 4K (which
occupied a cubic foot of individually wrapped cores) and maxed out at 16K. So
even our smallest machine, the model 30, had four times the memory of the largest
1400 series computer.
When you compiled a program (in COBOL, which was itself fairly new, having only been
defined in 1960), you had to declare the memory size you wished to be able to
run in, and the compiler would check to ensure that the object code generated would fit.
If you wished (as some of us did) to run a program which was larger than that,
you had to define “overlays”, i.e. break the object code up so that parts of it
could be dynamically loaded at one point in the program’s execution and
overlaid by another part of the code at a different time in the execution. This
required very careful planning and coding, but it enabled a program which would
not all fit in memory at the same time to still be executed. [Nowadays,
most operating systems use some form of “virtual” memory to do this for you and
they are constantly switching out your program for someone else’s without you
being aware of it, but back then you had to do this on your own.] It took a
fairly seasoned and experienced programmer to write those large programs and
properly manage the overlay processing so that you weren’t constantly swapping
the various overlays in and out.
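To make the idea concrete for modern readers, here is a small sketch in present-day C (purely illustrative; this is not how it was actually done on the 360, where the linkage editor laid out the segments). Two hypothetical segments, EDITIN and PRINTRPT, take turns occupying one fixed region of memory, so the program as a whole can be larger than that region even though only one segment is resident at a time.

    /* Illustrative only: two "segments" share a single fixed overlay area.   */
    /* The segment names, sizes, and contents are made up for the example.    */
    #include <stdio.h>
    #include <string.h>

    #define OVERLAY_AREA_SIZE 4096      /* the one region the segments share  */
    static char overlay_area[OVERLAY_AREA_SIZE];

    /* Stand-in for "read this segment from disk into the overlay area".
       In a real overlay scheme it was object code being loaded to the same
       addresses; here we just copy text to show the region being reused.     */
    static void load_segment(const char *name, const char *contents) {
        memset(overlay_area, 0, sizeof overlay_area);
        strncpy(overlay_area, contents, sizeof overlay_area - 1);
        printf("loaded segment %s into the overlay area\n", name);
    }

    int main(void) {
        /* Phase 1: the input-editing segment occupies the overlay area.      */
        load_segment("EDITIN", "code and tables for editing the input records");
        printf("  phase 1 uses: %s\n", overlay_area);

        /* Phase 2: the report segment overlays the same memory; phase 1 is
           gone, and bringing it back means reading it from disk again.       */
        load_segment("PRINTRPT", "code and tables for formatting the report");
        printf("  phase 2 uses: %s\n", overlay_area);
        return 0;
    }

That last point is where the careful planning came in: any time control needed a segment that was no longer resident, it had to be re-read from disk over whatever was there, so a poorly laid-out overlay structure spent its time loading segments instead of doing work.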