Tuesday, September 29, 2015

File Access Techniques

In my last blog my youngest brother commented, “Now you need to do a post on the transition from sequential access (mag tapes) to FAT access (or whatever they called the earliest version of non-sequential access).” While that might seem like a simple request, there are a lot of technical details that will make this blog entry fairly long and complicated.

However, before I get started, I think that it's interesting to note how this question illustrates the rapid rate at which computing technology has changed. Even though my brother is only 10 years younger than me, he uses the term "FAT access" to describe non-sequential access. The term FAT comes from PC technology and represents the computing age that he grew up in. But my computing memories are from the earlier mainframe era of the late 1960s, when personal computing was not part of our vocabulary – or even of our vision of the future.

I'm going to try to address this topic by writing about three different file access methods, going into a bit of technical detail on each, then finishing up with a couple of true stories from my early computer days that illustrate how one can get into trouble in dealing with files.

Sequential (Tape) Access

The standard magnetic tape when I entered the computer field was ½" wide and came on reels holding either 1200' or 2400' of tape. The recording density in those days was 200 bpi (bits per inch), 556 bpi, or 800 bpi (later improvements added 1600 bpi and 6250 bpi). There were either 7 or 9 tracks across the tape (depending on the type of computer). At low density, a single 2400' reel had a capacity of about 5MB (200 bpi x 12 inches/foot x 2400 feet) – not very much in terms of today's media, but the equivalent of 36 boxes of punch cards (2000 cards/box, 80 characters/card), a considerable compression ratio. It was possible with a magnifying glass to see the individual recorded spots on these low-density tapes and to manually "read" the contents.

However, as with all physical media, the device cannot match the processing speed of the computer. Therefore, we must employ some method of managing the difference in speed. There are two primary methods with magnetic tape.

Blocking – Tape drives cannot start and stop "on a dime". The tape is light so inertia is pretty low, but even so, to go from reading speed (100+ inches/second) to zero and then back up to full speed means that we need a certain amount of "blank" space between the blocks of data. This IRG (Inter-Record Gap) was .75". At 800 bpi, a punched-card-worth of data (80 characters) can be stored in .1". That would mean we have only .1" of data followed by .75" of gap – barely an eighth of the tape actually holding data – cutting the usable capacity of an 800 bpi reel from a nominal 23MB down to under 3MB. Therefore, we want to store multiple records together before each IRG. If we increase the size of each data block from 80 to 800 characters, we have 1" of data for every .75" of IRG. This also reduces the wear and tear on the tape (and the tape drive), as we are not stopping and starting for every record, but only every 10 records.
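To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python (a modern restatement, obviously – nothing like this existed then), using the figures above: a 2400-foot reel, 800 bpi, and a .75" gap.

```python
# Back-of-the-envelope model of usable tape capacity vs. block size,
# using the figures from the text: 2400-foot reel, 800 bpi, .75" IRG.

def usable_capacity(block_chars, density_bpi=800, reel_feet=2400, irg_inches=0.75):
    """Usable characters on one reel for a given block size."""
    tape_inches = reel_feet * 12
    block_inches = block_chars / density_bpi          # physical length of one block
    blocks = int(tape_inches // (block_inches + irg_inches))
    return blocks * block_chars

for chars in (80, 800, 8000):
    print(f"{chars:>5}-character blocks: {usable_capacity(chars) / 1e6:.1f}M characters")
# 80: ~2.7M, 800: ~13.2M, 8000: ~21.4M of the reel's 23M nominal capacity
```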

But one might logically ask: why not just make the blocks really big (say 10", or 8000 characters, long)? The answer is that we were limited by the size of main computer memory in those days. Before I finish this answer, let me discuss the other method used to help match computer and tape drive speeds.

Buffering – It's best if the computer never has to wait for a physical tape movement. One way to do this is to "read ahead", i.e. to have the tape drive read one block of data into main memory while the computer is processing the data from the previous block. This was nearly always done, but it doubled the amount of main memory required for buffers.

Back to limitations – if one has a simple program that is just reading from one tape, performing some simple procedure, and writing the results to another tape, the program will require four buffers (two for each tape), each of which might be 800 characters long, for a total of 3200 characters of data. Early computers had limited main memory – primarily because it was really expensive. An average computer might have only 32K of main memory. That means that even this simple program is using 10% of the memory just for data buffers, reducing the amount available for both the program processing the data and the operating system. If we made our blocks really big (the 8000 characters mentioned above), our four buffers would consume virtually all of the available main memory, leaving none for the program and operating system!
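In modern terms, the read-ahead scheme looks something like the following sketch; the file name, block size, and threading here are illustrative stand-ins for what the channel hardware and I/O routines did back then.

```python
# Minimal sketch of "read ahead" double buffering: a reader thread keeps the
# next block loaded while the main loop processes the current one.
import threading
import queue

BLOCK = 800  # 10 records x 80 characters, as in the blocking example

def reader(path, q):
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            q.put(chunk)          # blocks when both buffers are full
    q.put(None)                   # end-of-tape marker

def process(path):
    q = queue.Queue(maxsize=2)    # two buffers: one filling, one in use
    threading.Thread(target=reader, args=(path, q), daemon=True).start()
    while (block := q.get()) is not None:
        for i in range(0, len(block), 80):
            record = block[i:i + 80]   # deblock: carve out 80-character records
            ...                        # process one record
```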

Such were the limitations of even sequential tape processing in the early days.

Direct (Relative) Access

While there were some disk systems prior to the mid-1960s, they were very expensive and not commonly used. For example, the computer that I learned on (a CDC 3600 – one of the early super-computers) had high-speed drum storage. This drum was 4' long and about 1' in diameter and had a head per track. However, the amount of storage on it was only 32K words (there were six bits per character and six characters per word, so that's 192K characters in today's measures) – the same as the main memory of the computer. It was only used for swapping out programs so that we could run more than one program. Certainly it was too expensive to "waste" on any data storage.

One of the first removable-media disks, the IBM 1311, was introduced by IBM in 1962. Two years later, coinciding with the introduction of the IBM 360 line of computers, the 2311 was introduced; the 2314 followed one year after that. The 2311 had a capacity of 7M per storage device, the 2314 a capacity of 28M. These may seem pretty trivial these days, but that was a LOT of capacity back then.

One of the significant differences between tape and disk is that the disk is constantly spinning, so no IRG is needed. However, the size of one block of data is limited to the amount that can be stored on a single track (there are 2000 tracks per unit on a 2311, so that would be 3.5K). The second difference is that one was no longer limited to reading the blocks in sequential order – they could be accessed randomly. This gave rise to the first type of direct/relative access.

The term “relative” is used because each block of data was addressed relative to the first block of that file – for example, the 10th block is the same distance from the beginning regardless of whether the file occupies blocks 100-199 or 550-649. The question then is reduced to trying to determine which relative block a given record should be stored in.

Most real-world data is not nicely numbered from 1 to N, so we need an algorithm to "map" the key of the data into the range 1 to N (actually 0 to N-1). If the data key is numeric, then a simple way is to divide by the number of blocks and take the remainder. (Note that this is the origin of calling something a "customer number" even though many computer systems identify customers by a key which may contain more than just digits – in the early days we really did use only numbers, for ease of accesses like this one.) I won't go into any more detail, but there is a lot involved in doing that mapping in the best way possible.

But there was another problem – what if two records map to the same place? You can't just overwrite what is there. The usual solution is to start reading sequentially from that point and look for an available opening. For example, if our mapping was to take the last two digits of the key, we would initially try to store customer 429 in block 29. But if customer 229 was already there, then we would look at block 30, then 31, etc., until we found an unoccupied block and put customer 429 there (say, in block 32). In doing lookups we would follow the same logic, looking for customer 429 in block 29 and, if not finding it there, looking in subsequent blocks. Again, I won't go into more detail here. But the complexities of using these kinds of files meant that they were not heavily used.
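Here is a sketch of that store-and-lookup logic in Python, following the customer 429/229 example; the 100-block file and the record contents are made up for illustration.

```python
# Sketch of relative-file addressing with linear probing, following the
# customer 429 / customer 229 example above.

N_BLOCKS = 100
blocks = [None] * N_BLOCKS               # one record per relative block

def home_block(key):
    return key % N_BLOCKS                # "take the last two digits"

def store(key, record):
    b = home_block(key)
    while blocks[b] is not None:         # collision: scan for the next opening
        b = (b + 1) % N_BLOCKS
    blocks[b] = (key, record)

def lookup(key):
    b = home_block(key)
    while blocks[b] is not None:
        if blocks[b][0] == key:
            return blocks[b][1]
        b = (b + 1) % N_BLOCKS           # not it - keep scanning forward
    return None                          # hit an empty block: record not on file

store(229, "Acme Corp")                  # lands in its home block, 29
store(429, "Baker Inc")                  # block 29 is taken, so it probes forward
print(lookup(429))                       # found by the same scan: "Baker Inc"
```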

Indexed (Indexed-Sequential) Access

Although computing was taking off beginning in the early 1960s, most processing still used sequential files. The COmmon Business Oriented Language (COBOL) was initially made available in 1960 and quickly became the most often used language for solving business problems, but indexed files did not appear in COBOL until the release of COBOL-68. Indexed files were implemented by dedicating three separate areas of a disk to different purposes. The index area had one record for each block of data, listing the first and last key in the block as well as a pointer to it. The data area was where the blocks of information were stored; besides the data, each block held a few other pointers, including one to the overflow area (if that block had overflowed – more on this below). The overflow area consisted of unblocked records that could not fit in the data-area block to which they would normally be assigned.

If records were deleted, then they were simply marked (most generally by putting all binary 1s in the first byte of the record). If a record was added and there was no space available in the block to which it was assigned, then a record would be pushed into overflow. For example, if a block contained records with keys of 1, 2, 3, 5, 9 and a record with a key of 7 was added, then the 7 record would be placed in the main block (1, 2, 3, 5, 7) and the record with a key of 9 would be placed into overflow, with a pointer to it placed in the main data block. If another record with a key of 4 were then added, the main data block would become (1, 2, 3, 4, 5) and the record with a key of 7 would go into overflow; the main block would have a pointer to the "7" record and the "7" record would have a pointer to the "9" record. Periodically, the indexed file would be "unloaded" into a sequential file and then reloaded so that all records would be in the main data area again.
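A toy sketch of that insert-and-overflow behavior, reproducing the 1, 2, 3, 5, 9 example (the chain is modeled as a Python list rather than disk pointers):

```python
# Toy sketch of the indexed-sequential insert described above: a block holds
# five records in key order, and the displaced record goes onto an ordered
# overflow chain.

BLOCK_SIZE = 5

class Block:
    def __init__(self, keys):
        self.keys = sorted(keys)          # records in the prime data area
        self.overflow = []                # the overflow chain, in key order

    def insert(self, key):
        self.keys.append(key)
        self.keys.sort()
        if len(self.keys) > BLOCK_SIZE:
            pushed = self.keys.pop()      # the highest key is displaced
            i = 0                         # walk the chain to find its place
            while i < len(self.overflow) and self.overflow[i] < pushed:
                i += 1
            self.overflow.insert(i, pushed)

b = Block([1, 2, 3, 5, 9])
b.insert(7)                    # block becomes 1,2,3,5,7 and 9 moves to overflow
b.insert(4)                    # block becomes 1,2,3,4,5 and 7 chains ahead of 9
print(b.keys, b.overflow)      # [1, 2, 3, 4, 5] [7, 9]
```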

Horror Stories

A Sequential File Nightmare – In the early 1970s I was working for Uniroyal at their then corporate headquarters in Oxford, CT. We had a situation where we had one tape for each month of the year and we wanted them merged so that if there were records for the same customer in multiple months that the information would be combined in a single record. The person assigned to write this program was a supposedly experienced programmer by the name of Al Love who had come to us from New York City. He attempted to write a program that did a 12-way match, i.e. 12 tapes in and one tape out. Our large mainframe at the time had 16 tape drives, but it normally ran four concurrent programs and had four drives assigned to each of the “partitions”. In order for Al to run his program, the operators had to wait until all the programs currently running had completed, then reassign all the tape drives to a single partition. They would then mount 12 tapes on the input drives and one blank tape on the output drive. Inevitably, Al’s program would crash, then the operators would have to dismount all 13 tapes, manually reassign the tape drives to the four partitions and then start running “normal” programs. The operators were NOT happy when they saw a test of Al’s program in their input queue.

After two months of trying to get his program to run, management got fed up with Al and he was terminated. They felt they had wasted two months of salary for an experienced programmer. They reassigned the job to a new hire. He knew that he was not skilled enough to write such a program, so he wrote a two-way match program (two tapes in, one out), and ran it 11 times, each time merging in one more month of data.

As I revisited Al's problem later, I realized that he was trying to identify and properly handle 2^12-1 = 4095 possibilities (record in file 1 but no others, record in files 1 and 2 but not the others, …). That was going to require 4095 IF statements of 12 parts each – nearly 50,000 lines of code – just to identify all the situations. No way his solution was ever going to work! There was another way that reduced the central decision-making IF logic to less than 20 lines of code (a 2500-fold reduction in code).
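That other way is essentially a k-way merge: at each step, combine every record bearing the lowest key present on any input and advance only those inputs. Here is a sketch under that interpretation – the (key, data) record layout is my own stand-in, not Al's actual file format.

```python
# Sketch of the approach that collapses the 4095-case analysis: treat the
# job as a k-way merge driven by the lowest key currently in view.
import heapq

def merge(files):
    """files: a list of iterators, each yielding (key, data) in key order."""
    heads = []
    for i, f in enumerate(files):
        rec = next(f, None)
        if rec is not None:
            heapq.heappush(heads, (rec[0], i, rec[1]))
    while heads:
        key = heads[0][0]
        combined = []
        while heads and heads[0][0] == key:    # every file holding this key
            _, i, data = heapq.heappop(heads)
            combined.append(data)
            nxt = next(files[i], None)
            if nxt is not None:
                heapq.heappush(heads, (nxt[0], i, nxt[1]))
        yield key, combined                    # one output record per customer
```

Nothing in it depends on there being exactly 12 inputs – which is also why the new hire's repeated two-way match solved the same problem.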

The Long-running Indexed File Update – A few years later I was working for the Winchester Rifle division of Olin Corporation at their division headquarters in New Haven, CT. One of my roles was overseeing our online system, and I had written a number of utilities to help me in my work. One morning I came to work and was confronted by the operations staff, who were very worried that an update program they had started running early the previous evening was still running. They wanted to know if they should cancel it or if it was going to end soon. I pulled out the program compilation for the program that was executing. It was an indexed file update. It seems that we had recently acquired a large number of customers from another company, and the prior night all the new customers had been fed into the customer-add program.

One of my online utilities enabled me to enter a memory address and see what was in memory (while the program was running). Cross-partition reading was allowed (but cross-partition updating obviously was not). Knowing the addresses for the start of each partition (we were running a version of the operating system known as MFT, where the "F" stands for a Fixed number of partitions), I looked at the compilation listing, which showed where in the partition the various parts of the program and data would be stored – in particular, where the file buffers were located.

I located the buffers using my utility and was able to determine the key of the record that was currently being processed. In my examination of the program, it looked like the problem was that all these new customers had been assigned customer numbers in sequential order. Thus each new customer would land in the same block of the file, requiring a record to be pushed off into the unblocked overflow area. Since the input file was sorted in ascending order, each new record would then require that the computer trace down the overflow chain for that block and add the new record to the end. The first record would require only 1 read of the overflow chain, the 2nd would require 2 reads, the 3rd would require 3 reads, etc. The chain was growing longer with each new customer added. The program had run for 12 hours already, and based on my analysis of how many customers had been added and how many there were to go, I estimated it would take another two days to complete. This was not an acceptable solution! So we cancelled the running program and restored the customer master file (always take backups before running an update!).

The solution was pretty simple – I just sorted the file of new customers in descending order instead of ascending order. Now, for each add, the program would start tracing down the overflow chain, find that the first record on the chain had a higher key, and simply drop the new record into the chain right there. When we restarted the update, the estimated 60-hour job completed in just 5 minutes. Knowing how indexed files operated saved the day!
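A toy model of the difference makes the arithmetic plain; the 10,000-customer figure here is invented for illustration.

```python
# Toy model of the fix: count how many chain records get read past when all
# the new keys chain off the same block.

def chain_reads(keys):
    chain, reads = [], 0
    for k in keys:
        i = 0
        while i < len(chain) and chain[i] < k:   # walk down the overflow chain
            i += 1
            reads += 1
        chain.insert(i, k)                       # drop the record in right here
    return reads

n = 10_000
print(chain_reads(range(n)))          # ascending input: 49,995,000 reads
print(chain_reads(range(n, 0, -1)))   # descending input: 0 reads past the head
```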

Conclusion

In the forty years since the above incidents, file systems have grown ever larger and more complicated. We now tend to use databases for many things and let the database software deal with all the details. But "under the covers" some of the same things mentioned above are still going on – we just don't have to deal with them ourselves.


I certainly learned a lot during my involvement in the early days of computing. And knowing those details often helped with my understanding of what was happening behind the scenes as technology advanced and got more complicated. But I’m glad to be retired and not having to deal with it any more.

Sunday, September 27, 2015

Recovering Deleted Files

There have been almost daily articles in the news about Hillary Clinton’s private server. Most recently there are articles about the efforts by the FBI to recover the deleted emails on that server. This article is not about any of the political ramifications or legal consequences that she may be facing. Rather, it is about my own story and how I performed this same sort of recovery process thirty years ago. Admittedly, the technology has changed greatly over the last thirty years and the FBI has tools for recovery that were not even dreamt about thirty years ago, but some of the principles are the same. So if you’re interested in this topic, please read on.

In order to keep this posting to a reasonable length, I’m going to refer to a number of topics that you can do further research on using Wikipedia. Specifically, feel free to read articles on these topics. I have marked each of them with an “*” the first time that they appear.
·         Savannah River Site
·         Turbo Pascal
·         Bernoulli Box
·         Norton Utilities

The Background

In the early 1980s, Air Products had begun a new venture that involved labeling their cylinders with bar codes so they could track individual cylinders instead of just counting all the cylinders of a particular type. In addition to the bar code label, we had to develop software (a) that ran on a handheld scanner that could read the bar codes, and (b) that ran on a local standalone PC that could take the results of the scanning, keep it all in a rudimentary database, and produce meaningful reports. The initial version of this was written by someone else, but when I took over the project I rewrote the PC software in Turbo Pascal*. It was a pretty robust system for the PC of the day (approximately 1984-1985). The database was stored on a Bernoulli Box* to provide ease of backup (essentially using the removable disks of the Bernoulli Box as removable hard drives, which did not yet exist).

Because of the success of this system, some of our customers asked if they could purchase/license the software for tracking the cylinders within their own facility. Accordingly, I made some customizable variations to the software so it could be used that way and we licensed it to a handful of customers (I believe perhaps the only software that our IT department ever licensed to someone else).

The Savannah River Site* (SRS) was one of these customers. Air Products delivered the cylinders to a central loading dock and the customer then delivered the cylinders to other locations within SRS (they had over 300 square miles of facility).

The Problem

One afternoon our local South Carolina office received a frantic phone call from SRS. They had, through a series of missteps, deleted the primary cylinder master file from their system. I won’t go through all the steps, but all they had remaining was an empty file. Since they relied on this master file to keep track of the several thousand cylinders in the facility, this was tragic to them. They asked if we could help them recover it. Working with our sales manager and with my own manager in IT, I thought that I could – even though I had never attempted anything like that before. But I trusted my instincts.

In record time, especially for a government facility, they prepared, and got approval for, a purchase order that would pay for my expenses – the flight to/from South Carolina and my lodging for two nights. I grabbed my trusty copy of Norton Utilities* and flew down. After staying overnight in a motel, the sales manager met me early in the morning and we drove to SRS.

It was a two-hour process to check in and gain admittance to SRS – including fingerprinting, background checks, etc. On the other side of the admissions building, the SRS manager met us and drove us to the building next to the central dock where the PC running our Cylinder Tracking System was located. I remember that he had a lanyard around his neck with over a dozen security badges hanging from it, as each area of the site required separate access badges and codes. We were only allowed access to the single building where we needed to do the work.

The Undelete Process

(Note that the following is a VERY simplified version – please don’t criticize the description.)

Files are stored on a disk by having a directory entry with a name (and several other attributes) and a pointer to the first sector on the disk that the file occupies. If a file is more than one sector long, there is a pointer at the end of the first sector to the next, etc., with the final sector marked with a "last sector" marker. In a process that is basically unchanged over the last thirty years, when you delete a file the PC does not physically erase that portion of the disk. Rather, it just marks the file as deleted and removes access to it. The directory entry is modified (by changing the first character to a "?"), and the sectors which the file occupied are returned to the "available sector pool".

If all that had been done to this particular PC was that the file was deleted, then it's a fairly easy process to locate the marked directory entry, restore the first character of the name, and follow the list of linked sectors, removing them from the "available sector pool" as you go. But this was not the case at SRS. Instead, they had done further damage by creating a "new" file with the same name, but with no content. So I had to use the more advanced options.
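In modern pseudo-Python, the simple case looks something like this sketch (the structures and file name are illustrative – real DOS kept the sector chain in the file allocation table itself, and Norton Utilities did all of this work for you):

```python
# Illustrative model of a DOS-style delete and the simple undelete. Nothing
# is erased on delete: the directory entry's first character is overwritten
# and the file's sectors return to the free pool with their data intact.

directory = {"CYLMAST.DAT": [120, 121, 305]}   # name -> chain of sector numbers
free_pool = set()

def delete(name):
    chain = directory.pop(name)
    directory["?" + name[1:]] = chain     # first character replaced, rest kept
    free_pool.update(chain)               # sectors marked available, not wiped

def undelete(marked_name, first_char):
    chain = directory.pop(marked_name)
    directory[first_char + marked_name[1:]] = chain   # restore the name
    free_pool.difference_update(chain)    # pull the sectors back out of the pool

delete("CYLMAST.DAT")
undelete("?YLMAST.DAT", "C")   # easy - as long as nothing has reused the sectors
```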

This involved scanning the "deleted" sectors in the available sector pool, looking for any which contained data that looked like the database records I knew, and noting the sector numbers and what records each contained (since the file needed to be in ascending key order when done). This took a few hours, and when I finished I had a couple of sheets of paper with sector numbers and contents written down. I then went back through these sectors, ensuring that I had them all in the right order and that I had not skipped anything.
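A sketch of what that scanning step amounts to; the 512-byte sector size and the record test are my assumptions, and in reality it was done by eyeballing sectors in Norton Utilities with pencil and paper at hand.

```python
# Sketch of the scanning step: walk the free-sector pool and flag sectors
# whose contents look like cylinder records.

SECTOR = 512

def looks_like_record(raw):
    text = raw.rstrip(b"\x00")
    # e.g. printable ASCII with a plausible numeric key field up front
    return text[:6].isdigit() and all(32 <= b < 127 for b in text)

def scan_free_sectors(disk_image, free_sectors):
    hits = []
    with open(disk_image, "rb") as disk:
        for s in free_sectors:
            disk.seek(s * SECTOR)
            raw = disk.read(SECTOR)
            if looks_like_record(raw):
                hits.append((raw[:6].decode(), s))   # (record key, sector number)
    hits.sort()     # the file had to be rebuilt in ascending key order
    return hits
```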

Now for the crucial "undelete" part. I first created a new file just one sector long, then deleted it. This was going to be the root for my work. I then used the advanced options of Norton Utilities (which I had never used before). I first "undeleted" the new file, and then, sector by sector using my notes, added these sectors to the end of the first, recreating the chain of sectors that I had marked on paper. With each addition I received a warning from Norton Utilities noting that I was adding more sectors than were in the original file – I just kept ignoring the warning. After nearly another hour, I had successfully recreated the overwritten file. As I then scanned the newly created file, I could see that it wasn't totally perfect – there were a few cases where a sector or two had been reused by something else during the original SRS procedure fiasco, but I figured I was 98% successful.

Consistency Checking

I was almost done. First I used some other Norton Utilities options to check the integrity of the disk, verifying that there were no file problems, that the list of sectors in all the files was the exact complement of the list of unused sectors, etc.
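Conceptually, that complement check amounts to something like this sketch (the names are illustrative, not Norton's actual options):

```python
# Sketch of the disk-integrity check: every sector is either in exactly one
# file's chain or in the free pool - never both, never neither.

def check(disk_sectors, file_chains, free_pool):
    used = [s for chain in file_chains.values() for s in chain]
    assert len(used) == len(set(used)), "sector cross-linked between files"
    assert set(used).isdisjoint(free_pool), "sector both used and free"
    assert set(used) | set(free_pool) == set(disk_sectors), "orphaned sectors"
```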

Then, I ran a series of integrity checks that I had built into the Cylinder Tracking System. Among other things, this identified the few missing records that I had not been able to recover. Those records could be re-added based on duplicate information elsewhere in the system, but the product in the cylinder and the actual cylinder serial number would be missing (something that would correct itself over time as each such cylinder was refilled and scanned again).

I backed up the entire system onto a spare Bernoulli Box cartridge and turned the system back over to the SRS manager – to his absolute delight!

Lessons Learned

·         You can never protect yourself from user error, because you never know what a stupid user might do.
·         Having the right tools is essential, but they are only useful if you know how to use them.
·         Government security procedures are ridiculous (two hours to get in?)

<rant on> I said in the beginning that I was not going to comment on Hillary Clinton's private server fiasco. But after spending the time to write all the above, I feel that I need to say something. I don't know whether Hillary was really setting up her own server because she felt that it was easier than dealing with government security procedures, or whether she was being malicious. But either way, she has made herself look like a "stupid user" and has jeopardized classified government information in the process. Her comments about "wiping" the server using a cloth just make her look even more stupid. I'm fairly certain that the FBI will find even more incriminating evidence on her server before they are done. So whether from maliciousness, stupidity, or just feeling that she doesn't have to play by the rules, she continues to show herself unqualified to be the President of the US. I'm sure that she's a smart person, but this time she's out of her league. She may be able to avoid jail time, but she'll likely end up as a convicted felon before this is all done. <end of rant>


Tuesday, September 15, 2015

Wolcott History – Finch Brook

In the website of the Wolcott Land Trust on the page regarding the Finch Brook Preserve (http://www.wolcottlandct.org/preserves/finch-brook-preserve/), there is a statement “About a half mile northeast of the modern-day preserve lived one “R. Finch”, after whom Finch Brook was perhaps named.” I decided that I would see if I could unravel this mystery. Here is what I have found.

In the early years of Wolcott, the Finch family was well represented. Daniel Finch (1719-1779) died as a soldier during the Revolutionary War. His wife, Jerusha, moved to Wolcott after his death and raised their family there. In particular, their sons Gideon (1743-1815) and Eleazer (1748-1830) were living in Wolcott in the early 1800s.

In the 1868 map of Wolcott there are three Finch families identified in various parts of Wolcott. These families are:
·         Benjamin Franklin Finch (1821-1904) lived in the western part of town. He was a grandson of Gideon.
·         Lucius Ransom Finch (1813-1898) lived near the center of town. He was a brother of Benjamin and a grandson of Gideon.
·         Ruth (Sarah) Finch (d. 1872) was the widow of William Finch (1787-1838). He was the son of Eleazer Finch. Their daughter (their only offspring) married Albert Hitchcock. In 1870 the Frisbie, Hitchcock, Todd, and Upson families lived in consecutive houses on Todd Road.

There were other Finch family members living in Wolcott, but many of them were females who had married into other Wolcott families. For example, Chloe Finch, a sister to Benjamin and Lucius, married Albin Alcott, the great-grandson of John Alcox, one of the earliest residents of the town in 1731.

The Finch families were not as prolific in having children as some of the other early Wolcott families. Between early deaths (a common occurrence in those days), female offspring marrying into other families, and other family members moving away, by 1900 the only Finch family still living in town was that of Benjamin and his wife, who were in their late 70s. There were no Finch families in town in the 1910 census.

Getting back to my reason for doing this research, Finch Brook is almost certainly named for William Finch who lived in that part of Wolcott. William moved to Wolcott with his family at a young age and married in 1813, so it was probably sometime in the 1810s that he and Ruth purchased a farm on Todd Road which included the property behind their farm on which the brook was located.

Of course, I cannot leave this subject without mentioning my connection to the Finch family. The grandfather of Gideon and Eleazer was Daniel Finch (1694-1766) who was from New Haven. Daniel is my great*6 grandfather. Thus, William was my 2nd cousin, 5 times removed and Benjamin and Lucius were my 3rd cousins, 4 times removed.



Monday, September 14, 2015

My Heart Attack

Yesterday I received my copy of the October 2015 Reader’s Digest. As is my practice, I generally read it cover to cover the day I receive it. One of the articles was titled, “The Race To Beat Your Heart Attack.” I was especially interested in this article as I am a heart attack survivor. And I found the article to be very pertinent to my own experience.

I wrote about my experience in my autobiography, “My Life,” but most of the people viewing this on my blog will not have read it. So although the basis of what follows is that book, I think it’s worth repeating. If it even helps one person survive or even manage a heart attack situation in the future, then it will have been worth the retelling.

Here are a few quotes from the Reader’s Digest article:

“In recent years hospitals have slashed the time it takes to clear a blockage in a patient’s arteries. Often it took more than two hours for blood to flow to a patient’s heart again. Now nearly all hospitals treat at least half their patients in 61 minutes or less. From 2003 to 2013, the death rate from coronary heart disease fell about 38 percent…”

“In November 2006, the American College of Cardiology announced a national campaign…”

“They looked for opportunities to shave a minute or two here and there. They had paramedics do an electrocardiogram and transmit it directly to the emergency room”

Since my heart attack occurred in 2005, it was before most of these changes took place in many hospitals. However, the hospital where I was taken was already beginning to experiment with these changes and so I am a survivor. Here is my story.

Heart attack 

In January of 2005 I had a major medical incident.  It was a Sunday afternoon.  My wife Donna and I had gone shopping after church and were at the Trexlertown Plaza.  We were in Kohl’s.  As is often the case on a Sunday afternoon, I was getting tired and ready for my Sunday afternoon nap – that’s usually the only day I could take one.  So I went out to the car to nap while she finished shopping.

I awoke perhaps a half-hour later, but wasn't feeling quite right.  I thought that perhaps I just needed to wake up a bit more, so I lay there quietly.  But I wasn't starting to feel better; in fact, I was starting to feel worse, with pain growing in my left arm and the left side of my chest.  After considering this for perhaps a minute – all the while the pain worsening – I came to the conclusion that these were classic symptoms of a heart attack.  Rather than try to find Donna, I pulled out my cell phone and dialed 911.  I told the operator that I thought I was having a heart attack, told her where I was located in the parking lot, and said, "Come and get me."  I also told her that my wife was in Kohl's.  Before the ambulance arrived, Donna came back from shopping.  I was told later that the 911 operator had called Kohl's and they had paged Donna, but she was already leaving the building by that time.

I should mention here that my medical exposure to heart attacks was based on the TV show "Emergency!" which aired from 1972 until 1977. But the scenarios from that TV show were about to be played out in real life for me. I don't recall anyone on the show calling 911 on themselves. But I wasn't going to wait for someone else when I knew I needed help – fast! I was also about to experience things like "diaphoretic," a term which I had often heard on the show but had never bothered to check out the meaning of. When the EMTs applied the term to me, I recognized it, and when I checked it out later I found it meant that I was sweating profusely even though I was not overheated. But at the time, I was dealing with this emergency both physically and mentally, and a lot of what was going on was just being automatically filed away in my mind for later processing.

By the time the ambulance arrived – only a few minutes later – I was no longer able to exit the car on my own.  The EMTs took me out of the car, put me on a stretcher, loaded me into the back of the ambulance, and told Donna that she could meet them at the ER.  They removed some of my clothing (they just use sharp scissors – I sacrificed my shirt, pants, and even my belt that afternoon – a small price to pay!), and, putting a couple of leads on me, they connected me to an EKG machine.  Meanwhile they first gave me a couple of baby aspirin, then a couple of nitrate pills to hold under my tongue, and asked if that helped – it didn't.  They had a new technology in the ambulance at the time – the ability to send the EKG results directly to the ER via the EMT's cell phone.  Thus, when I got there they were already waiting for me; they had confirmed that I was indeed having a heart attack.  In fact, they were already preparing the cath lab for me – they just had to clean up after the prior patient and put me next on the schedule.

They gave me a shot of morphine, but it really didn't do much (when they asked how much pain I was feeling on a 1-10 scale, I said, "I've had a kidney stone, so on that scale this is only an 8.")  Shortly after Donna arrived, they wheeled me out the door to the cath lab.  After swabbing me down with what seemed like a gallon of disinfectant (it colors the skin yellow), they made an incision in the artery in my groin and threaded the catheter up the aorta and down into the coronary artery.  (I was not paying much attention at the time, just trying to deal with the pain – this is all what they told me later.)  They confirmed that I had 100% blockage of the LAD (Left Anterior Descending artery) and 75-80% blockage of the Circumflex artery.  They dealt with only the former at this time.  The surgeon put a wire mesh stent on the end of the catheter and pushed it through the blockage, then inflated the balloon on the end of the catheter, which expanded the mesh and opened up the artery (it compresses the fatty deposit that is the blockage against the artery wall).  The pain immediately disappeared!

They then closed up the incision and sent me upstairs to the cardiac care unit (CCU) for the next few days of observation.  Total time from onset of initial symptoms to the opening of the blockage was less than an hour.  It was this quick reaction that probably saved my life.  I've read that the survival rate from this type of blockage is about 7%.  That's why it's known as the "widow maker".  I am not only part of that 7%, but I have zero long-term issues.

You can see how my experience so much mirrors the Reader’s Digest article. Even though this was before the 2006 campaign by the American College of Cardiology, I was able to experience having the EMTs take the EKG, the quick response team in the ER, etc. I only knew that it was new at the time because while I was in the CCU someone from the hospital asked me to sign a form to allow them to use my case as training material since those kinds of things were still experimental.

While I was in the CCU that afternoon there was one complication. They cut into a major blood vessel in the groin to insert the catheter, the stent, etc. Afterwards they stitch it closed, then put a heavy bag of sand over the incision point – both to immobilize you and to keep the incision point stable. But even with those precautions, sometimes the blood vessel still "leaks." It did in my case, and I started getting a large bruise as the blood flowed under my skin. The nurses recognized this immediately. The "treatment" is to express the area and stop the blood from pooling around the incision point – giving you a larger "bruise," but one that will eventually go away. With men they also have to try to move the blood away from the scrotum, which is only a few inches away; otherwise the scrotum can fill with blood and become swollen and quite painful. So, like many medical procedures, you just have to put aside any ideas of modesty as the nurse massages the area to spread the blood around until the blood vessel stops leaking. I had a large discolored area when it was all over, but that was a small price to pay.

On Wednesday I made my second trip downstairs to the cath lab to get the other blockage cleared (again via a stent).  This time I was wide awake and very aware of what was going on.  The doctor has three monitors.  One displays my vital signs, one is a real-time x-ray of the area around the heart so he can see where the catheter is, and the final one is like the second but can be "frozen".  When he is nearing the area where the blockage is, he can squirt a bit of dye out the end of the catheter that gives contrast to the arteries and "lights up" the blood flow – he then freezes that image on the third monitor.  He can then guide the catheter to that exact spot before expanding the stent.  Amazing technology!  By stretching my neck and looking around the large x-ray machine that was over my chest, I could see all three monitors and know what was going on.  The spatial exercise of looking at the picture on the monitor and translating that to where inside my chest and heart the catheter was actually located was interesting.

On Thursday I was released from the hospital.  The next Sunday I was back in church, like I’d never gone through this, and people were amazed.  Of course I couldn’t drive for a couple of weeks and was out of work for about six weeks, but that was not much compared to the second chance on life that I’d been given.

I'd always wondered how strong my faith in God really was – if my life were threatened, would I deny Him?  Now I know.  While all the events – from initial symptoms to the surgery – were happening, and it was a good possibility that I might not survive, I was absolutely at peace.  There was no panic, no prayers of desperation, none of that.  I was confident in my relationship with God and in the knowledge that no matter what happened I would be seeing Him whenever my life here on earth ended.  Now, ten years later as I write this, I still have that absolute assurance that He is with me and I am with Him.



Thursday, September 10, 2015

Wolcott History – Rivers and Brooks

My blog about Lakes and Ponds was quite well received, so I thought I’d do another about the bodies of water that tie all the lakes and ponds together.

Most of Wolcott is part of the Mad River watershed, i.e. the Mad River, which flows out of town into Waterbury, carries away most of the rainfall that lands on the town. It's only the fact that the town boundaries are straight lines that prevents them from matching the limits of that watershed on three sides of the town. In the north, Cedar Lake extends into Bristol; in the west, Welton Pond extends into Waterbury; and in the northwest, the area around Allentown Road is part of the Hancock Brook watershed.

Here is the same list of lakes and ponds from the prior posting, but showing how they are connected by various rivers, brooks, and streams. I have put the lake/pond names in bold and the river/brook/stream names underlined.

Primary Mad River

Russell’s Pond, unnamed stream, Cedar Lake, Mad River, Mad River mill ponds, Mad River, Scovill Reservoir, Cornelis Pond, Mad River


Secondary feeds into Mad River

Welton Pond, Col. Richard’s Brook (Old Tannery Brook), Chestnut Hill Reservoir, Old Tannery Brook, Lions Club Pond, Old Tannery Brook, Mad River

Hitchcock Lake(s), Hitchcock Lake Brook, Teriaults Ice Pond, Lily Brook, Todd Road unnamed pond, Lily Brook, Finch Brook, Mad River

Wolcott Sports Complex unnamed pond, unnamed stream, Mad River

Clintons Pond, unnamed stream, Mad River

Scovill Road unnamed ponds, unnamed stream, Mad River


Tertiary feeds

Evers pond and unnamed pond, unnamed stream, Lindley Brook, Scovill Reservoir

Grilley Road unnamed pond, unnamed stream, Old Tannery Brook


There is, however, one part of Wolcott that is not part of the Mad River watershed – the eastern slice of town. If you were to draw a [nearly straight] line that began at the top of the hill behind the Russell Preserve (1000+ feet in elevation), ran down across the western end of Long Swamp Road, joined Woodtick Road near Alcott School, followed Woodtick Road as far as Center Street, then went down County Road and continued on to East Street, you would chop off the part of town that is not part of the Mad River watershed.

In this eastern part of town the water flows either north or east, not to the southwest as does the Mad River. Here are the lakes and ponds in this part of town.

Dunham Mill Pond, unnamed brook to the north into Bristol, Pequabuck River

Bristol Fish and Game Club Pond, Cussgutter Brook to the east into Southington

Roaring Brook unnamed pond, Roaring Brook, New Britain Reservoir, Roaring Brook (North Branch Hamlin Brook) to the east into Southington

Southington Reservoir No. 2, Humiston Brook to the east into Southington


The Mad River eventually empties into the Naugatuck River which feeds into the Housatonic River. All the brooks in Southington are part of the Quinnipiac River watershed which flows south toward New Haven. The Pequabuck River in Bristol flows northeast and is part of the Connecticut River watershed.

Thus the northeastern corner of Wolcott is actually the dividing spot between three different watersheds, each one of which empties into Long Island Sound in a different spot (Bridgeport, New Haven, and Old Saybrook). I believe that makes Wolcott unique as a Connecticut town!



Saturday, September 5, 2015

Genealogy Story – Erskine Russell

I’ve written blogs about my father, Vernon Russell (see http://ramblinrussells.blogspot.com/2015/02/genealogy-story-nomadic-life-of-vernon.html), and some about my great-grandfather, Louis Russell (see http://ramblinrussells.blogspot.com/2015/03/genealogy-story-william-merchant-russell.html), but not about my grandfather, Erskine Russell. So I thought it was time to rectify this oversight.

Erskine was born on 12 Sep 1894 in Sherman, Connecticut, to Louis and Anna Pauline Russell. He was the first of their six children. In 1903, when Erskine was only 9, Anna died. The three younger children were sent to live with relatives, but Erskine and the two other older children remained with their father. After finishing 8th grade, Erskine dropped out of school and began working as a farm laborer. In 1910, when he was 16, his father re-married. The family lived in New Milford.

In 1914, Erskine married a young lady, Vera Levy, who also lived in New Milford with her mother and sister. Vera's father had been Jewish, and she grew up in Brooklyn, NY, with all her Jewish relatives. But when her father died in 1910, her mother, who was not Jewish, left the Jewish community and moved to New Milford to be closer to her own family. As it was 1914 and the start of WWI, Erskine and Vera moved to a larger city, Bridgeport, and Erskine began working as a foundryman in a factory there. They had two children, Dorothy, born in 1916, and Vernon, born in 1920.

Meanwhile, Erskine's father had also left New Milford. After a short time with the New England Lime and Cement Company, he worked for the Tucker Electric Construction Company and helped build the new Scovill Main Plant Power Station in Waterbury. When the power station was completed in 1918, he began working for Scovill as the operator of the big control board in the power station (a position he held for the next 27 years). Things were not going well between Erskine and Vera, and in 1922 he abandoned his family and moved to Waterbury, CT, where he began living with his father and step-mother. His father got him a job working for Scovill – in the power station, with his father as his supervisor.

After a few years of relative stability, Erskine and Vera decided to try to get back together and Vera moved to Waterbury where the family rented a house a few blocks from Erskine’s father and step-mother. They tried that for two years (mid-1926 to mid-1928), but it did not work out any better than before. They divorced – Vera moved back to Bridgeport with the children and Erskine, now age 34, moved back home with his father and step-mother. Erskine would not see his children again for 9+ years.

In 1933, Erskine married a second time, to Elizabeth Evans. Thus he was finally able to move out of his father's home for the last time. Elizabeth had been born in Sheffield, England. Like Vera, her father had died when she was fairly young, and she stopped schooling after 9th grade to begin working as a domestic servant and dressmaker. She had immigrated to the US in 1923 at the age of 38 to begin a new life with her uncle and aunt who lived in Waterbury. She arrived in the US with $60 and the promise of a place to live. When she married Erskine she was a 48-year-old spinster. But Erskine, then 39, was not looking for a love match; he wanted someone to take care of him other than his father and step-mother, who were then in their 60s.

In 1937, Erskine's children, Dorothy and Vernon, also moved to Waterbury – Dorothy for a new job in the city, and Vernon to complete high school. They both lived with their grandfather, Erskine's father, so even though Erskine was living on the other side of the city, he could finally see them again. The following year Dorothy married a man living only a block away, and Vernon graduated from high school and began working at Scovill (jobs were scarce, Scovill was only a few blocks away, and the company had a practice of hiring children and relatives). However, he did not work in the power house with his father and grandfather, but in the drafting department.

In 1944, Vernon was drafted and went to war with the US Navy in the South Pacific. While he was away for two years much happened. In April of 1945, Erskine’s step-mother passed away. That fall his father, Louis, then age 74, retired from Scovill (after 27 years) and he passed away just a few months later. Erskine, finally freed from working under the supervision of his father, also left employment at Scovill (after 23 years) and began working as a security guard for Pinkerton – a job he held for the rest of his working life.

For the next few decades, things settled down and life moved on. Both of his children, Dorothy and Vernon, were married and had families. So Erskine and Elizabeth had a good time interacting with their grandchildren. And with grandchildren in common, Erskine even got to see his first wife, Vera, and her second husband from time to time.

In 1963, after several years of living in various mental institutions, Erskine’s first wife, Vera, died. Her second husband had passed away at the age of 93 a few years earlier. In January 1970, at the age of 75, Erskine passed away. He and Elizabeth had been married for 37 years. She died in August of that same year at the age of 85.


As I re-read the above, it is pretty factual and does not seem to have much emotion. But that is pretty representative of the relationship that I had with my grandfather. With my mother’s parents we had a lot of interaction. It was not uncommon that we would be left with them – either for a more planned event such as a Christmas party, or for the times that my mother was having another baby and we older children stayed with them for a few days. But visits with my father’s parents were strictly limited to formal visits. Generally the adults would have a short visit in their living room and the children were not included. My father might take us for a walk down into the gully in the woods behind their house to see the stream which flowed through the area, but I don’t recall that my grandfather ever accompanied us. He was only four years older than my mother’s father, but with Grampy Pierpont we would take walks, he even took me and my cousin Dave for a camping/hiking trip. But I can’t envision Grandpa Russell doing any of those kinds of things.

Since I never had the opportunity to know him as an adult (he died when I was in college), I'm not sure how much of this was due to the fact that both he and Vera had second marriages to people who were so much older (e.g. when I was 6, Grampy and Grammy Pierpont were each 56, but Grampa and Nana Russell were 60 and 69, and Grandma and Bampa Rogers were 59 and 89). Or maybe it was due to the somewhat unusual relationship that he had with his own father, with whom he either lived or worked directly under until he was over 50 years old.


Genealogy Story – Grandparent Cousins

Since my father, grandfather, and great-grandfather all worked for Scovill Manufacturing Company in Waterbury, one of the Facebook groups that I belong to is the Scovill Bulletins Ancestors Network. Copies of most of the old employee newsletters (the Bulletin) are available online to peruse. Until recently the war years (1944-1945) were missing from the collection, but the group administrator acquired copies of them and has started scanning all the individual articles and posting picture collections in the group.

In mid-1944, the president of the company completed 50 years of service and "retired" by stepping into the Chairman of the Board position. There was an article about the new president, Leavenworth Porter Sperry. As I viewed his name and picture, the thought came to me that I had people with the name Sperry in my family tree, and that with such an unusual name as Leavenworth, he should be fairly easy to trace to see if I was related to him. I signed on to Ancestry.com and quickly found a family tree for the Sperry family and his family line as follows:

Richard Sperry (1605-1698) b England, d New Haven CT
Richard Sperry (1652-1734) New Haven CT
James Sperry (1693-1775) New Haven CT
James Sperry (1718-1789) New Haven CT
Timothy Sperry (1747-1836) New Haven CT
Hezekiah Sperry (1776-1826) Burlington CT
Corydon Stillman Sperry (1810-1856) Waterbury CT
Mark Leavenworth Sperry (1842-1926) Waterbury CT
Leavenworth Porter Sperry (1883-1958) Waterbury CT

Thus, Richard Sperry (1605) is the great*6 grandfather of Leavenworth Sperry.

I then went to my own family tree to see if Richard Sperry was in it and what my connection was to him (it's been a few years since I was actively working in that part of my family tree). I quickly found Richard (1605), thereby confirming that I was related to him. Curiously, I had documented three of his family lines, through his sons Richard (1652), John (1649), and Daniel (1665). (Richard (1605) had a very large family – these were three of over a dozen children.) In tracing each of these lines I found the following four connections to myself (two of them running through Richard (1652)).

Richard Sperry (1605-1698)
Richard Sperry (1652-1734) New Haven CT
John Sperry (1683-1754) New Haven CT
Desire [Sperry] Wooding (1732-1812) New Haven CT
Huldah [Wooding] Perkins (1763-1797) Bethany CT
Anna [Perkins] Merrill (1792-1881) Bethany CT
Nathan Merrill (1823-1909) Waterbury CT
Annie [Merrill] Pierpont (1858-1898) Waterbury CT
Harold Pierpont (1898-1969) Waterbury CT [his mother died after giving birth]
Sylvia [Pierpont] Russell (1924-2012) Waterbury CT
Myself [and making Richard (1605) my great*8 grandfather]

Richard Sperry (1605-1698)
Daniel Sperry (1665-1750) New Haven CT
Abel Sperry (1700-1776) New Haven CT
Joseph Sperry (1737-1801) Wallingford CT
Moses Sperry (1765-1808) New Haven CT
Anna [Sperry] Talmadge (1800-1888) Cheshire CT
Stephen Talmadge (1843-1924) Prospect CT
Alice [Talmadge] Blackman (1870-1929) Prospect CT
Sara [Blackman] Pierpont (1898-1979) Prospect CT
Sylvia [Pierpont] Russell (1924-2012) Waterbury CT
Myself [and making Richard (1605) my great*8 grandfather]

Richard Sperry (1605-1698)
Richard Sperry (1652-1734) New Haven CT
Moses Sperry (1681-1754) New Haven CT
Anna [Sperry] Humiston (1711-1781) New Haven CT
Anna [Humiston] Sperry (1740-1814) New Haven CT
Moses Sperry (1765-1808) New Haven CT
            And continuing as above
Anna [Sperry] Talmadge (1800-1888) Cheshire CT
Stephen Talmadge (1843-1924) Prospect CT
Alice [Talmadge] Blackman (1870-1929) Prospect CT
Sara [Blackman] Pierpont (1898-1979) Prospect CT
Sylvia [Pierpont] Russell (1924-2012) Waterbury CT
Myself [and making Richard (1605) my great*9 grandfather]

Richard Sperry (1605-1698)
John Sperry (1649-1692) New Haven CT
Elizabeth [Sperry] Hotchkiss (1683-1760) New Haven CT
Gideon Hotchkiss (1716-1807) Wallingford CT
Jesse Hotchkiss (1738-1776) Waterbury CT
Charity [Hotchkiss] Russell (1761-1851) Waterbury CT
Mary [Russell] Sperry (1786-1857) Wallingford CT
Anna [Sperry] Talmadge (1800-1888) Cheshire CT
           And continuing as above
Stephen Talmadge (1843-1924) Prospect CT
Alice [Talmadge] Blackman (1870-1929) Prospect CT
Sara [Blackman] Pierpont (1898-1979) Prospect CT
Sylvia [Pierpont] Russell (1924-2012) Waterbury CT
Myself [and making Richard (1605) my great*10 grandfather]

Thus it appears that not only am I related to Leavenworth Sperry, but I am related in several different ways because of intermarriages of the various descendant lines. In particular:

(1)   When Joseph Sperry (1737) married Anna Humiston (1740), he was marrying the daughter of his second cousin (i.e. his 2nd cousin once removed). A legal marriage, but still pretty close as family members go.

(2)   When Moses Sperry (1765) married Mary Russell (1786), he was marrying his 3rd cousin twice removed.

(3)   When my grandfather Harold Pierpont married my grandmother Sara Blackman, he was marrying his 7th cousin. It's unlikely that they knew this, as the Sperry family name was pretty far back in my grandfather's family tree. (The arithmetic behind these cousin labels is sketched below.)
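For anyone who wants to check my labels, the arithmetic is simple: if two people sit a and b generations below their common ancestor, they are min(a, b) - 1 cousins, |a - b| times removed. A sketch:

```python
# The cousin arithmetic used above. (Direct lines and siblings, where
# min(a, b) is 0 or 1, are kin of another kind and aren't handled here.)

def cousin(a, b):
    degree, removed = min(a, b) - 1, abs(a - b)
    ordinal = f"{degree}th" if degree > 3 else ("1st", "2nd", "3rd")[degree - 1]
    return f"{ordinal} cousin" + (f" {removed}x removed" if removed else "")

print(cousin(3, 4))   # Joseph (1737) and Anna Humiston (1740): 2nd cousin 1x removed
print(cousin(4, 6))   # Moses (1765) and Mary Russell (1786): 3rd cousin 2x removed
print(cousin(8, 8))   # Harold Pierpont and Sara Blackman: 7th cousin
```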


Oh the complicated web we weave!