After university I started out as a graduate trainee developer way back in ’88 at Royal Insurance (now RSA) near to home in Liverpool, UK.
The training programme was developed in conjunction with the British Computer Society (BCS) and took an average of 3.5 years to complete. Out of thousands of applicants, 25 were lucky enough to be chosen each year.
The technology stack was mostly IBM mainframe, JCL, IMS and Cobol. There were no PCs, no email, no internet and no mobile phones at the time. Just those clunky old green screen terminals and MVS.
After a year or two wrestling with Cobol, my boss called me over as it was time to change teams. This happened every 6 months as part of our training programme so we could gain experience across all areas of the IT department.
My new team had developed an analytic system that contained all of the policies and claims relating to Royal’s private car business. The system was called ‘Private Car Information System’ (PCIS).
PCIS was refreshed annually (I kid you not) from the policy and claims databases that were hosted on the IBM mainframes. From the business user’s perspective, interactive access to data up to a year old was still miles better than the fixed operational reports from the mainframe.
Several graduate trainees were drafted into the ‘Decision Support’ team to embark upon the development of the ‘Household Information System’ (HIS) which contained the household policy & claims data.
First things first, we needed some training. We were going to be using the new-fangled ‘Teradata’ system, apparently. Not only that, we were to build user interfaces using the Nomad 4GL and a natural language interface. All very cutting-edge stuff.
Our Teradata training took place on-prem in Liverpool. We covered topics such as data loading, creating tables and views (DDL) and running insert/select/update/delete queries (DML).
SQL was edited on the mainframe and mainly submitted to Teradata via BTEQ using JCL. Alternatively, queries could be entered in interactive mode by running ITEQ.
Although ITEQ avoided having to edit JCL, submit a batch job and wait forlornly as it climbed up the job queue, the downside was that ITEQ couldn’t run on a split screen. No flipping back and forth with F9. Multi-tasking firmly denied.
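For anyone who never saw one, a batch BTEQ script looked roughly like the sketch below. It would be wrapped in JCL and submitted as a job; the logon details and table names here are hypothetical, and the error-handling style is just one common pattern:

```sql
.LOGON tdp1/his_batch,password
.SET WIDTH 132

-- The actual query (hypothetical table name)
SELECT COUNT(*) FROM his.policy;

-- Fail the job step with a non-zero return code if the query errored
.IF ERRORCODE <> 0 THEN .QUIT 8

.LOGOFF
.QUIT
```

In interactive mode the same SQL could simply be typed at the ITEQ prompt, without the dot-command scaffolding or the wait in the job queue.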
As we were very early Teradata adopters, I wasn’t at all surprised that a chap from the US had come over to provide our training. He said he volunteered as he was due over on holiday anyway.
Our Teradata training course was excellent, but that’s hardly a surprise given that it was run by none other than Brian Marshall. That’s *the* Brian Marshall of Teradata hardback books fame.
For many a year Brian’s books were the only source of Teradata information available outside of the official training materials and technical manuals.
I still have my original Teradata ‘cheat sheets’ from the training course, plus one of Brian’s hardback books:
Teradata Reference Cards & Brian Marshall’s Teradata Performance Book.
The thing that first struck me about SQL was how so little code could produce such useful results. And given that we had a Teradata system with hardly any users, it was pretty quick too.
With a few well chosen BTEQ ‘dot’ commands it was also easy to produce basic formatted reports with things like page headings, column titles, page breaks, footers etc.
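As a flavour of what that looked like, here’s a sketch of a BTEQ report script. The table and title text are made up, but the dot commands (`.SET FORMAT`, `.SET HEADING`, `.SET FOOTING`) and the column `TITLE` phrase are the kind of thing we used:

```sql
-- Turn on report formatting and set page headings/footers
.SET FORMAT ON
.SET HEADING 'HIS Claims by Region'
.SET FOOTING 'Private & Confidential'

-- Column titles via the TITLE phrase; WITH...BY gives subtotal breaks
SELECT region        (TITLE 'Region')
     , claim_amount  (TITLE 'Claim Amount')
FROM his.claim
ORDER BY region
WITH SUM(claim_amount) BY region;
```

A handful of lines like that replaced what would have been a whole Cobol report program on the mainframe.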
All in all, this was a walk in the park compared to running Cobol on the mainframe against hierarchical IMS databases just to answer a simple question. No way was I ever going back to Cobol if I could help it.
Once the training was over we got to work on the new HIS system. Despite this being the late ’80s/early ’90s, we were already very much in the ‘ELT’ style ETL camp. Or, to be precise, ELTL:
- Extract from source
- Load to DW staging
- Transform via DW work tables
- Load to target DW schema
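In Teradata SQL terms, the tail end of that flow boiled down to insert/selects between schemas. A minimal sketch, with entirely hypothetical database and column names (the raw ‘L’ into staging was done by a bulk-load utility, not SQL):

```sql
/* T: clean and join staged policy & claims data into a work table */
INSERT INTO his_work.policy_claim
SELECT p.policy_no
     , p.renewal_date
     , c.claim_no
     , c.claim_amount
FROM his_stage.policy AS p
LEFT OUTER JOIN his_stage.claim AS c
  ON c.policy_no = p.policy_no
WHERE p.policy_no IS NOT NULL;

/* Final L: load the work table into the target schema users query */
INSERT INTO his.policy_claim
SELECT * FROM his_work.policy_claim;
```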
The ‘E’ part of the process was particularly painful. Most of the data was in 20 policy partitions and 10 claims partitions in an IMS database. IMS is hierarchical and designed for fast OLTP, not for being easily scanned with Cobol to look for deltas.
Even worse, there were two mainframes to provide failover. We had to unload data from both, so that’s 40 policy and 20 claims partitions.
Once the data landed in Teradata the ‘E’ and ‘L’ parts of the ‘ELTL’ process were complete. The next stage was to check, clean & join the policy & claims data together. This is the transform (‘T’) stage of the process and was carried out via BTEQ scripts.
Teradata SQL Abbreviations – SEL and DEL Because SELECT & DELETE Just Won’t Do
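Teradata let (and still lets) you abbreviate common keywords, which we used constantly in BTEQ scripts. A couple of illustrative one-liners against a hypothetical table:

```sql
SEL policy_no, claim_no FROM his.claim;                  -- SEL = SELECT
DEL FROM his_work.policy_claim;                          -- DEL = DELETE (all rows)
DEL FROM his_work.policy_claim WHERE claim_amount < 0;   -- or with a predicate
```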
The final stage was to load from work tables to the target tables – the second ‘L’ in ‘ELTL’. The target tables are the ones with names like ‘customer’, ‘policy’ and ‘claim’ that the business users would query.
The next couple of years were spent running the quarterly refresh of HIS and answering user questions. We couldn’t refresh HIS any more frequently: it took the best part of 3 months to fill in all the forms for ops, get the jobs scheduled and have them actually complete.
When on out-of-hours support I carried a pager and a schedule diagram that consisted of about 9 pages of A4 taped together and folded up.
Ah, pager duty! Many a time I had to leave the pub, go to a BT phone box, call the ops team, unfold the schedule diagram, figure out what the failure messages meant (they were conveyed verbally) and advise them what to do with the schedule.
Our stuff could only run out of hours as we were competing for the precious mainframe batch processing slots. We simply couldn’t ‘leave it until the morning’ or we’d miss a whole 24-hour processing window.
The time I spent supporting & developing the HIS ETL process was how I got my Teradata stripes. It put me in such a good place with regard to marketable skills that I set off for California in ‘92 after ‘The Royal’.
But that’s a whole other story…

