Leaving the Myth business to the ancient Greeks and Romans: Cobol is dead … seriously? Part III
By Dave Dischiave
In this third installment of our series, “Cobol is dead…seriously,” we will explore the Cobol language in greater detail, looking at how it was first developed and examining its design goals.
Cobol was developed in 1959 by a committee consisting of researchers from private industry, academia, and government, and was heavily influenced by the work of Commander Grace Murray Hopper, considered the mother of Cobol. In future blogs I will share more about Commander Hopper, including my encounters with her when I was a computer science undergraduate student. So more about her and me later; stay tuned.
Commander Grace Hopper, mother of Cobol.
Image Source: http://cecomhistorian.armylive.dodlive.mil/
The Cobol language development team had a number of design goals:

1) Vendor neutrality: no single hardware or software company should have control over the language's development, so governance of the language was entrusted to the American National Standards Institute (ANSI).

2) Readability: the language should be easy to write and easy to read, even by non-programmers, so the committee chose English words to represent Cobol commands.

3) Powerful input/output operations: the language should make it easy and efficient to process record-oriented data.

4) Portability: the language should run in any computing environment; that is, any computer vendor was allowed to provide a Cobol compiler. Or, as we say in the enterprise world, Cobol was one of the many programming languages that made up a vendor's language environment.
Cobol started its life as a procedural language but evolved into both an object-oriented and a procedural language. The language syntax was modeled after English words, which are referred to as “reserved” words. When Cobol instructions are assembled to perform a task, the resulting code is known as a “program.” Cobol programs are compiled so that the resulting executable code (called load modules in enterprise-class systems and binaries in smaller computer environments) runs native to the operating environment. In other words, the program executable runs directly under operating system control; no other operating system enablers (think run-time monitors here) are needed. This is especially important when performance is one of your application design goals. To summarize, Cobol is an English-like language designed for business applications; it is easy to learn, easy for a non-programmer to read, used for high-performance applications, and able to run in multiple vendors' computing environments. So if Cobol can do all of this, why are organizations abandoning it? Or are they, as Mitchell's article suggests? Does this sound vaguely similar to the Y2K frenzy of the late 1990s?
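To give a sense of that English-like readability, here is a minimal sketch of a Cobol program. The program name, data names, and values are hypothetical, chosen only to illustrate reserved words such as MULTIPLY, GIVING, and DISPLAY; even a non-programmer can follow what the procedure does.

```
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PAYROLL.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * Hypothetical data items: hours worked, an hourly rate,
      * and a field to hold the computed gross pay.
       01 HOURS-WORKED     PIC 9(3)    VALUE 40.
       01 HOURLY-RATE      PIC 9(3)V99 VALUE 25.50.
       01 GROSS-PAY        PIC 9(5)V99.
       PROCEDURE DIVISION.
      * The statement reads almost like an English sentence.
           MULTIPLY HOURS-WORKED BY HOURLY-RATE GIVING GROSS-PAY.
           DISPLAY "GROSS PAY IS " GROSS-PAY.
           STOP RUN.
```

Once compiled, a program like this becomes a load module (or binary) that runs directly under operating system control, as described above.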
Let’s explore Cobol a little deeper. Cobol requires a compiler to translate and prepare the code to run in a given computing environment. Computer and software vendors that produced Cobol compilers took the liberty of varying from the American National Standards Institute (ANSI) standards, adding non-standard instructions, words, phrases, or functionality to differentiate their flavor of Cobol from the others in the marketplace. These language modifications are referred to as “vendor-specific extensions.” Think of these extensions as the equivalent of adding slang words or colloquial expressions to English. So one vendor's variant (or dialect) of Cobol could be, and often was, different from another vendor's. While vendor extensions typically provided some coding efficiencies, the downside was that programs no longer had the portability the original designers of the language envisioned. For example, a program written for a Hewlett-Packard compiler wouldn’t run in an IBM environment without first modifying and then recompiling the source code.
In the next installment of our series, “Cobol is dead…seriously,” we will illustrate with code samples that maintainability is less about language code constructs and more a function of technique. We will illustrate our position with code examples so that you can draw your own conclusions. Stay tuned.
Tell us your thoughts in the comments section!
Dave Dischiave is a professor at Syracuse University’s School of Information Studies and the Director of the Global Enterprise Technologies Programs. His areas of interest are in large scale systems development and integration and the development and use of large data structures.