Given the intellectual rigors of their craft, programmers of the day were understandably disbelieving, even disdainful, of a programming language doing their jobs. They had to be conversant in the machine's tongue, in binary. For a flavor of the simplest numeric translation, 1 is 1 in binary, 2 is 10, and 3 is 11. Then 4 is 100, because it is 2 squared, requiring that a digit be added in the third column. The columns, moving to the left, are successive powers of 2, so 1000 is 8, which is 2 cubed (2 × 2 × 2). And 256 is 100000000, or 2 to the eighth power. The binary system of 1s and 0s is, at first, perplexing to humans, somehow unnatural. But, in part, that is because we are so accustomed to the number system based on 10, with the number columns being powers of 10. The base-10 system, called decimal, also feels comfortable because it corresponds to the natural human counting tool of our 10 fingers, our digits.
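The place-value arithmetic described above can be checked with a few lines of Python (a sketch; the interpreter's base-2 parsing does the same column-by-column sum a programmer of the era did by hand):

```python
# Each binary digit string below is interpreted in base 2 and printed
# alongside its decimal value, confirming the columns are powers of 2.
for bits in ["1", "10", "11", "100", "1000", "100000000"]:
    value = int(bits, 2)  # parse the digit string as a base-2 number
    print(f"{bits} in binary = {value} in decimal")
```

Running this prints the pairs from the text: 100 is 4, 1000 is 8, and 100000000 is 256.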
An early tool to simplify things for programmers staring at a blizzard of binary was the use of octal notation. Octal is a base-8 number system, which uses eight symbols (0, 1, 2, 3, 4, 5, 6, and 7), and its columns, moving left, are powers of eight. Octal was used because it was relatively easy for humans to read, certainly easier than binary, and yet could be easily translated into the binary format of the machine because, again, eight is a power of two. For early programmers, octal became second nature. "We used to joke that we did our checkbooks in octal," said Lois Haibt, a member of the FORTRAN team. There was even octal humor. Why can't programmers tell the difference between Christmas and New Year's Eve? Because 25 in decimal is 31 in octal. (That is, 31 in octal, or 3 × 8 plus 1, is 25 in decimal.) Since the 1960s, as computers and software became larger and more complicated, programmers have typically used a base-16 system, called hexadecimal, as a shorthand for binary when they really have to understand things at the machine level. In hexadecimal, the symbols used are 0 through 9 and A through F.
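Python's built-in base conversions make the octal joke, and the reason hexadecimal works as binary shorthand, easy to verify (a sketch; the `0o` and `0x` prefixes are Python's notation for octal and hex literals):

```python
# The joke: 31 read as octal is 3*8 + 1 = 25 in decimal.
print(int("31", 8))   # 25

# Octal and hex are compact views of the same bit pattern.
print(bin(0o31))      # 0b11001, the binary underneath octal 31
print(hex(255))       # 0xff: each hex digit packs exactly four bits
print(bin(0xFF))      # 0b11111111
```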
Assembler Programs

The next step in trying to make the programming process less arduous was the development of assembler programs. These allowed programmers to write instructions using mnemonic abbreviations, perhaps LD for load or MPY for multiply, followed by a number to designate a location in the computer's memory. A small assembler program then translated, or assembled, these symbolic programming instructions into binary so the machine could execute those instructions. The symbolic shorthand, the blend of abbreviations and numbers, was called an assembly language. Each different kind of computer had its own assembly language, as if each machine environment were a medieval fiefdom with its own dialect. Still, the assembly languages with their assembler programs were an essential step on the way toward higher-level languages like FORTRAN and its compiler.
The assembler was pioneered in England, where the first working stored-program computer went into operation, Cambridge University's EDSAC. The programming innovation in Cambridge was inspired by the same thinking that would motivate the FORTRAN team and generations of software developers afterwards. "The objective from the very early days was to make it easy to use for people without specialized training," recalled David Wheeler, who was a 21-year-old researcher when he joined the Cambridge group in the fall of 1948. Wheeler wrote the assembler program for the EDSAC, which he called Initial Orders, an artful and elegant 30 lines of instructions. The Initial Orders program would translate into binary the instructions written in a simple assembly language. A single line of instruction to tell the computer to add the number in memory location 123 into the accumulator would appear as:
A 123 F
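What an assembler does with such a line can be sketched in a few lines of Python. This is a toy illustration only, not the real EDSAC Initial Orders: the opcode values and the 16-bit word layout here are invented for the example.

```python
# Hypothetical opcode table; the bit patterns are illustrative, not EDSAC's.
OPCODES = {"A": 0b11100, "S": 0b01100}  # A = add, S = subtract (invented codes)

def assemble(line: str) -> str:
    """Translate one mnemonic instruction like 'A 123 F' into a binary word."""
    mnemonic, address, _length = line.split()
    # Pack a 5-bit opcode above an 11-bit address into one 16-bit word.
    word = (OPCODES[mnemonic] << 11) | int(address)
    return format(word, "016b")

print(assemble("A 123 F"))  # 1110000001111011
```

The essence is exactly the text's description: look up the mnemonic, convert the decimal address, and emit binary the machine can execute.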
The Cambridge group described their work in the first programming textbook, The Preparation of Programs for an Electronic Digital Computer, published in 1951. The authors, Maurice Wilkes, David Wheeler, and Stanley Gill, chose to have the book published first in the United States, where there was a larger computing community and their work might have the most impact. The book also described the use of subroutines, segments of programs that are frequently used, which can be kept in libraries and reused as needed in many software applications. The Cambridge group thus introduced the concept of reusable code, which remains today one of the principal tools for reducing software bugs and improving the productivity of programmers.
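The subroutine idea survives unchanged in every modern language. A minimal Python sketch of the principle: write a routine once, keep it in a library, and call it wherever it is needed rather than rewriting the logic each time.

```python
import math

def distance(x1, y1, x2, y2):
    """A reusable subroutine: callers need not re-derive the formula."""
    return math.hypot(x2 - x1, y2 - y1)

# Two different "applications" reuse the same tested routine.
print(distance(0, 0, 3, 4))  # 5.0
print(distance(1, 1, 4, 5))  # 5.0
```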