The evolution of virtualisation
POSTED BY APPANNA GANAPATHY
Although virtualisation often sounds like something out of a science fiction film, it has actually been around for a very long time. Decades, even.
The concept is generally believed to have originated in the late 1960s and early 1970s, when IBM began developing time-sharing solutions. Time-sharing refers to the shared use of computer resources among a large group of users, with the aim of increasing both the productivity of those users and the utilisation of the shared hardware. At the time, it was a major breakthrough – suddenly people could use computers without having to own one, and the cost of providing computing capability dropped as well.
Initially, each generation of IBM's systems differed vastly from the previous one, which made it difficult for customers to keep up with the changes and requirements of each new system. These computers could only do one thing at a time, which wasn't a huge issue, as most customers were in the scientific community and batch processing suited their needs.
IBM then began working on the S/360 mainframe system to meet the wide range of hardware requirements. Meant to replace most of their other systems, it was designed to maintain backwards compatibility. Initially it was designed as a single-user system to run batch jobs.
The focus changed in 1963, when the Massachusetts Institute of Technology (MIT) announced Project MAC – initially short for Mathematics and Computation but eventually renamed to Multiple Access Computer. The project was funded by a US$2 million grant from Darpa and was aimed at research into computers, especially the areas of operating systems, artificial intelligence and computational theory.
The project required computer hardware capable of serving multiple users at the same time – a timesharing computer. MIT sought proposals, but at that time IBM was unwilling to commit to a timesharing computer, as they felt the demand was not big enough, and MIT did not want to use a specially modified system. GE was instead chosen as the vendor for the project.
This was a wake-up call for IBM, and they began to take notice of the demand for such timesharing systems, especially from Bell Labs, whose needs closely mirrored MIT's. They designed the CP-40 mainframe, which was never sold commercially and was used only in laboratories.
The CP-40 evolved into the CP-67 system, which ran on the System/360-67, the first commercial mainframe to support virtualisation. The operating system was called CP/CMS. It consisted of two parts: CMS (Cambridge Monitor System, later renamed the Conversational Monitor System), a small single-user operating system designed to be interactive, and CP (control program), which ran on the mainframe and created the virtual machines. Each virtual machine in turn ran its own copy of CMS, with which the end user interacted.
This user interaction was a first. Previously, a user would feed a program into the computer, which would execute the task and then display the output on a screen or print it out. An interactive operating system meant that users could interact with programs while those programs were running.
CP/CMS was first released to the public in 1968, though the first stable release did not arrive until 1972. While the traditional approach to timesharing involved dividing up the memory and other system resources between users, the CP approach gave each user his own complete operating system, effectively giving him his own computer.
Operating the CP system was also a lot simpler than running a conventional multi-user operating system. Other advantages of the virtual machine approach included increased efficiency, as virtual machines shared the overall resources of the mainframe instead of having those resources statically split between users. It was more reliable, because a single user couldn't crash the whole system – only his own virtual machine. That separation also made it more secure, with each user running a completely separate operating system.
Similar reasons are driving virtualisation today: to make full use of server capacity and simplify data centre management.