Many people born in the 21st century assume that cloud technologies emerged within their own generation. In fact, the history of the "cloud" spans at least six decades, with periods of stagnation alternating with stages of rapid development. As with a number of other technological innovations, the groundwork was laid by the military. So under what circumstances did cloud technologies originate, and how did they develop? That is the subject of today's article.
From Military Innovation to Modern Technology: The Birth of Cloud Computing
Exactly sixty years ago, in 1963, the Advanced Research Projects Agency (ARPA, later renamed DARPA), acting, as is well known, in the interests of the US Department of Defense, awarded a two-million-dollar grant to the Massachusetts Institute of Technology for a very interesting development. It was named the Project on Mathematics and Computing, abbreviated as Project MAC, although it had, of course, nothing to do with "Macs" in the modern sense. The goal of the researchers, whose group was headed by the computer scientists Robert M. Fano and Fernando José Corbató, was to develop a fundamentally new time-sharing system that would give several remote users shared access to the resources of a single computer. The military's interest in such technologies is easy to understand: computers at the time were expensive and relatively scarce, while the need for computing was growing rapidly.
Project MAC built on the experimental Compatible Time-Sharing System (CTSS) created by Corbató a few years earlier, which allowed users at several terminals connected to a computer to work with the same program running on that machine. The CTSS code was reworked and improved, and within six months 200 users in ten different MIT laboratories could connect to a single computer and run programs on it centrally. This event can be considered the starting point in the history of cloud technologies, since the experiment implemented their main underlying principle: multi-user, on-demand access to shared computing resources.
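To make that principle concrete, here is a minimal, purely illustrative sketch in Python (a toy model, not actual CTSS code) of round-robin time-sharing: a single shared "processor" is multiplexed among several users' jobs in short fixed slices, so each user appears to have the machine on demand.

    from collections import deque

    # Toy model of time-sharing: one shared "processor" is handed to
    # each user's job in turn for a short, fixed time slice.
    def run_time_shared(jobs, slice_units=2):
        """jobs: dict mapping user name -> units of work remaining."""
        queue = deque(jobs.items())
        clock = 0
        while queue:
            user, remaining = queue.popleft()
            worked = min(slice_units, remaining)
            clock += worked
            print(f"t={clock:>3}: ran {worked} unit(s) for {user}")
            if remaining > worked:
                queue.append((user, remaining - worked))  # back of the queue

    run_time_shared({"alice": 5, "bob": 3, "carol": 4})

Because no job may hold the processor for more than one slice at a time, every user sees steady progress, which is precisely the illusion of a dedicated machine that CTSS pioneered.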
UNIX, Multics, and Plan 9: Crucial Milestones in Cloud Technology
By 1969, building on Project MAC, MIT, Bell Laboratories and General Electric had created a multi-user time-sharing operating system called Multics (Multiplexed Information and Computing Service), based on the principles laid down by the MIT scientists. Besides organizing access to applications, Multics provided file sharing and implemented some security functions along with protection of data from accidental damage. It was his experience with this system that led Ken Thompson, together with Dennis Ritchie, to create UNIX, the first edition of which was released in November 1971 and which for many years was the most popular multi-user operating system in the world.
One could say that with the advent of UNIX the history of cloud technologies entered a lull for a time, since this system fully met the requirements of both the numerous commercial enterprises and the educational institutions that used it in teaching. The lull lasted until Bell Laboratories itself decided to replace UNIX with a more modern OS focused primarily on the sharing of hardware and software resources.
This decision was another step toward modern cloud technologies. The new system, Plan 9, developed by a team of UNIX creators that included Ken Thompson and Rob Pike, made it possible to work fully with files, file systems and devices regardless of which networked computer they physically resided on. In effect, this OS turned an entire computer network into a single multi-user computing system with shared resources, accessed on demand and in accordance with user rights: in essence, a kind of modern "cloud".
Virtualization and the Evolution of Cloud Systems
The next important step in the evolution of cloud systems was the emergence of virtual machines. The idea of virtualization arose shortly after the advent of multi-user time-sharing systems. In the mid-1960s, experiments with technologies that can be classed as the first hypervisors were conducted at the IBM Thomas J. Watson Research Center in Yorktown Heights, where the experimental IBM M44/44X system was developed. On the basis of an IBM 7044 computer, the researchers created several independent emulated machines, each of which could run its own instances of programs in parallel in its own isolated environment.
Around the same time, other IBM departments were experimenting with the IBM CP-40 machine, a specially modified IBM System/360 Model 40. Isolated hardware and software containers were developed for this computer, inside which CMS (Cambridge Monitor System), a time-sharing operating system with virtual memory, could run. CMS was created by staff of the IBM Cambridge Scientific Center (CSC) in close collaboration with the Massachusetts Institute of Technology researchers working on Project MAC. In Cambridge such a virtual machine was called a "pseudo-computer", and the main goal of the project was to support simultaneous work by several users, each in their own virtual-memory environment.
In parallel, application virtual machines, or so-called managed runtime environments (MREs), were developing, providing a high-level abstraction for various programming languages. Their main task was to offer a platform-independent programming environment that abstracts away the underlying hardware and operating system and lets an application run the same way on a machine with any hardware configuration. Such a "virtual machine" serves only one process, runs as an ordinary application inside the host OS, and terminates when that process exits.
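As an illustration only (the instruction set below is invented and far simpler than O-code or JVM bytecode), a tiny stack-based application VM in Python shows the core idea: the same portable program runs unchanged wherever the interpreter itself runs.

    # A deliberately tiny stack-based "application virtual machine".
    # The opcodes are invented for illustration; real MREs such as the
    # JVM or Dis define much richer, standardized instruction sets.
    def execute(program):
        stack = []
        for op, *args in program:
            if op == "push":      # push a constant onto the stack
                stack.append(args[0])
            elif op == "add":     # pop two values, push their sum
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "mul":     # pop two values, push their product
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "print":   # output the top of the stack
                print(stack[-1])
            else:
                raise ValueError(f"unknown opcode: {op}")

    # The same portable program runs identically on any host with the VM:
    execute([("push", 2), ("push", 3), ("add",),
             ("push", 10), ("mul",), ("print",)])  # prints 50

Porting an application to new hardware then means porting only the interpreter, not every program written for it.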
The appearance of the first MRE dates back to 1966, when the technique was used by the BCPL compiler, which emitted an intermediate object code that could later be translated into executable code for various hardware architectures; the approach was subsequently adopted by compilers for other high-level languages. Managed runtime environments developed rapidly, and the apogee of their evolution is considered to be the appearance of the Java Virtual Machine. This, in turn, pushed Bell Labs to suspend development of Plan 9 and start work on the Inferno OS instead, whose core used the Dis register-based virtual machine.
The Inferno operating system could do everything Plan 9 could, but virtualization took the sharing of hardware and software resources to a fundamentally new level. The developers expected Inferno to run fully on a variety of devices and hardware platforms without additional adaptation, launching the same applications and giving users the same functionality. In fact, this platform, created in 1996, became the first truly cloud-based operating system in history, at least in its architecture. Although it never found wide adoption, Inferno laid a solid foundation for the further development of distributed cloud technologies.
It is believed that the term "cloud computing" first appeared in that same year, 1996, in Compaq's internal documentation, where it referred to distributed data processing in local networks, although some researchers believe it had appeared much earlier in a number of academic papers. In particular, Computerworld magazine mentioned in one of its publications that the phrase "cloud computing" was used back in the 1960s by J. C. R. Licklider, the first director of ARPA's Information Processing Techniques Office.
A certain contribution to the development of cloud technologies was made by Apple, which launched the Paradigm project in 1989. The project aimed to develop a fundamentally new operating system capable of distributing a typical computing workload across multiple devices operating on the network simultaneously. The main platform for such an OS was to be pocket personal computers, which at the time had little computing power but were considered the most promising direction for the evolution of computer technology.
Inside Apple the project did not receive proper support from CEO John Sculley, so a separate company, General Magic, was soon created to continue it, and the operating system itself received the working name Magic Cap. The OS interface used the metaphor of a building: a calendar and an email client, for example, lived in an "office", while games and other entertainment were found in a "living room". User applications were developed in Magic Script, an object-oriented dialect of C created by General Magic specifically for the purpose. Unfortunately, the distributed Magic Cap operating system saw no further development, but a number of its ideas and concepts were used in the architecture of the Apple Newton PDA and, much later, were reflected in iOS.
Around the second half of the nineties another lull set in, interrupted by the era of cheap and ubiquitous Internet access, which in turn spawned a huge number of complex, multifunctional web applications requiring significant server capacity. By the first half of the 2000s there was a clear trend of local software installed directly on the device gradually turning into "cloud" programs operating on the SaaS (Software as a Service) model. All of this together breathed new life into the idea of cloud computing.
The Emergence of Public Cloud Services: Amazon, Google, and Microsoft
The first provider to offer users a public cloud service was Amazon, which launched the Simple Storage Service, or Amazon S3, on March 14, 2006. Having built powerful data centers for its own needs, the company decided it was well positioned to lease spare computing capacity to its customers. Initially Amazon opened access to object cloud storage with a web interface, whose capabilities and functionality steadily grew. By October 2007 more than 10 billion objects were stored in the Amazon cloud; in January of the following year their number exceeded 14 billion and then began to grow exponentially: by the end of 2008 the storage already held 29 billion objects, a year later 64 billion, in March 2010 the number reached 102 billion, and by March 2021 it exceeded 100 trillion objects.
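The object-storage model that S3 introduced in 2006 is still how the service is used today. Below is a minimal sketch with the boto3 library, assuming AWS credentials are already configured; the bucket name is a hypothetical placeholder.

    import boto3

    # Objects in S3 are key-value pairs stored in named buckets and
    # accessed over HTTP. The bucket name below is a placeholder.
    s3 = boto3.client("s3")

    # Store an object under a key.
    s3.put_object(
        Bucket="example-history-bucket",
        Key="articles/cloud-history.txt",
        Body=b"Project MAC, 1963: time-sharing as a service.",
    )

    # Retrieve it back from anywhere in the world.
    obj = s3.get_object(Bucket="example-history-bucket",
                        Key="articles/cloud-history.txt")
    print(obj["Body"].read().decode())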
On April 7, 2008, another giant of the IT industry joined in when Google launched App Engine, the foundation of what later became Google Cloud Platform, and a little later, in October 2008, Microsoft announced its Azure project.
Today it is virtually impossible to imagine the global IT landscape without "clouds": cloud services let customers not only manage infrastructure flexibly and save on equipment, backups and technical support, but also reach a huge number of services, applications and data from anywhere in the world. All modern capabilities of cloud technologies rest on three fundamental pillars: the partitioning and sharing of resources, virtualization, and remote access to those resources via the Internet. Notably, all three components originated in the early sixties of the last century in the research laboratories of MIT, IBM, Bell Labs and other technology organizations, many of them funded by ARPA. The current technological level, which in 1963 would have seemed pure fantasy to scientists, is, by and large, just the logical development of the scientific and technical groundwork laid 60 years ago.