
Supercomputer Evolution: Over 60 years Of History

May 29, 2014

 

 

The U.S. Army’s ENIAC Supercomputer circa 1945 (computerhistory.org)

The building of supercomputers began in earnest at the end of WWII. Starting with the United States Army’s huge 1945 machine called ENIAC (Electronic Numerical Integrator and Computer), information was collected and processed like never before. The sheer size of ENIAC was impressive as well: it weighed 30 tons, took up 1,800 square feet of floor space, required six full-time technicians to keep it fully operational, and completed about 5,000 operations per second.

Flash forward a few years. Non-military, non-government supercomputers were introduced in the 1960s by pioneer Seymour Cray at the Control Data Corporation (CDC) and slowly began to catch on with a handful of large corporations. While the supercomputers of the 1970s used only a few processors, machines with thousands of processors began to appear in the 1990s and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of “off-the-shelf” processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some of them graphics processing units) connected by ultra-fast interconnects. (Hoffman, 1989; Hill, Jouppi, Sohi, 2000).

Supercomputers are rapidly evolving, and although more advances have been made since the start of the new millennium than at any other time in history, we thought it would be interesting to take a look at the history of supercomputers beginning in the 1960s, when computer use became more prevalent. Below is a decade-by-decade breakdown of the major milestones since then.

1960s:

Computers were primarily used by government agencies up until the early 1960s. They were large “mainframes” stored in separate rooms (today, we call them “data centers”). Most of these computers cost about $5 million each and could be rented for approximately $17,000 per month.
Later in the ’60s, as computer use developed commercially and systems came to be shared by multiple parties, American Airlines and IBM teamed up to develop a reservation program named the Sabre system. It was installed on two IBM 7090 computers located in New York and processed 84,000 telephone calls per day (rackspace.com).
Computer memory slowly began to move away from magnetic-core devices and into solid-state static and dynamic semiconductor memory. This greatly reduced the size, cost, and power consumption of computers.

1970s:

Intel released the world’s first commercial microprocessor, the 4004, in 1971.
About this same time, data centers in the U.S. began documenting formal disaster recovery plans for their computer-based business operations (in 1978, SunGard established the first commercial disaster recovery business, located in Philadelphia).
In 1973, the Xerox Alto became a landmark step in the development of personal computers because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software (rackspace.com).
The world’s first commercially available local area network, ARCnet, was put into service in 1977 at Chase Manhattan Bank in New York. At the time, it was the simplest and least expensive type of local area network, using a token-passing architecture while supporting data rates of 2.5 Mbps and connecting up to 255 computers.
Mainframes required special cooling, but during the late 1970s smaller, air-cooled computers moved into ordinary offices, essentially eliminating the need for dedicated, specially cooled computer rooms.

1980s:

The 1980s were highlighted by the boom of the microcomputer (server) era due to the birth of the IBM personal computer (PC).
Computers gained popularity with the public as well as the academic community, and beginning in 1985, IBM provided more than $30 million in products and support over the course of five years to a supercomputer facility established at Cornell University in Ithaca, New York.
In 1988, IBM introduced the Application System/400 (AS/400), and it quickly became one of the world’s most popular computing systems, especially in the business world. As the decade came to a close and information technology operations started to grow in complexity, companies became aware of the need to control their IT resources.

1990s:

Microcomputers, now called “servers,” started to fill the old computer rooms, which came to be known as “data centers.” Companies began assembling server rooms within their own walls, made practical by inexpensive networking equipment.
The boom of data centers came during the dotcom bubble. “Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet.” (rackspace.com).
Many companies started building very large facilities to provide businesses with a wide range of solutions, along with systems for deployment and operation. It became a growing (and important) trend, and these data centers eventually became crucial to businesses large and small, serving a variety of needs including big data storage, security, and much more.

2000s:

As of 2007, the average data center consumed as much energy as 25,000 homes. Throughout the 2000s, that figure continued to grow considerably as data centers rapidly increased in size. In 2013, it was estimated that a single data center could use roughly enough electricity to power 177,000 homes (science.time.com). During this time, data centers accounted for approximately 1.5% of all U.S. energy consumption, and demand was growing at a rate of roughly 10% per year.

Online data continued to grow exponentially, with roughly 6 million new servers deployed each year, a far cry from earlier decades. Microsoft alone had over 1 million servers and Google had approximately 900,000 (extremetech.com).

Many data centers began stepping up their energy-efficiency efforts and started to “go green.” For example, in 2011, Facebook launched the Open Compute Project, publishing the specifications of its Oregon data center, which uses 38% less energy to do the same amount of work as the company’s other facilities. This also saves money, costing Facebook 24% less to run.

As online data grew exponentially, there was both an opportunity and a need to run more efficient data centers.

2020 & Beyond:

Calculation speed is the name of the game in today’s generation of supercomputers, known as “exascale.” Fast is really an understatement. To put things into perspective, one second of an exascale computer’s work would take every person on earth, calculating around the clock, about four years to match (Bernard Marr, 2020). What this means is that the capabilities extend far beyond tech and business. Computing power of this magnitude can help us more accurately predict climate patterns and make huge strides in AI and even manufacturing.
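To see where that four-year figure comes from, here is a minimal back-of-the-envelope sketch in Python. The world population and the one-calculation-per-second pace are illustrative assumptions on our part, not numbers taken from the sources above.

# Rough check of the claim that one second of exascale work would take
# everyone on Earth about four years to match. All figures are assumptions.
exascale_ops_per_second = 1e18      # "exascale" means on the order of 10^18 calculations per second
world_population = 8e9              # assume roughly 8 billion people
human_rate = 1.0                    # assume each person completes 1 calculation per second

# Seconds humanity would need to match one second of exascale output
seconds_needed = exascale_ops_per_second / (world_population * human_rate)

seconds_per_year = 365.25 * 24 * 3600
print(f"{seconds_needed / seconds_per_year:.1f} years")  # prints roughly 4.0 years

Under these assumptions the result lands at roughly four years, which is consistent with the figure quoted above.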

Expectations for these modern-day supercomputers also include energy efficiency and sustainability. Not only are we going bigger, faster, and stronger; we are adding “greener” to the mix as well.

No matter how you look at it, exascale computing is quickly changing the landscape across the board. Faster, more efficient computing power means the possibilities are truly endless, and what once seemed out of reach is very much attainable today.

—–
Sources:
rackspace.com
wikipedia.org
science.time.com
extremetech.com
computerhistory.org
linkedin.com