Scaling Computers and the Future of Computing Power
The computing world is continually changing, with new advancements in computers and related technology. One aspect of this evolution is the scaling of computers, which has changed significantly over the years. In this article, we'll dive into the future of computing power and explore recent computing achievements, including what they cost and how much compute they actually deliver.
Person of Compute
The term 'person of compute' is becoming more common in the modern computing world. For a formal definition, let us define one person of compute as 20 PFLOPS (64 A100s, or a single dense 42U A100 rack). Today, we are in the era of the one-rack person, which draws about 30 kW to deliver those 20 PFLOPS.
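As a sanity check, the definition above can be verified with trivial arithmetic (the ~312 TFLOPS per-A100 figure used here is the one cited later in this article):

```python
# Back-of-the-envelope check of the 'person of compute' unit defined above.
A100_TFLOPS = 312       # per-GPU throughput, figure from the text
GPUS_PER_RACK = 64      # a single dense 42U rack

person_pflops = A100_TFLOPS * GPUS_PER_RACK / 1000
print(person_pflops)    # 19.968, i.e. ~20 PFLOPS
```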
The LLaMA Project
The LLaMA project is one of the newest large-scale achievements in computing: a family of large language models trained by Meta. LLaMA was trained on a cluster of 2048 A100s, each delivering approximately 312 TFLOPS. A cluster of this size is currently about the largest group of A100s that can work together on a single model, due to the switch topology.
At 639 PFLOPS, this cluster represents 32 people of compute. Even more notably, the largest LLaMA model used ~1M GPU-hours to train, equivalent to running the whole cluster for about 500 hours, or roughly 3 weeks. The achievement amounts to 2 person-years of work, delivered by 32 people of compute in 3 weeks. As long as we remain within human-scale computing, such units make sense.
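The conversions in the last paragraph are easy to reproduce; here is a minimal sketch using only the figures stated above (2048 GPUs, 312 TFLOPS each, ~1M GPU-hours):

```python
# LLaMA cluster arithmetic, using figures from the text.
CLUSTER_GPUS = 2048
A100_TFLOPS = 312
PERSON_PFLOPS = 20

cluster_pflops = CLUSTER_GPUS * A100_TFLOPS / 1000   # ~639 PFLOPS
people = cluster_pflops / PERSON_PFLOPS              # ~32 people of compute

gpu_hours = 1_000_000
wall_hours = gpu_hours / CLUSTER_GPUS                # ~488 hours, about 3 weeks
person_years = people * wall_hours / (365 * 24)      # ~1.8, call it 2 person-years
```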
Scaling Computers Based on Moore’s Law
The adage that computing power doubles every two years is practically a truism in the modern world. When scaling computers, one useful way to think about a machine is how many Moore's laws it gets you over a desktop. Today, a desktop has approximately 50 TFLOPS and can be considered a mouse of compute: one 400th of a person, or about nine Moores short.
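The "nine Moores" figure falls out of a base-2 logarithm, since each Moore is one doubling:

```python
# How many doublings separate a desktop from a person of compute?
import math

DESKTOP_TFLOPS = 50
PERSON_TFLOPS = 20_000   # 20 PFLOPS, the unit defined earlier

moores = math.log2(PERSON_TFLOPS / DESKTOP_TFLOPS)   # ~8.6, call it 9 doublings
years_behind = 2 * moores                            # ~17 years at one Moore per 2 years
```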
The most recent NVIDIA numbers reflect impressive advancements:
– 1080 (2016) = 11.3 TFLOPS
– 2080 (2018) = 14.2 TFLOPS
– 3090 (2020) = 35.6 TFLOPS
– 4090 (2022) = 82.6 TFLOPS
The 4090 has cheated a bit by using tons of power, but overall we are on track for a doubling every two years. So, today, you can buy two years of progress by doubling your budget. Facebook's cluster is log2(2048) = 11 Moores, or ~22 years, ahead of a single GPU; measured against an 8-GPU box (you can afford one of those, right?), it is log2(256) = 8 Moores, or ~16 years, ahead.
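The "years ahead" arithmetic is the same log2 trick as before, applied to GPU counts instead of FLOPS:

```python
# How far ahead is a 2048-GPU cluster, in Moores and years?
import math

CLUSTER_GPUS = 2048
moores_vs_one_gpu = math.log2(CLUSTER_GPUS)        # 11 doublings -> ~22 years
moores_vs_8_gpu_box = math.log2(CLUSTER_GPUS / 8)  # 8 doublings  -> ~16 years
print(2 * moores_vs_one_gpu, 2 * moores_vs_8_gpu_box)
```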
Future Computing Power Summit
Google is claiming to have a 9 exaflop (450 person) computer. Over the LLaMA cluster, that is about four more Moores, or eight more years of advancement: computing power you would otherwise expect around the year 2046.
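Plugging Google's claimed number into the same units used throughout this article gives the 450-person and eight-year figures:

```python
# Google's claimed machine in persons of compute and Moores.
import math

GOOGLE_EFLOPS = 9
PERSON_PFLOPS = 20
LLAMA_CLUSTER_PFLOPS = 639

people = GOOGLE_EFLOPS * 1000 / PERSON_PFLOPS                     # 450 people
moores = math.log2(GOOGLE_EFLOPS * 1000 / LLAMA_CLUSTER_PFLOPS)   # ~3.8, call it 4
years_ahead = 2 * moores                                          # ~8 more years
```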
The Cost
On a tight budget, a person of compute costs about $250k today, which makes Google's 450-person machine roughly a $115M computer. The most expensive thing humanity has built is the ISS, at about $100B. A computer built at that budget would buy 400,000 people of compute. That equates to a Tampa of compute, which is presently about the largest machine humanity could build.
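The cost figures chain together directly from the $250k-per-person estimate:

```python
# Cost arithmetic: persons of compute per budget, using figures from the text.
PERSON_COST = 250_000            # dollars per person of compute
ISS_BUDGET = 100e9               # ~$100B, the most expensive thing ever built

google_machine_cost = 450 * PERSON_COST       # $112.5M -- the ~$115M figure
tampa_of_compute = ISS_BUDGET / PERSON_COST   # 400,000 people of compute
```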
A Glimpse into the Future of Computing Power
In 24 years, a Humanity of compute will cost as much as the ISS. In 44 years, a future Google will have a Humanity. In 54 years, normal-sized clusters will constitute a Humanity. Finally, in 66 years, you will have a Humanity under your desk.
Conclusion
The future of computing power is bright and exciting. Keep abreast of the latest advancements, and engage with conferences on computing power to learn about the newest technologies and possibilities in the world of computing. This way, you will be ready to handle future challenges and enhance your own work in computing.