
“We are the Accelerated Data Center”: GPUs, CPUs, Networking, and Systems

As anyone not comatose knows, today’s modern data center workloads — like AI, HPC, and machine learning — absolutely demand acceleration. And the appetite for acceleration seems insatiable, in part because CPU performance no longer scales with Moore’s Law, but more importantly because algorithmic advances in parallelism are opening our eyes to what is now possible but was inconceivable just a few years ago. This trend impacts chemistry, physics, atmospheric sciences, astronomy, medicine, transportation, communications, pharmaceuticals, photography, retail, finance, and entertainment. Have we left anything out? If so, please add it, because accelerated computing is becoming completely pervasive.
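The pattern behind all that parallelism is easy to sketch. SAXPY (y = a·x + y) is the textbook data-parallel kernel: every element can be computed independently of every other, which is exactly what lets a GPU spread the work across thousands of threads while a CPU core steps through it a few elements at a time. A minimal illustration in plain Python (the real thing would of course be a CUDA kernel or a library call):

```python
def saxpy(a, x, y):
    # Each output element depends only on its own inputs -- the loop is
    # "embarrassingly parallel", so an accelerator can run every
    # iteration simultaneously instead of sequentially.
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
# -> [12.0, 24.0, 36.0]
```

Nothing about the arithmetic changed between the serial and parallel formulations; what changed is that the independence of the iterations is made explicit, and that independence is what accelerated computing cashes in on.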

There’s another term for accelerated computing: “NVIDIA”. The company has been accelerating workloads and algorithms since it was founded in 1993, and in the data center for over 10 years. Of course, success breeds mimicry, and there are now over 100 companies developing hardware to compete with Jensen Huang’s juggernaut. And we hope some will be successful. Competition is good. But even so, the absolute scale of NVIDIA’s impact on the industry is unassailable, a fact other large-scale semiconductor companies don’t like to hear.

About four years ago, Mr. Huang audaciously declared that he intends to be in the optimized data center business. Not chips. Not even servers or switches. But the data center in its entirety. Since then he has delivered, building three of the world’s largest supercomputers from the ground up using NVIDIA GPUs and networking. And next year he will add NVIDIA’s own Grace Arm CPU to that story. That’s because accelerated computing requires data-center-scale innovation and an intense focus on co-designing software with the hardware.


So, it isn’t hard to imagine what the NVIDIA schedule at HotChips ‘22 will look like next week:

  • Grace – NVIDIA’s first data-center grade CPU
  • Hopper – The next generation GPU with a Transformer engine
  • NVLink – fast networking with some new tricks up its sleeve
  • Orin – the industry’s go-to SoC for the smart edge.

I’m especially keen to learn how NVIDIA’s new-found penchant for openness might apply to NVLink, which delivers twice the performance of any contender in chip-to-chip interconnect and could play a new role in the emerging chiplet revolution.

Conclusions

What sets NVIDIA apart is its ecosystem and its design philosophy, in which GPU, DPU, and CPU act as peer processors, orchestrated by software, to relieve platform bottlenecks and deliver optimal application performance. NVIDIA’s primary competitors, AMD and Intel, don’t even have a server business; they remain in the component business and will have to compete there, which is not easy, while NVIDIA touts system-level design and the benefits of end-to-end integration across an accelerated data center.

Until people realize that the game has changed, and it isn’t just about fast chips, they will be unable to challenge NVIDIA.
