In the wake of GM and Ford plants idling, Jensen Huang has said the automotive supply chain needs to be re-engineered.
The importance of semiconductors in society has reached such a point that supply chain constraints in the sector are having drastic impacts on other parts of society.
Last week, American auto giants General Motors and Ford said they would idle some of their factories due to a shortage of semiconductors, sending tens of thousands of workers onto approximately 75% pay, the Washington Post reported. Worldwide production shortfalls are expected to run into the millions of vehicles.
In the wake of such developments, Nvidia CEO Jensen Huang has said the automotive supply chain needs to be re-engineered.
“The automotive industry supply chain has to be reinvented — that’s very clear,” he told journalists this week after delivering the GTC 2021 keynote.
“What the industry experienced was unfortunate, and hopefully in the future, unnecessary.”
Huang is not without a horse in the race, with Nvidia announcing its Atlan automotive processor this week, which is due to literally hit the road in 2025.
“That has the ability to replace at least four of the major ECUs [electronic control units], the most complex ECUs in the car, and unify it in software into one programmable system — I think that that’s the right direction — to take the car industry from integration of a whole bunch of embedded controllers into a software-defined future where the computer inside is much more sophisticated and powerful.”
Even though major tech companies have been hit by the shortages, and Nvidia's latest GPUs are rarer than hen's teeth for consumers, Huang said other sectors were "largely unaffected" compared to automakers.
“All my colleagues in the automotive industry recognise the importance to re-engineer the supply chain,” he said.
“So that it’s much more direct to the source and reduce the number of layers and layers and layers and layers of responsibility, passing, that ultimately leads to the building of a car.”
On supply matters closer to Nvidia's GPU bread and butter, Huang said consumers were clamouring for products made on a "leading edge process", and semiconductor manufacturers were all feeling the pressure.
“TSMC and Samsung and Intel are feeling great demand and great pressure,” he said.
“I think that we just have to recognise that leading edge process cannot be a fraction of the overall capacity of the industry, it has to be a larger percentage of it, and I think these leading edge semiconductor companies are aware of that and they’re mindful of that.
"But it will take a couple of years before we get leading edge capacity to the level that is supportive of the global demand of digital technology."
The big announcement during the company's keynote on Monday was its Arm-based Grace CPU aimed at the AI and high-performance computing markets.
Grace systems will be able to train a one trillion parameter natural language processing model 10x faster than today's state-of-the-art Nvidia DGX-based systems, which use x86 CPUs, thanks to Nvidia's fourth-generation NVLink interconnect running at 900GB/s between Grace and the GPUs, which the company said gives 30x higher aggregate bandwidth compared to today's leading servers.
The first supercomputers from HPE using Grace are slated for 2023.
Given Grace's suitability for training large language models, Huang said he expected the major cloud providers to all be customers, because they have language models that must be kept up to date.
“Language is drifting very quickly and therefore the concept of model decay is a very significant thing,” he said.
“For example … if you asked about ‘pandemic’ two years ago, it would come up with very different results, and very different answers than today.
“You can’t afford to train your models, your language models, very infrequently, you need to make sure you train them very frequently.”
A further boon for Nvidia is that customer support spans every language, and each language demands its own model.
"They'll be used by insurance companies, they'll be used by financial companies, they'll be used by any company with a lot of customer service, and it will have to be replicated for every language, the language of every domain, whether it's financial services in English or financial services in Japanese — very different," Huang said.
“Healthcare in English, healthcare in Russian — very different — and so all of these different domains, every single combination.”
Grace is being manufactured at TSMC using a “very advanced process”.
Nvidia is not a cybersecurity company
Among the slew of announcements on Monday was the Morpheus framework, which is designed to allow real-time packet inspection over all traffic flowing in a data centre when combined with Nvidia's BlueField data processing units and an EGX analysis node.
“The applications are disaggregated meaning a single application doesn’t run on one computer, it runs on many computers. And the way they communicate is … unsecured,” Huang said.
"The combination between the fact that you're cloud native, you're hybrid cloud, and the fact that your data centre is disaggregated, exposed the inside of the data centre tremendously, and you have to assume that the intruder is already inside."
According to the Nvidia founder, inspecting every packet in a data centre would not be possible without the company’s hardware and AI chips, but that does not mean the company is getting into the cyber game itself.
“We create this end-to-end system, we create the platform, and then cybersecurity companies … they’re so excited about this because finally they have the system necessary to deploy their cybersecurity algorithms — and that’s what they do,” he said.
“We’ll create a platform, think of it as a computer system, and they provide the applications and services, and so we’re not a cybersecurity company, but we’re going to be a computing company that enables a computing platform that enables cybersecurity.”
Those working with Nvidia on Morpheus include Cloudflare, F5, Fortinet, Canonical, Red Hat, and VMware.