Nvidia GTC 2025: GM and Nvidia Go Deeper in AI, the Omniverse, and Manufacturing–Autonomous Tech Partnership
Tracking the intersection of high performance tech and the automotive industry at Nvidia’s annual tech conference.
MotorTrend is live and direct from Nvidia’s GPU Technology Conference (GTC) 2025. Why? Because we came last year and learned a bunch from Ford, Jaguar Land Rover, Lucid, Mercedes-Benz, Polestar, and Volvo. And, unless you've been living under a rock for the last two years, you should be aware that Nvidia is taking the world by storm with its super-high-performance graphics processing units (GPUs), which are powering the advanced driver-assistance and autonomous-driving systems and AI assistants coming to new cars and trucks around the world. The tagline for GTC 2025 is "What's Next in AI Starts Here," so let's see if that is indeed the case...
GTC 2025 Keynote with Nvidia Founder and CEO Jensen Huang
From Agentic to Physical AI, Nvidia + GM Tie Up, Blackwell GPU Successors and a Robot Named Blue.
A full ninety minutes before Nvidia founder and CEO Jensen Huang was set to deliver the keynote address at GTC 2025, the line to get into San Jose’s SAP Center was rumored to be over a mile long. When it was time to begin, Huang, clad in his trademark leather jacket, stepped out onto an elevated stage, in front of a massive screen spanning the width of a floor normally occupied by San Jose’s professional hockey and basketball teams, and repeatedly fired a T-shirt cannon into the audience of thousands of cheering fans.
This is the artificial intelligence (AI) universe we live in, where tokens are traded for answers, performance is measured in trillions of operations per second, gen AI is the past, agentic AI is the present, physical AI is the future, and all that is shiny and new is painted dayglo green. Huang is at the center of this universe, and has regularly co-opted intergalactic terminology for Nvidia products, from Omniverse and Cosmos to, of course, the original GeForce graphics chip.
We drank from the GTC firehose in 2024, so this year we were not surprised to see the charismatic and seemingly inexhaustible Huang go on a long and meandering backgrounder on Nvidia’s historical and philosophical approach to AI before dropping updates on the company’s latest hardware and software in rapid succession. Most of Huang’s two-and-a-half-hour keynote address flew over our heads and into the mind space frequented by software engineers, data scientists, and IT and networking professionals, but Huang did break one piece of auto industry news by announcing a deeper partnership between Nvidia and General Motors across AI, automotive manufacturing, and self-driving vehicles, which we will cover in depth shortly.
In addition to that news, Huang also announced:
- That the Blackwell GPU announced at GTC 2024 is in full production and represents a $1 trillion(!) computing inflection point.
- One of Huang’s biggest product announcements was the Blackwell NVL72, essentially a supercomputer comprising 36 Nvidia Grace CPUs and 72 Blackwell GPUs that is capable of handling large language models with trillions of parameters.
- Nvidia’s roadmap beyond Blackwell, which includes several faster and more powerful chips: Vera Rubin, named after the astronomer, coming in the second half of 2026, and Rubin Ultra, which will follow a year later. Huang also briefly mentioned the successor to Vera Rubin, coming in 2028 and called “Feynman,” presumably after the famed physicist Richard Feynman.
- A couple of consumer-grade products, including a laptop AI supercomputer called DGX Spark.
To close it all out, Huang chatted up a $500 billion opportunity in AI infrastructure for business and a $10 trillion AI opportunity in industrial robotics, before announcing a partnership between Nvidia, Disney Research, and Google DeepMind, and bringing on stage a super cute, Star Wars-styled droid named Blue, who bopped around the stage and mostly ignored Huang’s commands. In all, it was another commanding keynote by Huang, who dizzies and dazzles with his assured descriptions of our AI-infested future, and leaves one to wonder not if the world as he describes it will actually come to pass, but if there is any chance that it won't.
How AI is Fueling Innovation in Automotive Software and Manufacturing
GM SVP of Software and Services Engineering David Richardson chats with Nvidia VP of Automotive Enterprise Norm Marks
While Jensen Huang used his GTC 2025 keynote address to announce a deepening of the relationship between Nvidia and General Motors that should impact the design and manufacturing of future GM vehicles and the advanced driver assistance systems found inside, it was up to David Richardson, SVP of software and services engineering at GM, and Norm Marks, Nvidia's VP of automotive enterprise, to get into the details.
Per GM, Nvidia GPUs are already used for AI model training, and this capacity will be further leveraged via Nvidia's “digital twin” technology, called Omniverse. The concept of a digital twin is exactly as it sounds: a detailed digital representation of a physical object or process, usually in a detailed virtual environment that mimics the real world. It has been an emerging domain of software-defined vehicles and influences not just the design and production of cars and trucks, but increasingly the design and production of the automotive factories themselves.
GM’s announcement that it will be utilizing the Nvidia Omniverse platform means the automaker should be able to create exacting digital twins of factories and production lines and run simulations on vehicles being manufactured, to find and fix inefficiency and waste. Or to help in the pursuit of world-beating performance, as Richardson explained. "With the new Corvette, we did a bunch of work with the Nvidia toolset in the Omniverse to model out the performance of the engine, before we set the speed record of 233 mph in Germany. It was really important as we only had 3 hours on the track," said Richardson.
Inside the car, GM announced that it will be utilizing Nvidia Drive AGX hardware in “next-generation” vehicles, for advanced driver assistance systems (ADAS) and future autonomous driving systems. When asked which next-generation vehicles these may be, Richardson declined to answer. When asked whether the use of Drive AGX hardware was limited to only EVs, Richardson replied that the company was "exploring ICE options."
Why This Is a Big Deal
Near the end of the conversation between Richardson and Marks, the topic of GM's approach to autonomy came up, and Richardson explained how GM had recently made the decision to shut down its Cruise robotaxi startup and combine key staffers with the GM team responsible for Super Cruise (our 2025 Best Tech award winner). "We’ve pulled both teams in house, into a single autonomy organization," said Richardson. The goal? To deliver L3 autonomous driving capability. With the right people and Nvidia's hardware, GM's path to autonomy looks a lot clearer.
How Waymo is Advancing AI to Build the Most-Trusted Driver
Dragomir Anguelov and the Waymo Foundation Model
When Dragomir Anguelov talks, it's best to listen. Anguelov is the head of Waymo's newly formed AI foundations team, and an inaugural winner of MotorTrend's SDV Innovator Award, back in 2023. More recently, Waymo won our 2025 Best Tech award in the robotaxi category, which Anguelov validated during his GTC session with an update on the performance of the service.
Per Anguelov, Waymo's robotaxi service has 200,000 paying customers in four cities: San Francisco, Phoenix, Los Angeles, and most recently Austin, Texas. Service will continue to expand to Atlanta, Georgia, and the San Francisco Bay Area cities of Palo Alto, Los Altos, and Mountain View. Tweaks to the service include the ability to hail a Waymo robotaxi via Uber (only in Austin) and take a Waymo to the airport if you're in Phoenix. And, for the car nerds out there, the big news is that reinforcements are on the way to support the Jaguar I-Pace robotaxis, in the form of sixth-generation Waymo robotaxis made from the Zeekr RT and Hyundai Ioniq 5. Like the I-Pace, these vehicles will have a full complement of sensors, including 13 cameras, 6 radar, and 4 lidar arrays, capable of delivering "full surround visibility for up to 500 meters."
More impressive than the vehicles and hardware was Anguelov's readout of Waymo's safety performance. After 50 million miles driven, Waymo estimates that its robotaxis had 83 percent fewer crashes in which an airbag deployed, 81 percent fewer crashes that resulted in an injury, and 64 percent fewer crashes that resulted in a police report, when compared against humans driving the same distance in the cities in which Waymo operates.
But getting to this point in autonomous driving, and fulfilling Waymo's mission to be "the most trusted driver," is very difficult, says Anguelov. To do so, Waymo had to go beyond using AI tools in virtual or laboratory environments and build its own version of what is known as embodied AI: that is, AI that is fundamentally tied to physical presence and interaction. Waymo Driver, as this embodied AI is called, learns from the data collected by its camera, radar, and lidar sensors to inform its foundation model for perception and knowledge of the visual world. Combining this perception model with another AI foundation model trained on vast amounts of driver behavior and simulations creates the optimal autonomous driving system. But it is one that needs to be continuously validated, which requires tremendous computing power, huge data sets, and simulators that can scale up to include all manner of driving scenarios that could be encountered in the real world, from crazy weather to crazy drivers to simply crazy, unthinkable situations.
The New Role of AI in Electric Vehicles
Rivian founder and CEO RJ Scaringe joins Nvidia's Rishi Dhall to chat about how AI is changing automotive NOW.
Hot off winning two of MotorTrend's Best Tech awards, for its mobile app experience and Gear Guard security system, and sponsoring the SXSW convention in Austin, where it dropped an OTA update that upgrades its semi-autonomous driving system, Rivian was in town for GTC, with founder and CEO RJ Scaringe sitting down with Nvidia VP of automotive business Rishi Dhall. Here are some highlights from their fast-moving and free-flowing chat:
When questioned about the rise of China and the relatively low rate of EV adoption in the U.S., the consistently unflappable Scaringe stuck to his favored talking points (as he did on our InEVitable podcast last August), citing the lack of compelling and affordable EVs as the main reason for America's resistance. "We are going to need a lot more choice," said Scaringe, before talking up Rivian's forthcoming R2 SUV, which should hit the market in that $45,000 sweet spot, just below the average new car transaction price in America. While acknowledging China has several key advantages in auto manufacturing, including low labor costs and government support, Scaringe thought these cost advantages would fade over time, primarily due to factory automation.
When asked to reflect on the challenges of starting a car company from scratch, Scaringe replied: "You need to be intellectually honest with how hard it is to build a car company. Today, you can't do it for a small amount of money." Scaringe went on to say that exacerbating this challenge are the very large supply base and very messy approach to software that have evolved over decades of established manufacturing processes. "It's very hard to break the system," said Scaringe.
One particularly insightful story Scaringe relayed concerned the supply chain. Each vehicle Rivian makes (the R1T truck, R1S SUV, and commercial van) has approximately 30,000 parts. During the 2021 launch, Scaringe said Rivian faced shortages on 2 percent of those components. Of course, missing any component impacts a car company's ability to produce fully finished vehicles, but it wasn't just the delays to completed Rivian trucks and SUVs that were the problem; it was the warehousing crisis that emerged for the 98 percent of parts that had shown up on time. "Our supply chain ramp team was forged in this crucible of pain," said Scaringe.
With robotics always a hot topic at GTC, Dhall asked Scaringe if a humanoid robot is on Rivian's roadmap, noting one of Jensen Huang's favorite sayings, that "the best people to build robots are the car people," because cars are just a different kind of robot, for the road. Scaringe gave a curious, cryptic answer, essentially proposing that while humanoid robots have advantages in the real world now, there may be other kinds of robots better suited to our needs in the future, but he gave no solid indication that a Rivian robot (beyond a self-driving car) was on the way.
When asked about Rivian's approach to AI and Nvidia's technology in particular, Scaringe reiterated the key upgrades that came in the second-generation R1T and R1S vehicles—the reduction of the number of ECUs from 17 to 7, removal of over a mile and a half of copper wiring in the harness, and addition of Nvidia Drive Orin computer processors (for coming upgrades to ADAS and semi-autonomous driving systems). He went on to outline how "The roadmap is: everything grows," which is a nod to how Rivian will need more and more processing power, in the form of GPUs, for its offline training models, especially as the capability improves.
To close out the session, Dhall went through a series of rapid-fire questions, in which Scaringe had only 5 seconds to answer each question. What feature do you most want for Rivian five years in the future, and tomorrow? Scaringe's answer was the same—self-driving technology. And regarding what feature he'd like to have in his cars that is currently unavailable, Scaringe requested "incredibly powerful inference platforms that can enable natural interactions." The example he gave was instead of asking an AI to find you a restaurant, and laboriously going through the results, he hopes for a future where you can simply tell the car, "I'm hungry," and it will do the rest, based on what's available and what it knows about you and your habits.
Unlocking the Power of AI Agents in Automotive
Agentic AI with Ford's Bryan Goodman
To kick off his session on the use of agentic AI, Ford’s executive director of AI, Bryan Goodman, quickly revisited the predictions he made for the future of AI at GTC 2024. Perhaps not surprisingly, given his position, he was pretty accurate, scoring four out of five:
- AI indeed progressed to more conversational interfaces for data and knowledge.
- There are many more AI chatbots; Goodman cited that Ford went from a few chatbots in 2024 to over 200 in use in 2025.
- AI has accelerated development, evaluation and testing.
- AI has become a valuable organizational tool.
The growth of more proactive AI assistants is the one prediction of his that has not come to pass, but it appears primed to explode in 2025, as AI development continues to accelerate at a breathtaking pace. As Jensen Huang noted in his keynote addresses in 2024 and 2025, we are through the era of generative AI; the present belongs to agentic AI, and the future is apparently all about physical AI (essentially robots with AI on board).
Before covering how Ford is building and using AI agents, Goodman set up the rules of engagement, by first defining the key terminology and Ford’s ethical principles and privacy policies. Per Goodman, Ford defines an agent in the context of AI as an application that can plan, reason, use tools and execute tasks for users.
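Goodman's definition of an agent (an application that can plan, reason, use tools, and execute tasks) can be sketched as a simple loop in code. This is a minimal, hypothetical illustration, not Ford's implementation: the tools and the keyword-based "planner" below are invented stand-ins, and a real agent would use an LLM to choose tools and arguments.

```python
# Minimal sketch of the plan/use-tools/execute loop behind an AI "agent."
# All tool names and data here are hypothetical placeholders.

def lookup_tire_pressure(model: str) -> str:
    # Hypothetical tool: query a vehicle-spec database.
    specs = {"F-150": "35 psi front / 38 psi rear"}
    return specs.get(model, "unknown")

def schedule_service(model: str) -> str:
    # Hypothetical tool: book a dealership appointment.
    return f"Service scheduled for {model}"

TOOLS = {"tire_pressure": lookup_tire_pressure, "service": schedule_service}

def run_agent(request: str, model: str) -> str:
    # "Plan": map the user's request to one or more tools.
    # (In a production agent, an LLM would do this reasoning step.)
    if "tire" in request.lower():
        plan = ["tire_pressure"]
    elif "service" in request.lower():
        plan = ["service"]
    else:
        return "Sorry, I can't help with that."
    # "Execute": run each planned tool and collect the results.
    results = [TOOLS[step](model) for step in plan]
    return "; ".join(results)

print(run_agent("What's the tire pressure?", "F-150"))
# prints: 35 psi front / 38 psi rear
```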
When it comes to the ethics of using AI, Ford’s “northstar” is tied directly to the company’s mission statement: "Our AI use is aligned with Ford’s purpose, to help build a better world, where every person is free to move and pursue their dreams." While a bit hokey sounding (like most mission statements), creating and saying these words often and out loud should be encouraged of all companies, large and small, especially as they explore the worlds of AI.
In terms of where Ford uses AI, Goodman cited six areas: design, engineering and testing, supply chain, manufacturing, customer support, and back office, and covered the challenges and opportunities of three of them.
With vehicle design, the problem is essentially the relatively slow pace of moving from design sketches to 3D models. Vehicle design has traditionally followed the time-honored progression from hand sketches through clay modeling, but Ford has found ways to speed up some of the newer processes, using AI agents that can take simple sketches and render realistic two- and three-dimensional imagery. And these agents can accelerate every aspect of vehicle design, from the exterior of the car to the interior to accessories like wheel designs.
Agentic AI supports engineering and testing at Ford by improving the speed and accuracy of engineering tests through scale. Goodman used an example of a physics-based aerodynamics simulation that would take over 15 hours in the real world, but could be completed in under 10 seconds via a neural network, while delivering an error margin of only 2.3 percent. Goodman also discussed how AI agents assist in the creation of accurate engineering models and critical documentation, by proofreading, fact checking, and even writing engineering specifications.
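The aero example above is an instance of a simulation "surrogate": fit a cheap model to a handful of expensive physics-simulation results, then answer new queries near-instantly and check the error against ground truth. The toy sketch below uses an invented drag formula and a polynomial fit standing in for Ford's neural network; none of the numbers correspond to Ford's actual workflow.

```python
# Toy surrogate-model illustration: the "expensive" simulation and all
# constants here are hypothetical stand-ins for a real CFD run.
import numpy as np

def expensive_drag_sim(speed_mps: float) -> float:
    # Stand-in for a slow CFD run: drag force = 0.5 * rho * Cd * A * v^2.
    rho, cd, area = 1.225, 0.30, 2.2
    return 0.5 * rho * cd * area * speed_mps**2

# "Training data": a few expensive simulation runs.
speeds = np.linspace(10, 60, 8)
forces = np.array([expensive_drag_sim(v) for v in speeds])

# Cheap surrogate: a quadratic least-squares fit (a neural net in practice).
surrogate = np.poly1d(np.polyfit(speeds, forces, deg=2))

# The surrogate answers new queries instantly; measure its relative error.
v = 45.0
truth, approx = expensive_drag_sim(v), surrogate(v)
rel_err = abs(approx - truth) / truth
print(f"truth={truth:.1f} N, surrogate={approx:.1f} N, error={rel_err:.2%}")
```

The design point mirrors Goodman's numbers: the upfront cost of generating training data is paid once, after which each prediction is effectively free, with a small, measurable error margin.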
Goodman's final example of agentic AI was in the realm of customer support, and a subject near and dear to our hearts at MotorTrend: how to navigate the immense amount of automotive information available across OEM sites, from corporate homepages to dealership sales sites. We've all gone searching for specific bits of information about a car on the web. Maybe it was for the pressure of the spare tire, or the specific weight or brand of oil for an oil change. But finding and retrieving the right answer can be maddening. Agentic AI chatbots that can comb through many different sources of information and evaluate the results before delivering an answer can do so more simply and with greater accuracy. As a side benefit, these AI agents can learn about the customer, from what issues they may be having to the knowledge that they seek.