At GTC 2025, Nvidia unveiled a new range of “AI personal supercomputers” built on the Grace Blackwell chip platform, as chief executive officer Jensen Huang used his yearly developer conference to address worries that the cost of AI computing is spiralling out of control.
In his keynote address on Tuesday, Huang, the semiconductor company’s founder and CEO, introduced two new devices, DGX Spark (formerly known as Project Digits) and DGX Station. The computers will let users develop, test, and run AI models of various sizes at the edge.
During the presentation, Huang declared, “This is the computer of the age of AI. This is the ideal appearance and functionality of computers in the future. Additionally, we now offer a whole array of corporate devices, ranging from workstations to teeny, tiny ones.” Huang went on to present more potent processors and associated technologies during the event, claiming they will give customers a clearer return on investment. The lineup consists of Blackwell Ultra, the successor to Nvidia’s flagship AI processor, as well as further generations through 2027. In addition, Huang introduced Dynamo-branded software designed to make current and upcoming hardware more efficient and profitable.
During a roughly two-hour presentation at the company’s annual GTC event in San Jose, California, Huang stated, “It is essentially the operating system of an AI factory.” The talk covered a wide range of topics, from personal supercomputers to robot technologies.
The GB10 Grace Blackwell Superchip in DGX Spark allows for up to 1,000 trillion AI computations per second, according to Nvidia. The DGX Station is equipped with 784GB of RAM and Nvidia’s GB300 Grace Blackwell Ultra Desktop Superchip.
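For scale, the quoted figure is one quadrillion operations per second. A minimal back-of-the-envelope conversion, assuming the usual short-scale reading of “trillion” as 10^12, is sketched below:

```python
# Illustrative unit check on the quoted DGX Spark figure (assumes short-scale "trillion" = 10**12).
ops_per_second = 1_000 * 10**12    # 1,000 trillion AI operations per second, as quoted
petaops = ops_per_second / 10**15  # 1 petaOP/s = 10**15 operations per second
print(f"{petaops:.0f} petaOPS")    # -> 1, i.e. one quadrillion operations per second
```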
DGX Spark is available now, while DGX Station is expected to ship later this year through manufacturing partners including Asus, Boxx, Dell, HP, and Lenovo.
“There will be AI agents everywhere,” Huang added. “There will be a fundamental difference in how they run, what businesses run, and how we manage it. Therefore, a new line of computers is required. And this is it.”
Once a little-known developer meeting, the conference has attracted broad attention since Nvidia took the lead in AI, with Wall Street watching the keynote as closely as the tech industry. Huang presented a range of hardware, software, and services during his address, but offered investors no major surprises, and the stock ended Tuesday’s trading session down almost three percent.
Originally focused on computer gaming processors, Nvidia has grown into a tech giant with a wide range of activities. Huang said at the event that the new Blackwell Ultra chip lineup will be released in the second half of 2025, with a more significant update, known as “Vera Rubin,” taking its place in the second half of 2026.
Among the announcements were:
- A collaboration with General Motors Co. that will integrate AI into robotics, factories, and next-generation automobiles.
- New Nvidia-based personal supercomputer systems from Dell Technologies Inc., HP Inc., and other manufacturers. The devices will allow scientists and developers to work on AI models at their desks.
- The Isaac GR00T N1 platform, which is expected to “supercharge humanoid robot development.” Nvidia is working on the project with Walt Disney Co. and Google’s DeepMind, and it will be accessible to other developers.
- A wireless collaboration with companies including T-Mobile US Inc. and Cisco Systems Inc., in which Nvidia will help produce “AI-native” network gear for upcoming 6G networks, the successor to today’s 5G.
This is a crucial time for Nvidia. After two years of stratospheric growth in both revenue and market value, investors in 2025 are starting to wonder whether the craze is sustainable. Those worries were highlighted earlier this year when the Chinese firm DeepSeek announced it had created a competitive AI model with a fraction of the resources.
DeepSeek’s assertion raised questions about whether the pace of investment in AI computing infrastructure was justified. However, it was followed by pledges from Nvidia’s largest clients, including Microsoft and Amazon.com’s AWS, to continue investing this year.
According to a Bloomberg Intelligence report released Monday, the largest data centre operators, referred to as hyperscalers, are expected to spend $371 billion on AI infrastructure and computing resources in 2025, a 44 percent increase from the previous year. By 2032, that sum is predicted to reach $525 billion, a faster pace than analysts had anticipated prior to DeepSeek’s viral popularity.
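Taken at face value, those projections imply prior-year spending of roughly $258 billion and mid-single-digit annual growth from 2025 to 2032. A minimal sketch of that arithmetic, using only the figures quoted above, follows:

```python
# Illustrative arithmetic on the Bloomberg Intelligence figures quoted above.
spend_2025 = 371e9   # projected 2025 hyperscaler AI spend, USD
growth_2025 = 0.44   # reported 44% year-over-year increase
spend_2032 = 525e9   # projected 2032 spend, USD

implied_2024 = spend_2025 / (1 + growth_2025)             # ~258 billion USD
implied_cagr = (spend_2032 / spend_2025) ** (1 / 7) - 1   # ~5% per year over 2025-2032
print(f"Implied 2024 spend: ${implied_2024 / 1e9:.0f}B")
print(f"Implied 2025-2032 annual growth: {implied_cagr:.1%}")
```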
However, Nvidia’s stock is down 14 percent this year amid broader worries about trade disputes and a potential recession. On Tuesday, the shares fell 3.3 percent to $115.53 at the close of New York trading.
The GM announcement weighed on shares of Mobileye Global Inc., a developer of self-driving technology, which dropped 3.5 percent to $14.44. The company, which is majority-owned by Intel Corp., went public in 2022.
The week-long GTC event is an opportunity to persuade the tech sector that Nvidia’s processors remain essential for artificial intelligence (AI), a field Huang expects to spread across a far larger share of the economy in what he has called a new industrial revolution. Huang has dubbed the occasion the “Super Bowl of AI.”
In a note ahead of the event, Chris Caso, an analyst at Wolfe Research, said the primary concern for Nvidia is whether AI capital expenditures will keep increasing into 2026. “We believe that cloud customers would prefer not to reduce their budgets for AI spending, but if the areas that fund those budgets suffer, that could put some pressure on capex,” he wrote, noting that “AI stocks have been down sharply on recession fears.”
Huang didn’t appear to allay investors’ worries on that issue. However, he presented a roadmap for future chips and introduced a new technology based on silicon photonics, a blend of silicon chips and optics that uses light waves to move data.
In an effort to take advantage of yet another cutting-edge technology, Nvidia recently revealed plans for a quantum computing research hub in Boston.
As it strives to quickly update its processors, Santa Clara, California-based Nvidia has encountered some production issues. The release of Blackwell was delayed because several early versions needed design fixes. Nvidia says it has overcome those obstacles, but supply still falls short of demand, and the company has raised spending to push more chips out the door, which will weigh on margins this year.
Huang noted that last year, 1.3 million of Nvidia’s older-generation Hopper AI processors were purchased by the top four public cloud providers: Amazon, Microsoft, Alphabet Inc.’s Google, and Oracle Corp. According to him, the same group has purchased 3.6 million Blackwell AI chips so far in 2025.
Nvidia intends to offer Rubin Ultra a year later, after Vera Rubin’s release in the second half of next year. The chip line is named after Vera Rubin, the American astronomer recognized for helping to discover evidence of dark matter.
According to Huang, the generation of processors after that will be called Feynman, an apparent nod to Richard Feynman, the American theoretical physicist known for his contributions to quantum mechanics.