Semiconductor company Nvidia and Elon Musk’s xAI have joined an artificial intelligence (AI) infrastructure project worth $30 billion. Both companies said on Wednesday that they had partnered with Microsoft, investment company MGX, and BlackRock to develop AI infrastructure in the United States as the global contest for control of the emerging technology intensifies.
The two high-profile sign-ups were announced Wednesday, March 19, by the AI Infrastructure Fund, which is backed by BlackRock, Microsoft, and Abu Dhabi AI investment organization MGX. The fund said its ultimate objective is to raise up to $100 billion for the development of AI.
Jensen Huang, the founder and CEO of Nvidia, stated in the announcement that “every company and nation that wants to achieve economic growth and unlock solutions to the world’s greatest challenges will benefit from the global buildout of AI infrastructure.”
“AI factories based on Nvidia full-stack AI infrastructure will transform data into intelligence that will boost all industries and enable society to make unthinkable strides,” he continued.
The group is one of the largest efforts to finance the data centres and energy infrastructure required to run AI applications such as ChatGPT. It was established last year with the aim of initially investing more than $30 billion in AI-related projects.
The announcements follow US President Donald Trump’s announcement two months ago of Stargate, a private sector AI infrastructure effort supported by Oracle, SoftBank Group, and OpenAI that aims to raise up to $500 billion.
Of Stargate’s target, investors have pledged $100 billion for immediate deployment, with the remaining funds anticipated over the following four years.
The group, which now includes BlackRock’s Global Infrastructure Partners, changed its name on Wednesday to the AI Infrastructure Partnership. Nvidia will remain in its role as technical adviser.
Large-scale data processing and AI model training demand enormous computing power, which drives up energy consumption. Demand for specialised data centres is rising as tech companies deploy thousands of processors in clusters to meet those needs.
The consortium has been seeking to raise up to $100 billion, including debt financing, from investors, asset owners, and enterprises to cover its computing and power requirements.
The partnership said that “AIP has attracted significant capital and partner interest since its inception in September,” but it did not disclose how much money has been raised so far.
According to the statement, GE Vernova and utility company NextEra Energy will also join the group to focus on high-efficiency energy solutions and supply-chain planning.
According to AIP, its investments will focus on the United States and its partners in the Organization for Economic Cooperation and Development.
The fund was created last year by Microsoft and BlackRock with the goal of raising money to build data centres and to secure power sources for those facilities.
Huang made the statement a day after telling attendees of Nvidia’s developer conference that the present direction of AI, toward agents and reasoning models, requires enormous computational capacity.
At what was dubbed “AI Woodstock,” the CEO declared that “AI is going through an inflection point,” adding that “the amount of computation necessary to train those models, and to inference those models, has grown tremendously” as a result of the move to agentic and reasoning models.
Traditional large language models respond almost instantly and require far less processing power than AI agents and reasoning models. Reasoning models work through intermediate reasoning steps before producing an answer, which takes longer and demands much more compute.
“We now have to compute ten times faster in order to keep the model responsive, so that we don’t lose our patience waiting for it to think,” Huang stated. “There is no doubt that the amount of computation we must perform is 100 times greater.”
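The scaling Huang describes can be illustrated with a toy calculation. The sketch below (assumed, simplified numbers; not based on any real model’s figures) treats per-query inference cost as roughly proportional to the number of tokens generated, so a reasoning model that emits a long hidden chain of thought before its final answer multiplies the cost accordingly:

```python
# Toy sketch with assumed numbers: inference compute is modelled as
# growing linearly with the number of output tokens, so generating a
# long chain of reasoning before the answer multiplies per-query cost.

def inference_cost(tokens_generated: int, cost_per_token: float = 1.0) -> float:
    """Relative compute cost, assuming cost scales linearly with output tokens."""
    return tokens_generated * cost_per_token

# A direct answer vs. the same answer preceded by hidden reasoning tokens.
direct = inference_cost(tokens_generated=50)            # answer only
reasoning = inference_cost(tokens_generated=50 + 4950)  # reasoning steps + answer

print(f"relative cost multiplier: {reasoning / direct:.0f}x")  # 100x with these assumed numbers
```

With these illustrative figures (50 tokens for a direct reply versus 5,000 including reasoning), per-query compute rises 100-fold, the same order of magnitude Huang cites.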
Huang has been arguing since a flurry of headlines in January about the Chinese AI firm DeepSeek that the AI industry will still need large numbers of Nvidia GPUs, as PYMNTS pointed out. DeepSeek claimed to have trained its high-performing foundation AI model using only 2,000 slower Nvidia H800 chips, rather than the tens of thousands of chips that companies such as OpenAI typically use.
The BlackRock/Microsoft fund is not the only multibillion-dollar effort to back AI infrastructure projects. SoftBank and OpenAI announced their “Stargate” initiative in January, which involves an initial $100 billion investment in AI infrastructure, with a stated goal of up to $500 billion.