At GTC 2025, Jensen Huang dropped a major piece of news: NVIDIA will invest $1 billion in Nokia. Yes, that Nokia, the company whose Symbian phones were wildly popular 20 years ago.
In his keynote, Jensen Huang said that telecom networks are undergoing a major transformation from traditional architectures to AI-native systems, and that NVIDIA's investment will accelerate the process. Through the investment, NVIDIA will partner with Nokia to build an AI platform for 6G networks and bring AI to traditional RAN networks.
Specifically, NVIDIA will subscribe to about 166 million new Nokia shares at $6.01 per share, giving it roughly a 2.9% stake in the company.
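The headline figures are easy to sanity-check. Using the roughly 166 million shares and $6.01 price reported above, the total comes out just under $1 billion:

```python
# Back-of-envelope check of the reported deal terms.
shares = 166_000_000   # ~166 million new Nokia shares (as reported)
price_usd = 6.01       # subscription price per share

total = shares * price_usd
print(f"Total investment: ${total / 1e9:.2f} billion")
```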
When the partnership was announced, Nokia's stock surged 21%, its largest jump since 2013.
01 What is AI-RAN?
RAN stands for Radio Access Network, and AI-RAN is a new network architecture that embeds AI computing power directly into wireless base stations. Traditional RAN systems are mainly responsible for transmitting data between base stations and mobile devices, while AI-RAN adds edge computing and intelligent processing capabilities on top of that.
This allows base stations to apply AI algorithms to optimize spectrum utilization and energy efficiency, improve overall network performance, and also use idle RAN assets to host edge AI services, creating new revenue streams for operators.
Operators can run AI applications directly at the base station site without having to send all data back to central data centers for processing, greatly reducing the network burden.
Jensen Huang gave an example: nearly 50% of ChatGPT users access it via mobile devices, and ChatGPT's mobile downloads exceed 40 million per month. In an era of explosive AI application growth, traditional RAN systems cannot cope with mobile networks dominated by generative AI and agents.
AI-RAN, by providing distributed AI inference capabilities at the edge, enables upcoming AI applications such as agents and chatbots to respond faster. At the same time, AI-RAN is also preparing for integrated sensing and communication applications in the 6G era.
Jensen Huang cited a forecast from analyst firm Omdia, which predicts that the RAN market will exceed $200 billion cumulatively by 2030, with AI-RAN becoming the fastest-growing segment.
Nokia President and CEO Pekka Lundmark said in a joint statement that this partnership will put AI data centers in everyone's pocket, enabling a fundamental redesign from 5G to 6G.
He specifically mentioned that Nokia is working with three different types of companies: NVIDIA, Dell, and T-Mobile. T-Mobile, as one of the first partners, will begin field testing of AI-RAN technology in 2026, focusing on verifying performance and efficiency improvements. Pekka said this testing will provide valuable data for 6G innovation and help operators build intelligent networks that meet AI demands.
Based on AI-RAN, NVIDIA released a new product called Aerial RAN Computer Pro (ARC-Pro), an accelerated computing platform prepared for 6G. Its core hardware configuration includes both NVIDIA's Grace CPU and Blackwell GPU.

This platform runs on NVIDIA CUDA, allowing RAN software to be directly embedded into the CUDA technology stack. Therefore, it can not only handle traditional radio access network functions but also run mainstream AI applications simultaneously. This is NVIDIA's core approach to realizing the "AI" in AI-RAN.
Thanks to CUDA's long-established ecosystem, the platform's biggest advantage is its programmability. Furthermore, Jensen Huang announced that the Aerial software framework will be open-sourced, expected to be released on GitHub under the Apache 2.0 license starting December 2025.
The main difference between ARC-Pro and its predecessor ARC lies in deployment location and application scenarios. The previous ARC was mainly used for centralized cloud RAN implementations, while ARC-Pro can be deployed directly at the base station site, enabling true edge computing capabilities.
NVIDIA's head of telecom business, Ronnie Vasishta, said that in the past, RAN and AI required two separate sets of hardware; ARC-Pro can instead allocate computing resources dynamically according to network demand, prioritizing radio access functions and running AI inference tasks during idle periods.
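The dynamic-allocation idea can be illustrated with a toy scheduler. This is purely a sketch of the concept; the function name, units, and thresholds are invented for illustration and are not part of any NVIDIA API:

```python
# Toy illustration of RAN-first resource sharing on a single accelerator.
# Names and numbers are hypothetical, not NVIDIA's actual scheduler.

def allocate(ran_load: float, total_units: int = 100) -> dict:
    """Give radio access priority; hand idle capacity to AI inference.

    ran_load is the fraction of capacity the radio workload needs right now.
    """
    ran_units = min(total_units, round(ran_load * total_units))
    return {"ran": ran_units, "ai_inference": total_units - ran_units}

print(allocate(0.9))   # busy hour: radio access takes most of the accelerator
print(allocate(0.2))   # off-peak: spare capacity runs AI inference
```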

ARC-Pro also integrates the NVIDIA AI Aerial platform, a complete software stack including CUDA-accelerated RAN software, Aerial Omniverse digital twin tools, and the new Aerial Framework. The Aerial Framework can convert Python code into high-performance CUDA code to run on the ARC-Pro platform. In addition, the platform supports AI-driven neural network models for advanced channel estimation.
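The channel-estimation point is easiest to appreciate against the classical baseline that neural models aim to beat. A minimal least-squares pilot-based estimator for a flat-fading channel (generic DSP textbook material, not Aerial code) can be sketched as:

```python
import numpy as np

# Classical least-squares (LS) channel estimation from known pilot symbols.
# Neural channel estimators try to beat this baseline at low SNR.
rng = np.random.default_rng(0)

n_pilots = 64
# QPSK pilot symbols with unit power
pilots = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_pilots) / np.sqrt(2)
# One complex flat-fading channel coefficient
h_true = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))

received = h_true * pilots + noise
# LS estimate: project received signal onto the pilots (vdot conjugates arg 1)
h_ls = np.vdot(pilots, received) / np.vdot(pilots, pilots)

print("estimation error:", abs(h_ls - h_true))
```

Averaging over 64 pilots suppresses the noise, so the estimate lands close to the true coefficient; learned estimators aim to do better with fewer pilots or harsher channels.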
Jensen Huang said that telecommunications is the digital nervous system of the economy and security. The collaboration with Nokia and the telecom ecosystem will ignite this revolution, helping operators build intelligent, adaptive networks and define the next generation of global connectivity.
02 Looking across 2025, NVIDIA has invested a lot of money.
On September 22, NVIDIA reached a partnership with OpenAI, planning to invest up to $100 billion in OpenAI in stages to accelerate its infrastructure buildout.
Jensen Huang said that OpenAI had actually sought NVIDIA's investment long ago, but NVIDIA had limited funds at the time. He joked that they were too poor back then and should have given OpenAI all their money.
Jensen Huang believes that AI inference growth is not 100 times or 1,000 times, but 1 billion times. Moreover, this partnership is not limited to hardware but also includes software optimization to ensure OpenAI can efficiently utilize NVIDIA's systems.
This may be because, after learning of OpenAI's deal with AMD, he worried that OpenAI might move away from CUDA. If the world's largest AI foundation-model maker stopped using CUDA, other large-model vendors could reasonably follow its lead.
On the BG2 podcast, Jensen Huang predicted that OpenAI is likely to become the next trillion-dollar company, with a growth rate that will set an industry record. He pushed back on the AI-bubble narrative, pointing out that global annual capital expenditure on AI infrastructure will reach $5 trillion.

Partly on the strength of this investment, OpenAI announced the completion of its corporate restructuring on October 29, splitting into two parts: a non-profit foundation and a for-profit company.
The non-profit foundation will legally control the for-profit part and must also consider the public interest. However, it can still freely raise funds or acquire companies. The foundation will own 26% of the for-profit company and hold a warrant. If the company continues to grow, the foundation can obtain additional shares.
In addition to OpenAI, NVIDIA also invested in Musk's xAI in 2025. That company's current financing round has grown to $20 billion: about $7.5 billion raised through equity, and up to $12.5 billion in debt financing through a special purpose vehicle (SPV).
The way this SPV operates is that it will use the funds raised to purchase NVIDIA's high-performance processors and then lease these processors to xAI for use.
These processors will be used for xAI's Colossus 2 project. The first-generation Colossus is xAI's supercomputing data center in Memphis, Tennessee. The initial Colossus project has already deployed 100,000 NVIDIA H100 GPUs, making it one of the largest AI training clusters in the world. Now, xAI is building Colossus 2, planning to expand the number of GPUs to several hundred thousand or more.
On September 18, NVIDIA also announced a $5 billion investment in Intel and established a deep strategic partnership. NVIDIA will subscribe to newly issued Intel common stock at $23.28 per share, with a total investment of $5 billion. After the transaction, NVIDIA will hold about 4% of Intel's shares, becoming an important strategic investor.
03 Of course, Jensen Huang said much more at this GTC.
For example, NVIDIA launched several open-source AI model families, including Nemotron for digital AI, Cosmos for physical AI, Isaac GR00T for robotics, and Clara for biomedical AI.
At the same time, Jensen Huang launched the DRIVE AGX Hyperion 10 autonomous driving development platform. This is a platform for Level 4 autonomous driving, integrating NVIDIA computing chips and a complete sensor suite, including LiDAR, cameras, and radar.
NVIDIA also launched the Halos certification program, the industry's first system for evaluating and certifying the safety of physical AI, specifically for autonomous vehicles and robotics technology.
The core of the Halos certification program is the Halos AI system, which operates the industry's first laboratory recognized by ANSI's accreditation body. ANSI is the American National Standards Institute, and its accreditation carries strong authority and credibility.
The task of this system is to use NVIDIA's physical AI to detect whether autonomous driving systems meet standards. Companies such as AUMOVIO, Bosch, Nuro, and Wayve are among the first members of the Halos AI system testing laboratories.
To promote Level 4 autonomous driving, NVIDIA released a multimodal autonomous driving dataset from 25 countries, containing 1,700 hours of camera, radar, and LiDAR data.
Jensen Huang said the value of this dataset lies in its diversity and scale, covering different road conditions, traffic rules, and driving cultures, providing a foundation for training more general-purpose autonomous driving systems.
But Jensen Huang's blueprint goes far beyond this.
At GTC, he announced a series of collaborations with U.S. government laboratories and leading enterprises, aiming to build America's AI infrastructure. Jensen Huang said we are at the dawn of the AI industrial revolution, which will define the future of every industry and country.
The highlight is the collaboration with the U.S. Department of Energy. NVIDIA is helping the department build two supercomputing centers, one at Argonne National Laboratory and the other at Los Alamos National Laboratory.
Argonne Lab will receive a supercomputer called Solstice, equipped with 100,000 NVIDIA Blackwell GPUs. What does 100,000 GPUs mean? This will be the largest AI supercomputer in the Department of Energy's history. There is also a system called Equinox, equipped with 10,000 Blackwell GPUs, expected to be operational in 2026. Together, these two systems will provide 2,200 exaflops of AI computing power.
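Assuming the 2,200-exaflops figure refers to combined low-precision AI compute across both systems, the announced numbers imply a plausible per-GPU figure, in line with Blackwell-class low-precision throughput:

```python
# Per-GPU throughput implied by the announced figures (low-precision AI FLOPS).
solstice_gpus = 100_000   # Solstice system
equinox_gpus = 10_000     # Equinox system
total_exaflops = 2_200    # combined AI compute, as announced

per_gpu_petaflops = total_exaflops * 1_000 / (solstice_gpus + equinox_gpus)
print(f"{per_gpu_petaflops:.0f} PFLOPS per GPU")
```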
Argonne Lab Director Paul Kearns said these systems will redefine performance, scalability, and scientific potential. What will they use this computing power for? From materials science to climate modeling, from quantum computing to nuclear weapons simulation, all require this level of computational power.
In addition to government laboratories, NVIDIA has also built an AI factory research center in Virginia. What makes this center special is that it is not just a data center, but an experimental site. NVIDIA will test something called Omniverse DSX here, which is a blueprint for building gigawatt-level AI factories.

An ordinary data center may only require tens of megawatts of power, while a gigawatt is equivalent to the output of a medium-sized nuclear power plant.
The core idea of the Omniverse DSX blueprint is to turn the AI factory into a self-learning system. AI agents will continuously monitor power, cooling, and workloads, automatically adjusting parameters to improve efficiency. For example, when the grid load is high, the system can automatically reduce power consumption or switch to battery storage power supply.
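That grid-responsive behavior can be sketched as a simple control rule. This is a toy illustration of the idea described above; the function, thresholds, and units are invented, not part of Omniverse DSX:

```python
# Toy sketch of a grid-aware power agent for an AI factory.
# All names, thresholds, and units are hypothetical illustrations.

def adjust_power(grid_load: float, draw_mw: float, battery_mwh: float):
    """Return (new facility draw in MW, action) for one control tick."""
    if grid_load > 0.9 and battery_mwh > 0:
        # Grid is critically stressed: ride on battery storage instead
        return draw_mw, "switch to battery storage"
    if grid_load > 0.8:
        # Grid is stressed and no battery margin: throttle workloads
        return draw_mw * 0.85, "throttle workloads 15%"
    return draw_mw, "normal operation"

print(adjust_power(0.95, 1000.0, 500.0))
print(adjust_power(0.85, 1000.0, 0.0))
```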
This kind of intelligent management is crucial for gigawatt-level facilities, as electricity and cooling costs can be astronomical.
The vision is grand, and Jensen Huang said it will take about three years to realize. AI-RAN field testing will not begin until 2026; autonomous vehicles based on DRIVE AGX Hyperion 10 will not hit the road until 2027; and the Department of Energy's supercomputers will come online in 2027.
NVIDIA holds CUDA as its trump card, controlling the de facto standard for AI computing. From training to inference, from data centers to edge devices, from autonomous driving to biomedicine, NVIDIA's GPUs are everywhere. The investments and partnerships announced at this GTC further cement this position.


