Nvidia’s GTC 2026 conference opened in San Jose with a keynote by CEO Jensen Huang that combined technology announcements with news for investors. The company now estimates it will reach one trillion dollars in cumulative revenue from AI chip sales within roughly two years, amid unprecedented demand for computing power.
The updated forecast reflects a significant acceleration compared to previous estimates and indicates the depth of the market emerging around AI infrastructure. According to the CEO, demand for chips continues to rise rapidly, supply is struggling to keep up, and prices remain high, a situation that strengthens Nvidia’s position as a critical infrastructure provider for the AI era.
At the heart of the announcements was the Vera Rubin platform, which has entered commercial production and which the company positions as the next generation of AI infrastructure. The platform comprises seven new chips designed to work together as a single system, supporting every stage of the AI model workflow, from training to the real-time operation of autonomous agents.
The system is built around the concept of "AI factories" – massive data centers in which hundreds of thousands of accelerators operate together. Alongside it, Nvidia presented new rack-scale server systems that integrate hundreds of processors and dedicated processing units, aimed at improving performance and reducing energy consumption.
One of the main innovations is the Groq 3 LPU chip, designed to accelerate inference, that is, the real-time operation of trained models. According to the company, integrating it into its systems yields throughput gains of tens of percent or more, improving the ability to run AI agents at scale.
There is also a source of Israeli pride. The platform’s core communication components, including NVLink 6 switches, ConnectX-9 cards, BlueField-4 processors, and advanced networking systems, were developed at the company’s R&D centers in Israel and form a critical layer connecting thousands of chips and operating them at "AI factory" scale. The new BlueField-4 STX storage architecture, designed to handle the data loads generated by AI agents, was also developed in Israel.
At the same time, Nvidia is expanding into the software layer. The company introduced NemoClaw, a system for building AI agents while preserving security and privacy, alongside a series of new open models. It also announced an international coalition for open-source model development, part of its effort to grow the ecosystem around its technologies.
The company also presented Dynamo 1.0, a system for managing large-scale inference workloads that is intended to serve as an infrastructure layer for AI data centers, much like an operating system that manages resources and optimizes performance. Across hardware and infrastructure there is a clear focus on integrating compute, networking, and storage: the new communication components and storage systems are designed to relieve bottlenecks that emerge as models grow in scale and generate massive volumes of data. A significant portion of these developments was carried out at Nvidia’s research centers in Israel.
One of the more striking announcements concerned expanding AI activity beyond Earth. Nvidia presented adaptations of its platforms for operation in space, including data centers in Earth orbit designed to process data directly where it is collected.
In addition, collaborations with telecommunications and automotive companies were announced, alongside expanded activity in robotics and physical AI. The company aims to embed AI capabilities not only in the cloud but also in cellular networks, vehicles, and autonomous systems.