While Nvidia’s graphics processing units dominate headlines, the company has quietly built a $31 billion networking division that now rivals traditional industry giants and forms the critical backbone of the AI revolution. According to the company’s most recent earnings report, this segment generated $11 billion in revenue in a single quarter ending in early 2026, representing a staggering 267% year-over-year increase. This explosive growth stems from CEO Jensen Huang’s strategic 2020 acquisition of Mellanox, a move that positioned Nvidia to control both the computing and networking layers of modern artificial intelligence infrastructure.
Nvidia’s Networking Business Transforms Data Center Economics
The networking division’s financial performance has surprised industry observers with its scale and velocity. In just a few years, networking has become Nvidia’s second-largest revenue driver behind its core compute business. To provide context, the $11 billion quarterly revenue figure surpasses the entire networking business of Cisco Systems, a company that has dominated the networking industry for decades. Kevin Cook, a senior equity strategist at Zacks Investment Research, noted this remarkable comparison during an interview in March 2026, stating that Nvidia’s networking business achieves in one quarter what Cisco’s equivalent segment typically generates in a full year.
This growth trajectory demonstrates how artificial intelligence has fundamentally altered data center architecture. Traditional data centers prioritized general-purpose computing with relatively simple networking requirements. Modern AI factories, however, require massive parallel processing across thousands of interconnected GPUs, creating unprecedented demands on networking infrastructure. Nvidia’s technology portfolio addresses this exact challenge through several key components:
- NVLink: Enables high-speed communication between GPUs within a single server rack
- InfiniBand Switches: Provide low-latency, high-bandwidth connections across data centers
- Spectrum-X Ethernet Platform: Optimizes Ethernet specifically for AI workloads
- Co-packaged Optics: Integrates optical components directly into switches for efficiency
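To see why interconnect bandwidth becomes the limiting factor for these components, consider a back-of-the-envelope sketch of gradient synchronization during distributed training. All model sizes and bandwidth figures below are illustrative assumptions chosen for round numbers, not Nvidia specifications:

```python
# Rough sketch: time to all-reduce model gradients across GPUs under
# different interconnect speeds. Illustrative assumptions throughout.

def allreduce_seconds(param_count, num_gpus, link_gbps, bytes_per_param=2):
    """Estimate one gradient sync using the standard ring all-reduce
    traffic model: each GPU sends/receives ~2*(N-1)/N of the gradient bytes."""
    grad_bytes = param_count * bytes_per_param          # fp16 gradients
    traffic = 2 * (num_gpus - 1) / num_gpus * grad_bytes
    return traffic / (link_gbps * 1e9 / 8)              # Gbit/s -> bytes/s

params = 70e9  # a hypothetical 70B-parameter model
for name, gbps in [("100 GbE link", 100),
                   ("400 Gb/s InfiniBand-class link", 400),
                   ("NVLink-class fabric (~1.8 TB/s)", 14_400)]:
    t = allreduce_seconds(params, num_gpus=8, link_gbps=gbps)
    print(f"{name:32s}: {t:7.2f} s per gradient sync")
```

On these assumed numbers, a sync that takes tens of seconds over commodity Ethernet drops well under a second on an NVLink-class fabric, which is the gap the portfolio above is built to close.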
The Mellanox Acquisition: Jensen Huang’s Strategic Masterstroke
Nvidia’s networking ascendancy traces directly to its roughly $7 billion acquisition of Mellanox, completed in 2020. Founded in Israel in 1999, Mellanox had established itself as a leader in high-performance networking technology, particularly in the InfiniBand market. At the time of the acquisition, many industry observers viewed the move as complementary but secondary to Nvidia’s core GPU business. Six years later, the strategic rationale has become unmistakably clear.
Creating the Complete AI Infrastructure Stack
Kevin Deierling, senior vice president of networking at Nvidia who joined through the Mellanox acquisition, explained the transformation in a March 2026 interview. “When Jensen bought Mellanox, he saw that was the missing piece to make GPUs a complete package,” Deierling stated, echoing analysis from Zacks Investment Research. The integration allows Nvidia to offer optimized, full-stack solutions rather than individual components, creating significant competitive advantages in performance and reliability.
Deierling further elaborated on how perceptions of networking have evolved. “People think of networking as just, ‘I got a printer, and I need to connect to it,’” he remarked. “Jensen said this the first day when he acquired us: the data center is the new unit of computing. Networking is a lot more than just moving smaller amounts of data between compute nodes—it’s actually a foundation.” This philosophical shift reflects how AI has transformed networking from peripheral connectivity to central computational architecture.
Networking as the Backbone of AI Factories
The concept of “AI factories” represents perhaps the most significant architectural shift in computing since the advent of cloud infrastructure. These specialized data centers, designed specifically for training and running artificial intelligence models, require networking capabilities that traditional data centers never needed. In AI training, thousands of GPUs must communicate constantly during parallel processing, making network latency and bandwidth critical limiting factors.
Deierling provided a compelling analogy about this transformation: “It’s no longer a peripheral to connect the printer or some other slow I/O device. It’s fundamental to the computer. In the old days, we had what was called the backplane inside the computer. Today, the network is the backplane of the AI factory, and it’s super important.” This architectural shift explains why Nvidia’s networking business has grown alongside its GPU division rather than simply complementing it.
| Period | Revenue | Year-over-Year Growth | Key Driver |
|---|---|---|---|
| Q1 2025 | $3.0B | +85% | Early AI infrastructure build-out |
| Q4 2025 | $8.2B | +212% | Major cloud provider expansions |
| Q1 2026 | $11.0B | +267% | Full AI factory deployments |
| Full Year 2025 | $31.0B | +280% | Market leadership establishment |
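The quarterly figures above can be sanity-checked against each other: each row’s year-over-year percentage implies a year-ago revenue base. A quick script using only the table’s own numbers shows the Q1 2026 row is internally consistent with the Q1 2025 row:

```python
# Cross-check the table: back out the implied year-ago revenue from
# each quarter's reported year-over-year growth percentage.
rows = [("Q1 2025", 3.0, 85), ("Q4 2025", 8.2, 212), ("Q1 2026", 11.0, 267)]

for period, rev_b, yoy_pct in rows:
    base = rev_b / (1 + yoy_pct / 100)  # revenue one year earlier
    print(f"{period}: ${rev_b:.1f}B implies ~${base:.2f}B a year earlier")
```

The Q1 2026 row ($11.0B at +267%) implies a year-ago base of roughly $3.0B, matching the Q1 2025 figure in the table.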
Competitive Landscape and Market Implications
Nvidia’s networking success creates significant challenges for traditional networking companies while reshaping the broader technology ecosystem. The company’s approach differs fundamentally from conventional networking vendors in several respects. First, Nvidia sells primarily through partners rather than directly, leveraging existing relationships with major cloud providers and system integrators. Second, the company focuses exclusively on high-performance networking for AI and scientific computing rather than general enterprise networking.
This specialization creates both advantages and limitations. While Nvidia dominates the AI networking segment, traditional networking companies continue to serve the broader enterprise market. However, as AI workloads become more pervasive across industries, the boundary between specialized and general networking may blur. Industry analysts monitoring the situation in early 2026 note that Nvidia’s success has prompted increased investment in AI-optimized networking from competitors, though none have yet matched the company’s integrated compute-networking approach.
Recent Innovations and Future Trajectory
During his keynote address at Nvidia’s GTC technology conference on March 16, 2026, Jensen Huang announced significant updates to the company’s networking portfolio. These included the new Rubin platform with six specialized chips for AI supercomputers, an enhanced Inference Context Memory Storage platform, and more efficient Spectrum-X Ethernet Photonics switches. These announcements demonstrate Nvidia’s continued commitment to advancing both compute and networking capabilities in tandem.
The technological roadmap suggests networking will remain integral to Nvidia’s strategy. As AI models grow larger and more complex, networking requirements will become even more demanding. Future systems will likely require even tighter integration between computing and networking elements, potentially through technologies like silicon photonics and advanced packaging. Nvidia’s dual expertise in both domains positions the company uniquely to address these emerging challenges.
Conclusion
Nvidia’s networking business has evolved from a strategic acquisition to a foundational pillar of the company’s AI infrastructure strategy. The division’s remarkable growth—from a roughly $7 billion acquisition to $31 billion in annual revenue in just six years—demonstrates how artificial intelligence has transformed both computing architecture and business models. While Nvidia’s chips continue to receive most public attention, the networking business now represents a critical competitive moat and revenue stream that supports the company’s broader ambitions. As AI continues to reshape technology infrastructure, Nvidia’s integrated approach to compute and networking positions the company to maintain leadership in the evolving landscape of artificial intelligence infrastructure through 2026 and beyond.
FAQs
Q1: How large is Nvidia’s networking business compared to its chip division?
The networking business generated $11 billion in revenue in the first quarter of 2026, making it the company’s second-largest division behind compute. While smaller than the chip business, it has grown significantly faster recently.
Q2: What was the key acquisition that built Nvidia’s networking capabilities?
Nvidia acquired Mellanox in 2020 for $7 billion. Mellanox was an Israeli networking company founded in 1999 that specialized in high-performance InfiniBand technology.
Q3: How does Nvidia’s networking revenue compare to traditional networking companies?
Nvidia’s networking business generated more revenue in one quarter ($11 billion in Q1 2026) than Cisco’s entire networking business typically generates in a full year, according to equity analysts.
Q4: What technologies are included in Nvidia’s networking portfolio?
The portfolio includes NVLink for GPU communication, InfiniBand switches, Spectrum-X Ethernet for AI networking, co-packaged optics, and various in-network computing platforms.
Q5: Why is networking so important for artificial intelligence?
AI training requires thousands of GPUs to communicate constantly during parallel processing. Network latency and bandwidth directly impact training times, making optimized networking critical for AI factory efficiency.
This article was produced with AI assistance and reviewed by our editorial team for accuracy and quality.
