Nvidia’s Record-Breaking Revenue: What Investors Need to Know


Nvidia beat its quarterly revenue guidance by roughly $2 billion and is on track to double revenues this year. With strong demand for AI training and the upcoming Blackwell platform, Nvidia is poised for continued growth in AI markets. Learn more in this piece from Shaun Krom, our CIO at EasyAssetManagement.


Stock Outlook
Once again, Nvidia surpassed its revenue guidance for the quarter, this time by roughly $2 billion, continuing the pattern it has set since the start of the current LLM investment cycle. All indicators point to Nvidia doubling its revenues this year. TSMC, for example, indicated that its AI-related revenues would grow by 164% this year, reflecting the AI investments being made by Tesla and the hyperscalers.

The bullish case for Nvidia investors hinges on the fact that most current demand comes from the AI training market. Inference, the deployment of trained models to assist with real-world tasks, is expected to become the larger semiconductor market. Nvidia estimates that only about 40% of demand over the last twelve months has been for inference, suggesting we are still in the first phase of the AI GPU buildout. More growth is expected as Nvidia’s supply chain resolves capacity constraints in CoWoS (TSMC’s advanced packaging technology, which integrates multiple chips into a single package to boost performance and reduce latency) and HBM (high-bandwidth memory, which offers far greater bandwidth than traditional memory, enabling faster data transfer and better efficiency in high-performance computing).

We can also anticipate continuous upgrade cycles of training capacity to the latest state-of-the-art GPUs as AI companies seek an edge in retraining their models. Being a few months faster to market with a leading LLM can be the difference between dominating the industry, as OpenAI has, and remaining a marginal player. Under this scenario, previous-generation GPU training capacity can be redirected to the inference market.

Additionally, if the latest GPUs deliver significantly faster response times, upgrading to them would dramatically improve the user experience, much as users flocked to Google Search when it returned faster, more relevant results. Older GPUs can still run smaller language models for narrower tasks, such as powering an e-commerce store.

Even after Nvidia’s tremendous growth, we still see the prospect of a further buildout in the inference market, and there are compelling arguments that at least part of the installed capacity will transition to the next-generation Blackwell architecture.

Crucially for Nvidia, this inference buildout is already happening. Microsoft’s head of Cloud AI recently discussed at a JP Morgan conference how clients are deploying AI in real-world use cases:

"We look at demand in three aspects. First is customer demand, both from those in pilot phases and those transitioning to at-scale deployment. Real Madrid, for example, used AI to enhance their fan engagement platform, increasing their fan profile base by 400% and top-line revenue by 30%. Volvo digitized all of their invoices using cognitive services and generative AI, saving 850 manual hours per month. We also see increasing AI specializations among our ecosystem partners, with Azure AI services having 53,000 active customers, one-third of whom are new. Customer commitment is another key measure, with $100 million-plus contracts increasing 80% year-over-year."

Clients are also transitioning to larger cluster sizes. Nvidia’s CFO highlighted the importance of these clusters:

“Large clusters like those built by Meta and Tesla are essential AI production infrastructure, what we refer to as AI factories… In Q1, we worked with over 100 customers building AI factories, with some reaching 100,000 GPUs.”

These clients leverage Nvidia’s full stack of GPU, networking, and software solutions. Building a great GPU isn’t sufficient to compete with Nvidia. As a result, Nvidia anticipates demand continuing to outstrip supply in the coming year:

“While supply for H100 grew, we are still constrained on H200. Blackwell is in full production, and we are working to bring up our system and cloud partners for global availability later this year. Demand for H200 and Blackwell exceeds supply, and we expect this trend to continue into next year. Blackwell will be available in over 100 OEM and ODM systems at launch, supporting fast and broad adoption across various customer types and data center environments. Initial customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI.”

xAI, Elon Musk’s latest startup, aims to produce a truly open-source and truthful AI to compete with OpenAI’s ChatGPT and Google’s Gemini. Meta’s Zuckerberg is also pursuing open-source models with Llama, leveraging the global developer community.

Nvidia’s founder Jensen Huang discussed the Blackwell rollout and demand outlook:

“Our production shipments will start in Q2 and ramp in Q3, with data centers expected to be operational in Q4. Blackwell is a platform, not just a GPU, and it supports various configurations… Existing data centers shipping Hoppers can easily transition to H200 and B100. Blackwell systems are backward compatible, and our software stack will run seamlessly on Blackwell. We see increasing demand for Hopper and expect supply constraints to continue as we transition to H200 and Blackwell.”

Competitors argue that the lower pricing of their GPUs offers better value. However, Nvidia’s GPUs provide the best performance, lowest latency, and fastest time to market. Despite higher capex, customers benefit from Nvidia’s continuous R&D improvements.

Nvidia’s Ethernet Pivot
Nvidia has traditionally promoted InfiniBand as its preferred networking technology for connecting datacenter GPUs. However, Ethernet, which is more widely deployed and easier to operate, is closing the performance gap. The CEO of Arista Networks (a company that EasyAssetManagement is invested in through our bundles and unit trusts) highlighted Ethernet’s advancements in AI datacenters:

“In a recent blog from a major Cloud and AI Titan customer, Arista was recognized for building a 24,000 node GPU cluster using our flagship 7800 AI Spine. Ethernet has proven to offer at least a 10% improvement in job completion performance across all packet sizes compared to InfiniBand… We expect AI networking to continue evolving, with Ethernet emerging as critical infrastructure in both front-end and back-end AI datacenters.”

As a result, Nvidia is integrating Ethernet into its platform. Nvidia’s CFO discussed the company’s Ethernet business:

“In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution, optimized for AI… Spectrum-X delivers 1.6x higher networking performance for AI processing compared with traditional Ethernet. It is ramping up with multiple customers, including a massive 100,000 GPU cluster… Spectrum-X opens a new market for Nvidia networking, and we expect it to become a multibillion-dollar product line within a year.”

It looks to us as though Nvidia will continue to lead the AI revolution for some time to come, and given this growth the stock does not appear unreasonably expensive.






Any opinions, news, research, reports, analyses, prices, or other information contained within this research is provided by an external contributor as general market commentary and does not constitute investment advice for the purposes of the Financial Advisory and Intermediary Services Act, 2002. First World Trader (Pty) Ltd t/a EasyEquities (“EasyEquities”) does not warrant the correctness, accuracy, timeliness, reliability or completeness of any information (i) contained within this research and (ii) received from third party data providers. You must rely solely upon your own judgment in all aspects of your investment and/or trading decisions and all investments and/or trades are made at your own risk. EasyEquities (including any of their employees) will not accept any liability for any direct or indirect loss or damage, including without limitation, any loss of profit, which may arise directly or indirectly from use of or reliance on the market commentary. The content contained within is subject to change at any time without notice.

 


