AI Datacenters and Storage Opportunities
According to Bernstein, the storage opportunity in AI datacenters is limited compared to servers, especially since large language models (LLMs) require relatively little storage.
Insights from a Recent Webinar
In a recent webinar featuring David Hall, former VP of Infrastructure at Lambda, Bernstein explored trends in the AI cloud market.
Hall estimated that storage accounts for only 8-12% of the cost of a model-training GPU cluster, reinforcing Bernstein's belief that storage will be less significant than servers in AI infrastructure.
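As a rough illustration of what that share implies, the sketch below applies Hall's 8-12% estimate to a hypothetical cluster budget; the $100M figure is an assumption chosen purely for the example.

# Illustrative only: the 8-12% storage share is Hall's estimate from the webinar;
# the total cluster budget below is a hypothetical figure for this example.
CLUSTER_BUDGET_USD = 100_000_000  # hypothetical total spend on a training cluster
STORAGE_SHARE_LOW, STORAGE_SHARE_HIGH = 0.08, 0.12

storage_low = CLUSTER_BUDGET_USD * STORAGE_SHARE_LOW
storage_high = CLUSTER_BUDGET_USD * STORAGE_SHARE_HIGH
print(f"Implied storage spend: ${storage_low:,.0f} to ${storage_high:,.0f}")
# Implied storage spend: $8,000,000 to $12,000,000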
Storage Demand in AI
While models trained on images and videos require more storage because their datasets are larger, overall storage demand in AI datacenters still trails spending on other components, particularly servers.
Bernstein highlighted the importance of balancing features against costs when selecting a storage provider. Lambda works primarily with providers such as DDN, Vast, and WEKA, passing on solutions from NetApp, Dell, and Pure Storage because it finds the preferred vendors' features superior.
GPU Lifespan and Upgrades
During the discussion, Hall mentioned that GPUs today have lifespans of approximately 7-9 years, indicating that fully depreciated chips can still be valuable. He also noted that the new Blackwell GPUs offer performance improvements of 60-200% at a 30-40% price increase compared to Nvidia’s Hopper. However, not all applications require the latest technology; many tasks can be efficiently handled with older-generation GPUs like Ampere or P-series models.
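Taking Hall's figures at face value, a back-of-the-envelope performance-per-dollar comparison helps show why older GPUs can still make sense. The sketch below normalizes Hopper to 1.0 for both performance and price (an assumption for illustration) and applies the 60-200% performance and 30-40% price ranges quoted above.

# Back-of-the-envelope performance-per-dollar comparison.
# Hopper is normalized to 1.0 for performance and price (an assumption);
# the Blackwell ranges come from the webinar figures cited above.
hopper_perf, hopper_price = 1.0, 1.0

blackwell_perf_range = (1.6, 3.0)   # 60% to 200% faster than Hopper
blackwell_price_range = (1.3, 1.4)  # 30% to 40% more expensive than Hopper

worst = blackwell_perf_range[0] / blackwell_price_range[1]
best = blackwell_perf_range[1] / blackwell_price_range[0]
print(f"Hopper perf per dollar: {hopper_perf / hopper_price:.2f}")
print(f"Blackwell perf per dollar: {worst:.2f} to {best:.2f}")
# Blackwell perf per dollar: 1.14 to 2.31, i.e. better value per dollar,
# though workloads that don't need the extra throughput can still run
# economically on older-generation hardware.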
Nvidia’s Dominance
Bernstein emphasized Nvidia's (NASDAQ: NVDA) dominance of the software layer through CUDA and cuDNN, which are key differentiators in the AI landscape. Although emerging startups with custom GPUs may challenge Nvidia's market share, Bernstein asserts that software remains crucial to success, calling the software layer the most important factor determining the competitive GPU landscape.