Lambda intends to significantly expand its on-demand cloud platform, which is designed for AI and machine learning workloads. The newly secured funding will be used to purchase Nvidia H200 chips and the latest Blackwell AI chips, including the B200 and GB200 models.
In November last year, Nvidia announced that Lambda Labs would be among the first cloud service providers to offer customers access to H200 chips. Today's news underscores the deepening collaboration between the two companies, with the latest chip purchases further strengthening the partnership.
Lambda is following in the footsteps of its "big brother," CoreWeave, a leading provider of cloud infrastructure for AI. In August last year, CoreWeave raised $2.3 billion in debt financing, collateralized by Nvidia accelerators, which lifted its valuation to $8 billion. Today, CoreWeave is in discussions for further funding at a $16 billion valuation, stating that demand for AI data centers has been vastly underestimated.