Nvidia’s (NVDA) licensing deal with chip startup Groq (GROQ.PVT) shows how the tech giant is leveraging its massive cash pile to sustain its preeminence in the AI market.
Nvidia this week said it struck a non-exclusive deal to license Groq’s technology and hired the startup’s founder and CEO, Jonathan Ross, along with its president and other employees. CNBC reported the agreement is worth $20 billion, which would make it Nvidia’s largest-ever deal. (The company declined a request for comment on the figure.)
Bernstein analyst Stacy Rasgon said in a note to clients Thursday that the Nvidia-Groq deal “appears strategic in nature for NVDA as they leverage their increasingly powerful balance sheet to maintain dominance in key areas.” Nvidia generated $22 billion in free cash flow in its most recent quarter, up more than 30% from the prior year.
“This transaction is … essentially an acquisition of Groq without being labeled one (to avoid the regulators’ scrutiny),” added Hedgeye Risk Management analysts in a note Friday.
The move is just the latest in a string of AI deals by Nvidia, the world’s first $5 trillion company. The chipmaker’s investments in AI firms span the entire market, ranging from large language model developers such as OpenAI (OPAI.PVT) and xAI (XAAI.PVT) to “neoclouds” like Lambda (LAMD.PVT) and CoreWeave (CRWV), which specialize in AI cloud computing and compete with Nvidia’s Big Tech customers.
Nvidia has also invested in chipmakers Intel (INTC) and Enfabrica, and in 2020 it agreed to acquire British chip architecture designer Arm (ARM), a deal it abandoned in 2022 amid regulatory opposition.
Nvidia’s wide-ranging investments — many of them in its own customers — have led to accusations that it’s involved in circular financing schemes reminiscent of the dot-com bubble. The company has vehemently denied those claims.
Groq, meanwhile, was looking to become one of Nvidia’s rivals.
Founded in 2016, Groq makes LPUs (language processing units) geared toward AI inferencing and marketed as alternatives to Nvidia’s GPUs (graphics processing units).
Training an AI model involves teaching it to recognize patterns in large amounts of data, while “inferencing” refers to using that trained model to generate outputs. Both processes demand massive computing power from AI chips.
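To make the distinction concrete, here is a minimal, hypothetical sketch in PyTorch (an illustration for readers, not code from Nvidia or Groq; the model and data are toy placeholders). A training step pushes a batch of data through a model, measures the error, and updates the model’s weights; an inference call simply runs the already-trained model forward:

```python
# Minimal sketch of training vs. inference, assuming the PyTorch library.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                 # toy stand-in for a large AI model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: learn patterns from labeled data by repeatedly updating weights.
inputs = torch.randn(32, 128)              # a batch of example inputs
labels = torch.randint(0, 10, (32,))       # the "right answers" for that batch
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)      # how wrong is the model right now?
loss.backward()                            # compute gradients (the expensive part)
optimizer.step()                           # nudge the weights to reduce the error

# Inference: weights are frozen; the trained model just generates outputs.
model.eval()
with torch.no_grad():                      # no gradients, so less compute per query
    prediction = model(torch.randn(1, 128)).argmax(dim=1)
```

Training repeats that update loop at enormous scale across huge datasets, while inference runs only the forward pass for each user request, which is why chipmakers increasingly target the two workloads with different hardware.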
While Nvidia easily dominates the chip market for AI training, some analysts argue the company could soon face greater competition in inference. That’s because custom chips like Google’s (GOOG) TPUs (tensor processing units), and arguably Groq’s LPUs, may be better suited for certain tasks. LPUs, for instance, can be faster and more energy efficient for certain models because they keep data in fast on-chip memory called SRAM (static random-access memory). Nvidia’s GPUs, by contrast, rely on off-chip high-bandwidth memory (HBM) made by companies like Micron (MU) and Samsung (005930.KS).
