Groq Secures $640 Million in Funding to Challenge Nvidia in the AI Chip Industry

Groq has raised $640 million in a Series D funding round led by BlackRock, with participation from Neuberger Berman, Type One Ventures, Cisco, KDDI and the Samsung Catalyst Fund. The investment brings Groq’s total funding to over $1 billion and boosts the company’s valuation to $2.8 billion.



The $640 million Series D round surpassed Groq’s initial fundraising target of $300 million.

The new funding round lifts Groq’s valuation to $2.8 billion, nearly tripling the roughly $1.1 billion valuation it reached in April 2021.

Meta’s chief AI scientist, Yann LeCun, will serve as a technical advisor, and Stuart Pann, former head of Intel’s foundry business and ex-CIO at HP, has joined as COO.

Founded in 2016, Groq emerged from stealth mode with a new kind of chip: the Language Processing Unit (LPU) inference engine.

The company claims that its LPUs can run existing generative AI models, comparable to OpenAI’s ChatGPT and GPT-4o, at ten times the speed and one-tenth the energy consumption of traditional processors.

LPUs are designed to accelerate AI workloads by providing faster and more energy-efficient processing capabilities compared to conventional graphics processing units (GPUs).

Groq provides a developer platform called GroqCloud, which hosts open models such as Meta’s Llama 3.1, Google’s Gemma, OpenAI’s Whisper and Mistral’s Mixtral.

The platform also provides an API that lets customers use Groq’s chips in cloud instances. GroqCloud has attracted over 356,000 developers, a large share of them from big enterprises, including 75% of the Fortune 100 companies.
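As a rough illustration of how developers work with such a cloud API, the sketch below builds (but does not send) a chat-completion request against GroqCloud. The endpoint path and model name are assumptions based on GroqCloud following the common OpenAI-compatible request shape; consult the official documentation before use.

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against GroqCloud's docs.
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct (without sending) a chat-completion request for GroqCloud."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        GROQ_API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Inspect the request without making a network call.
req = build_chat_request("YOUR_API_KEY", "llama-3.1-8b-instant", "Hello, Groq!")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would return a JSON chat completion, assuming a valid API key and model name.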

Jonathan Ross founded Groq in 2016 with the vision of creating AI chips explicitly designed for inference. Inference refers to the AI process of applying learned knowledge to new situations such as recognizing images or generating responses.
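To make the training/inference distinction concrete, here is a minimal sketch of what inference means computationally: the weights below are hypothetical values standing in for parameters learned during training, and inference simply applies them to a new input.

```python
import math

# Hypothetical weights "learned" during training; inference only applies them.
WEIGHTS = [0.8, -0.4]
BIAS = 0.1

def infer(features):
    """Apply the learned model to a new input -- no learning happens here."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1 / (1 + math.exp(-z))  # probability of the positive class

score = infer([1.0, 2.0])  # scoring an input the model has never seen
print(score)
```

Inference workloads like this (scaled up to billions of weights) are what LPUs are built to accelerate; updating the weights themselves is the training step, which happens elsewhere.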

Initially, the company struggled to gain traction and came close to collapse several times, including a critical moment in 2019 when the startup was just a month away from running out of funds.

The release of ChatGPT by OpenAI in late 2022 ignited a global AI frenzy, increasing demand for high-performance AI inference.

This was the turning point for the company, as the previously niche market for fast inference suddenly surged. In February 2024, a viral tweet about the chatbot’s performance drove so much traffic that Groq’s AI chatbot lagged unexpectedly during a critical demonstration in Oslo.

On August 5, 2024, the company announced a $640 million Series D funding round, increasing its valuation from $1.1 billion in 2021 to $2.8 billion.

This round was led by BlackRock Private Equity Partners and included participation from Cisco Investments, Samsung Catalyst Fund and others.

The funding will be used to scale up operations, expand its team and accelerate the development of its next-generation Language Processing Units (LPUs).

The company also plans to deploy over 100,000 additional LPUs into GroqCloud, its cloud-based platform for AI inference.


LPUs are designed from the ground up for AI inference. Ross claims that Groq’s LPUs are four times faster, five times cheaper and three times more energy-efficient than Nvidia’s GPUs in inference tasks.

LPUs feature a software-first design that optimizes them specifically for AI applications. This tailored architecture allows Groq to offer higher speed and efficiency in AI inference.

Nvidia, with a market cap of around $3 trillion and a dominant share of the AI chip market, faces competition from emerging players like Groq.

Nvidia’s GPUs were originally designed for graphics-intensive tasks and have become a cornerstone of AI due to their versatile computational power.

Since launching GroqCloud, the company has seen explosive growth, with more than 350,000 developers now using the platform.

Initially offering free access, the company recently began charging for its services, and a quarter of its customer base is already seeking more compute power.

Nvidia currently controls 70% to 95% of the AI chip market, with a focus on releasing new AI chip architectures annually to maintain its market leadership.

Groq also competes with major tech companies like Amazon, Google and Microsoft, all of which are developing custom chips for AI workloads.

Amazon provides AI-focused processors such as Trainium, Inferentia and Graviton through AWS. Google provides Tensor Processing Units (TPUs) and is developing the Axion chip.

Microsoft introduced Azure instances with the Cobalt 100 CPU and the upcoming Maia 100 AI Accelerator.

Groq also faces competition from other startups, including D-Matrix, which raised $110 million for its inference compute platform, and Etched, which secured $120 million for a processor optimized for transformer-based AI models.

In March, Groq acquired Definitive Intelligence to establish Groq Systems, a business unit targeting enterprise clients, US government agencies and sovereign nations.

Groq aims to expand its reach among large enterprises, with over 75% of Fortune 100 companies represented in its developer community.

Groq is collaborating with Samsung’s foundry business to manufacture 4nm LPUs, expected to improve performance and efficiency compared to the first-generation 13nm chips.

Groq plans to deploy over 108,000 LPUs by the end of Q1 2025, with a target of reaching 1.5 million deployed chips by the end of 2025.

