MatX Raises $500M to Build AI Chips That Beat Nvidia

MatX, a semiconductor startup founded by two former Google engineers, has raised more than $500 million in a Series B funding round to develop and produce a new class of AI chip designed specifically for large language models. The round was led by Jane Street and Situational Awareness, an investment fund established by former OpenAI researcher Leopold Aschenbrenner. Additional investors include Marvell Technology, Spark Capital, NFDG, Alchip Technologies, and Stripe co-founders Patrick and John Collison, among others.
The company was founded in 2023 by Reiner Pope and Mike Gunter, who both departed Google in 2022 after years working on the company’s tensor processing units. Pope led software development for Google’s TPUs, while Gunter was their lead hardware designer. That shared background gives MatX a depth of chip-building experience rarely found in early-stage semiconductor startups.
MatX’s first product, the MatX One, is built around an architecture called a splittable systolic array. This design addresses a long-standing trade-off in AI chip development: large systolic arrays offer excellent energy and area efficiency but struggle with the smaller, irregular matrix shapes that are common in LLM workloads. The splittable approach allows the chip to maintain high utilization across a range of matrix sizes, making it more flexible than traditional systolic designs.
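The trade-off can be made concrete with a toy utilization model. The sketch below is purely illustrative and not based on MatX’s actual architecture: it assumes a square systolic array tiled over matrix operands, and the array sizes (a monolithic 256×256 array versus the same processing-element budget split into 64×64 sub-arrays) are hypothetical.

```python
# Illustrative sketch (hypothetical sizes, not MatX's design): compare
# compute utilization of one large fixed systolic array against the same
# silicon split into smaller independent sub-arrays.
import math

def utilization(rows, cols, array_dim):
    """Fraction of processing elements doing useful work when a
    (rows x cols) operand is tiled onto a square array of side array_dim."""
    tiles = math.ceil(rows / array_dim) * math.ceil(cols / array_dim)
    return (rows * cols) / (tiles * array_dim * array_dim)

# Large square matmuls keep any array busy; small or skinny shapes,
# common in LLM inference, waste most of a monolithic array.
for rows, cols in [(4096, 4096), (96, 512), (48, 48)]:
    big = utilization(rows, cols, 256)     # one 256x256 array
    split = utilization(rows, cols, 64)    # sixteen 64x64 sub-arrays
    print(f"{rows}x{cols}: monolithic {big:.0%}, split {split:.0%}")
```

Under this simple model, a 4096×4096 multiply fully occupies either configuration, but a 48×48 operand leaves a monolithic 256×256 array about 96% idle while a 64×64 sub-array stays over half busy, which is the flexibility argument in miniature.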
The MatX One also integrates two memory technologies that chip designers have typically kept separate. High-bandwidth memory, or HBM, is used by leading chips such as Nvidia’s GPUs to handle the massive data volumes required during model training. Static random access memory, or SRAM, is favored by inference-focused chip designs for its low latency when processing individual user queries. MatX combines both in a single product, with the stated goal of delivering strong performance across training, prefill, decode, and reinforcement learning workloads without forcing customers to choose between speed and scale.
The company claims the MatX One outperforms Nvidia’s upcoming Rubin Ultra processor on a measure of computing performance per square millimeter of chip area. Its broader target is to deliver throughput and latency results ten times better than Nvidia’s current GPU lineup for LLM workloads. MatX has stated it is willing to sacrifice performance on smaller models and low-volume tasks in order to optimize exclusively for frontier-scale LLM applications, a deliberate narrowing of scope that separates it from general-purpose chip competitors.
MatX plans to manufacture its chips through Taiwan Semiconductor Manufacturing Company. The company expects to complete its final chip design in 2026, with customer shipments targeted to begin in 2027. A portion of the new funding will be directed toward reserving TSMC manufacturing capacity and securing supply chain components ahead of production, ensuring the company can scale quickly once the design is ready.
The company currently employs around 100 people and is not building a large sales organization. Its intended customers are a small number of leading AI laboratories, including developers operating at the scale of OpenAI and Anthropic. Top AI developers are increasingly sourcing compute from multiple chip suppliers rather than relying entirely on Nvidia, a shift that opens a realistic path to market for specialized alternatives.
This is MatX’s largest raise to date. The company previously secured a Series A of approximately $100 million in 2024, led by Spark Capital, at a reported valuation above $300 million. MatX has not disclosed a precise valuation for the new round but has confirmed it values the company in the billions of dollars. Its closest competitor, Etched, raised $500 million at a $5 billion valuation in January 2026, a reference point for how the market is pricing specialized LLM chip startups at this stage of development.



