
The End of Substrate

We’ve decided to pivot from building Substrate, and spin down the product. This was a difficult decision – many individuals and companies expressed a lot of excitement about our simple, unified take on AI infra. If you’re an active Substrate user, look out for an email with details – we'll work directly with your team to help make the migration off Substrate as smooth as possible.

We aren’t quite ready to share what we’re building next – but it’s related to synthetic software, and we’re using it to solve ARC-AGI. (Come talk to us in our new Discord to ask us about it or apply for beta access.)

A few reflections on the decision:

  • Compound AI systems are undoubtedly part of the future, and we still believe these systems should be described as computation graphs, executed with optimal parallelism and minimal IPC latency.
  • The core of our product was the inference service. At scale, inference can be a pretty good business, but it's a crowded space. We were primarily up against open-source vendors (Together, Fireworks, Fal, etc.), but the menu of options for a given use case also includes closed-source vendors (OpenAI, Anthropic, Deepgram, etc.) and hyperscalers (Azure, Google, AWS). We had a differentiated offering: we were the first multi-step inference engine (and we hope to inspire additional players in this space, like Baseten Chains). The benefits of our approach were clear to some large users, like Substack. But as we chatted with more large users post-launch, we realized it would be challenging to reach the level of scale required to make the inference business work.
  • Beyond inference, Substrate was a framework for describing distributed remote computation, deeply integrated with a hybrid inference & workflow execution engine, managed vector storage, and a built-in code interpreter. We considered a softer pivot: abandoning inference and focusing on Vercel-like white-labeling and aggregation of existing services.
  • We may open-source the core of Substrate for enterprises interested in deploying open-source LMs and custom models on their own GPU infra – with ergonomic auto-generated SDKs for product teams. If this sounds compelling, talk to us.
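To make the "computation graph" idea concrete, here is a minimal sketch of executing a dependency graph with maximal parallelism: each node starts as soon as all of its dependencies finish, so independent branches run concurrently. All names here (`run_graph`, `fake_model`, the pipeline steps) are illustrative, not Substrate's actual API.

```python
import asyncio

async def run_graph(nodes, deps, run):
    """Run each node as soon as all of its dependencies have finished,
    so independent branches execute concurrently."""
    results = {}
    done = {n: asyncio.Event() for n in nodes}

    async def run_node(n):
        # Wait only on this node's own dependencies, not the whole graph.
        await asyncio.gather(*(done[d].wait() for d in deps.get(n, [])))
        inputs = {d: results[d] for d in deps.get(n, [])}
        results[n] = await run(n, inputs)
        done[n].set()

    await asyncio.gather(*(run_node(n) for n in nodes))
    return results

# Hypothetical three-step pipeline: two independent "model calls"
# feeding a final aggregation step.
async def fake_model(node, inputs):
    await asyncio.sleep(0.01)  # stand-in for a remote inference call
    return f"{node}({', '.join(sorted(inputs))})"

deps = {"summarize": ["draft_a", "draft_b"]}
out = asyncio.run(run_graph(["draft_a", "draft_b", "summarize"], deps, fake_model))
print(out["summarize"])  # → summarize(draft_a, draft_b)
```

In a real system the scheduler would also batch co-located nodes onto the same machine to minimize IPC latency, which is the harder part of the problem.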

We've published a series of articles distilling some of our learnings along the way, and we'll continue writing publicly as we build the next product. Onward!
