Liszt AI Pre-Seed
- We are building Liszt AI, which makes LLM serving programmable for agentic workflows.
- Today’s inference servers were built for chat: prompt in, tokens out.
- Agents are different. They branch, call tools, retry, verify, and reuse context.
- Forcing agents through a chat API creates extra round trips, repeated prefills, and inefficient KV-cache management.
- Pie, our serving system, lets agent logic run inside the server as programs called inferlets.
- Inferlets use workflow knowledge to control the KV cache, I/O, and forward passes directly.
- The result: higher throughput, lower latency, lower cost, and better answers.
- Pie grew out of a research project at Yale CS and was first described at SOSP 2025.
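To make the prefill savings concrete, here is a toy sketch of the idea. This is not Pie's real API; every name here (KVCache, branches_via_chat_api, branches_via_inferlet) is invented for illustration. It compares a stateless chat API, which re-prefills a shared prompt prefix for every branch of an agent, with in-server logic that prefills the prefix once and forks the cache.

```python
# Toy model of prefill cost. All names are hypothetical, invented for
# this sketch; they are not Pie's actual interface.

class KVCache:
    """Toy KV cache that only counts how many tokens were prefilled."""
    def __init__(self):
        self.prefilled_tokens = 0

    def prefill(self, n_tokens):
        self.prefilled_tokens += n_tokens


def branches_via_chat_api(prefix_len, branch_lens):
    # A stateless chat API: each branch is a fresh request, so the shared
    # prefix is re-prefilled every time.
    total = 0
    for branch_len in branch_lens:
        cache = KVCache()
        cache.prefill(prefix_len + branch_len)
        total += cache.prefilled_tokens
    return total


def branches_via_inferlet(prefix_len, branch_lens):
    # In-server agent logic: prefill the shared prefix once, then fork the
    # cache per branch, so only branch-specific tokens are computed.
    shared = KVCache()
    shared.prefill(prefix_len)
    total = shared.prefilled_tokens
    for branch_len in branch_lens:
        total += branch_len  # forked caches reuse the shared prefix
    return total


# A 1000-token shared prompt with four 50-token branches:
chat_cost = branches_via_chat_api(1000, [50, 50, 50, 50])      # 4200 tokens
inferlet_cost = branches_via_inferlet(1000, [50, 50, 50, 50])  # 1200 tokens
```

Under these toy numbers the chat-API path prefills 4200 tokens versus 1200 for the forked cache, which is the kind of waste that grows with branching, retries, and verification loops.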
What we are raising to build
- Turn Pie into the serving stack for agentic workloads.
- Ship built-in inferlets for common agent workflows.
- Grow the open-source developer and user community.
- Work with teams running agents on open models.
We are looking for investors who can help with
- Introductions to AI infra teams and agent builders.
- Open-source growth and developer adoption.
- Design partners and early customers.
- Hiring strong systems and product talent.
Request a meeting
We are now taking pre-seed investor conversations.
Book an investor meeting