OpenAI Symphony: Task Boards to Autonomous Code

OpenAI open-sources Symphony, a system that monitors project boards like Linear, spawns coding agents to implement tasks, and submits pull requests with CI verification and video walkthroughs.

OpenAI has released Symphony, an open-source orchestration system that connects project management boards to autonomous coding agents. Published on GitHub under the Apache 2.0 license, Symphony watches a task board — Linear, GitHub Issues, or similar — and spawns agents that implement each task, run tests, and submit pull requests for human review.

It's a quiet release. No blog post, no launch event — just a GitHub repo with a spec document and a reference implementation in Elixir. But the idea is significant: instead of a developer supervising an AI coding assistant, teams manage a work board while agents handle the coding autonomously.

Boards In, Pull Requests Out

Symphony introduces what OpenAI describes as a shift from "managing individual agent tasks to managing broader work objectives." The system operates at the project management level rather than the code editor level. You don't write prompts for a coding agent — you write task tickets on a board, and Symphony handles the rest.

Each completed task comes with proof of work: CI/CD status, code review feedback, complexity analysis, and video walkthroughs that explain what the agent did and why. This is meant to give human reviewers enough context to approve or reject pull requests without needing to trace through every line of generated code.
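OpenAI hasn't published the exact schema for these proof-of-work bundles. As a purely hypothetical sketch (all field names are illustrative, not Symphony's actual format), the evidence attached to a PR might look like:

```python
# Hypothetical proof-of-work bundle attached to a Symphony PR.
# Field names and structure are illustrative guesses, not the project's schema.
proof_of_work = {
    "ci_status": "passed",                                    # CI/CD verification result
    "review_comments": ["Consider extracting the retry helper."],  # automated code review
    "complexity": {"files_changed": 3, "lines_added": 120},   # complexity analysis
    "walkthrough_video": "https://example.com/sym-1.mp4",     # placeholder URL
}
```

The point of such a bundle is that a reviewer can make an approve/reject call from the evidence alone, without re-deriving the agent's reasoning line by line.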

How It Works

The architecture is spec-driven. A SPEC.md file defines the project's rules, coding standards, and constraints. Symphony reads from this spec when spawning agents, ensuring all generated code follows the project's conventions.
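The repository doesn't reproduce a full SPEC.md here, so the following is an invented illustration of what a spec-driven rules file for a project might contain, not Symphony's documented format:

```markdown
# SPEC.md (hypothetical example)

## Coding standards
- Elixir code follows the default `mix format` settings.
- Public functions require `@doc` and `@spec` annotations.

## Constraints
- No new runtime dependencies without an approved ticket.
- All database access goes through the existing Repo module.

## Verification
- `mix test` must pass before a PR is opened.
```

Because every agent reads the same spec before touching code, conventions are enforced at generation time rather than caught later in review.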

The workflow is: task appears on board → Symphony picks it up → agent reads the spec and task description → agent implements the code → agent runs tests → agent submits a PR with documentation → human reviews and merges (or rejects).

The reference implementation is written almost entirely in Elixir (94.9%), with small amounts of Python and CSS. OpenAI provides two paths: build your own implementation from the SPEC.md specification, or use their reference Elixir application as a starting point.

Not Ready for Production

OpenAI is upfront about the project's maturity. The README describes it as a "low-key engineering preview for trusted environments only." This isn't production software — it's a proof of concept that demonstrates a particular approach to agent orchestration.

The project has just two contributors (both OpenAI engineers) and, at the time of writing, about 4,900 stars and 296 forks. There's no indication of when (or if) this will become a supported product. For now, it's a research artifact and a blueprint for teams who want to build something similar.

What If PMs Managed Agents?

The core insight behind Symphony is worth paying attention to even if the tool itself isn't ready for production. Most AI coding tools today operate at the "human sits with agent" level — you prompt, review, iterate, repeat. Symphony proposes a different model: humans work at the project management layer, agents work at the implementation layer, and pull requests are the interface between them. Models like GPT-5.4 with native computer-use capabilities could power this kind of orchestration at scale.

If this pattern proves viable, it could change how engineering teams scale. Instead of hiring more developers, you might add more agent capacity to your Symphony deployment. The human bottleneck moves from writing code to writing clear task specifications and reviewing PRs — skills that are arguably more about engineering judgment than programming ability. Tools like Google Workspace CLI that give agents API access to essential infrastructure will become prerequisites for this workflow.

The code is available on GitHub.