
Ultrawork Manifesto

Human Intervention is a Failure Signal

HUMAN IN THE LOOP = BOTTLENECK
HUMAN IN THE LOOP = BOTTLENECK
HUMAN IN THE LOOP = BOTTLENECK

Think about autonomous driving. When a human has to take over the wheel, that's not a feature - it's a failure of the system. The car couldn't handle the situation on its own.

Why is coding any different?

When you find yourself:

  • Correcting the agent's half-finished code by hand
  • Re-explaining the same requirement for the third time
  • Watching every step of a multi-file change, ready to grab the wheel

...that's not "human-AI collaboration." That's the AI failing to do its job.

Oh My OpenCode is built on this premise: Human intervention during agentic work is fundamentally a wrong signal. If the system is designed correctly, the agent should complete the work without requiring you to babysit it.


Indistinguishable Code

Goal: Code written by the agent should be indistinguishable from code written by a senior engineer.

Not "AI-generated code that needs cleanup." Not "a good starting point." The actual, final, production-ready code.

This means:

  • It follows the codebase's existing conventions and architecture
  • It is verified: tests pass, diagnostics are clean
  • It ships with no leftover scaffolding, stray comments, or "TODO: clean up"

If you can tell whether a commit was made by a human or an agent, the agent has failed.


Token Cost vs. Productivity

Higher token usage is acceptable if it significantly increases productivity.

Using more tokens to:

  • Research official docs before implementing
  • Run parallel agents on independent subtasks
  • Verify the work and self-correct instead of shipping something broken

...is a worthwhile investment when it means 10x, 20x, or 100x productivity gains.

However, unnecessary token waste is not pursued. The system optimizes for:

  • Value produced per token, not raw token count
  • Routing each task to the model that fits it, not the most expensive one

Token efficiency matters. But not at the cost of work quality or human cognitive load.


Minimize Human Cognitive Load

The human should only need to say what they want. Everything else is the agent's job.

Two approaches to achieve this:

Approach 1: Ultrawork

Just say "ulw" and walk away.

You say: ulw add authentication

The agent autonomously:

  • Analyzes your codebase patterns and architecture
  • Researches best practices from official docs
  • Plans the implementation strategy internally
  • Implements following your existing conventions
  • Verifies with tests and LSP diagnostics
  • Self-corrects when something goes wrong
  • Keeps pushing the boulder until the work is 100% complete

Zero intervention. Full autonomy. Just results.
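The autonomous loop above can be sketched in a few lines. This is an illustrative toy, not the actual Oh My OpenCode implementation: the "task" is deliberately trivial (build a list that sums to a target) so that the control flow — implement, verify, keep going, escalate only on true failure — is the whole point.

```python
# A minimal, self-contained sketch of the implement/verify/self-correct
# loop. All names here are hypothetical; only the shape of the loop
# mirrors the manifesto: never stop early, never ask the human mid-task.

def ultrawork(target: int, max_attempts: int = 10) -> list[int]:
    """Keep working until verification passes; never return half-done."""
    work: list[int] = []                # the "implementation" so far
    for _ in range(max_attempts):
        work.append(1)                  # implement: take another step
        if sum(work) == target:         # verify: do the "tests" pass?
            return work                 # done — zero human intervention
    # Only a true failure reaches the human.
    raise RuntimeError("true failure: only now involve the human")

print(ultrawork(3))  # → [1, 1, 1]
```

The human appears exactly once, at the `RuntimeError`: intervention is the exception path, not the workflow.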

Approach 2: Prometheus + Atlas

When you want strategic control.

You say: Prometheus agent, add authentication

Prometheus (Strategic Planner):

  • Conducts deep codebase research via parallel agents
  • Interviews you with intelligent, contextual questions
  • Identifies edge cases and architectural implications
  • Generates a detailed YAML work plan with dependencies

Atlas (Master Orchestrator):

  • Executes the plan via /start-work
  • Delegates tasks to specialized agents (Oracle, Frontend Engineer, etc.)
  • Manages parallel execution waves for efficiency
  • Tracks progress, handles failures, ensures completion

You architect. Agents execute. Full transparency.
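The "parallel execution waves" idea above is plain dependency scheduling. A sketch, under assumptions: the plan schema below (task names mapped to their dependencies) is invented for illustration and is not the actual Prometheus YAML format, but it shows how an Atlas-style orchestrator could layer a plan into waves of tasks that are safe to run concurrently.

```python
# Hypothetical plan: each task lists the tasks it depends on.
plan = {
    "add-auth-models": [],
    "add-auth-routes": ["add-auth-models"],
    "add-auth-ui":     ["add-auth-routes"],
    "add-auth-tests":  ["add-auth-routes"],
}

def waves(tasks: dict[str, list[str]]) -> list[list[str]]:
    """Topologically layer tasks: every task in a wave can run in parallel."""
    done: set[str] = set()
    order: list[list[str]] = []
    while len(done) < len(tasks):
        wave = sorted(t for t, deps in tasks.items()
                      if t not in done and all(d in done for d in deps))
        if not wave:                    # nothing runnable left: bad plan
            raise ValueError("dependency cycle in plan")
        order.append(wave)
        done.update(wave)
    return order

print(waves(plan))
# → [['add-auth-models'], ['add-auth-routes'], ['add-auth-tests', 'add-auth-ui']]
```

The UI and test tasks land in the same wave because neither depends on the other — that is the efficiency Atlas is after.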

In both cases, the human's job is to express what they want, not to manage how it gets done.


Predictable, Continuous, Delegatable

The ideal agent should work like a compiler: markdown document goes in, working code comes out.

Predictable

Given the same inputs:

  • The same request
  • The same codebase
  • The same work plan

...the output should be consistent. Not random, not surprising, not "creative" in ways you didn't ask for.

Continuous

Work should survive interruptions:

  • A crashed session resumes where it left off
  • A context-window limit doesn't erase progress
  • Tomorrow's session picks up today's plan

The agent maintains state. You don't have to.
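"The agent maintains state" can be sketched as a checkpoint file: completed steps are persisted outside the session, so a restarted run skips what is already done instead of starting over. The file name and step list below are illustrative assumptions, not the actual mechanism.

```python
# Interruption-safe progress via a checkpoint file (illustrative only).
import json
import os
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "ultrawork-checkpoint.json")
STEPS = ["analyze", "plan", "implement", "verify"]

def load_done() -> list[str]:
    """Read the steps finished by any previous (possibly crashed) run."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return []

def run() -> list[str]:
    done = load_done()                  # survive a crash or restart
    for step in STEPS:
        if step in done:
            continue                    # finished before the interruption
        # ... do the actual work for `step` here ...
        done.append(step)
        with open(CHECKPOINT, "w") as f:
            json.dump(done, f)          # state lives outside the session
    return done
```

Kill the process after any step and call `run()` again: it resumes from the checkpoint rather than redoing finished work.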

Delegatable

Just like you can assign a task to a capable team member and trust them to handle it, you should be able to delegate to the agent.

This means:

  • A clear hand-off: you state the goal, not the steps
  • No check-ins required while the work is in flight
  • Results you verify at the end, not manage along the way


The Core Loop

    Human Intent → Agent Execution → Verified Result
         ↑                                  ↓
         └───────────── Minimum ────────────┘
            (intervention only on true failure)

Everything in Oh My OpenCode is designed to make this loop work:

| Feature | Purpose |
| --- | --- |
| Prometheus | Extract intent through intelligent interview |
| Metis | Catch ambiguities before they become bugs |
| Momus | Verify plans are complete before execution |
| Orchestrator | Coordinate work without human micromanagement |
| Todo Continuation | Force completion, prevent "I'm done" lies |
| Category System | Route to optimal model without human decision |
| Background Agents | Parallel research without blocking user |
| Wisdom Accumulation | Learn from work, don't repeat mistakes |

The Future We're Building

A world where:

  • You describe intent; working, verified code comes back
  • Code review can't tell humans and agents apart
  • Human intervention is the rare exception, not the workflow

The agent should be invisible. Not in the sense that it's hidden, but in the sense that it just works - like electricity, like running water, like the internet.

You flip the switch. The light turns on. You don't think about the power grid.

That's the goal.