- How agents discover available tools (actions they can take)
- How agents access tasks (problems to solve)
- How agents receive rewards (feedback signals for RL training)
- How episodes progress until completion (via `finished` signals)
Key Principle: Actions are Tools
A fundamental assumption in ORS: the only way agents interact with environments is by calling tools. This design decision has important benefits:
- Leverages existing capabilities: Major LLMs support function calling
- Clear interface boundary: Agent actions are explicit and well-defined
- Traceable interactions: Every action is a structured function call
- Type safety: Tools have schemas defining their inputs and outputs
Even a simple question/answer environment fits this model: the agent is given a `submit` tool which it can use to submit an answer to a prompt.
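As an illustration, a minimal `submit` tool definition might look like the following. The field names (`name`, `description`, `input_schema`) mirror common function-calling conventions and are an assumption here, not the normative ORS schema:

```python
# Hypothetical sketch of a tool definition as an ORS server might expose it.
# The exact ORS wire format may differ; see the protocol specification.
submit_tool = {
    "name": "submit",
    "description": "Submit a final answer for the current task.",
    "input_schema": {
        "type": "object",
        "properties": {
            "answer": {
                "type": "string",
                "description": "The agent's final answer.",
            },
        },
        "required": ["answer"],
    },
}
```

Because every action is a structured call against a schema like this, interactions stay traceable and type-checked.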
Primary use case: Reinforcement Learning
ORS is designed to accelerate agentic reinforcement learning by making it easier to define and interact with RL environments.
How RL works with ORS
- An agent is connected to a task from an ORS environment via the environment's exposed tools and an initial prompt detailing the task to be accomplished.
- The agent executes tools, receives tool output and rewards, and continues until a `finished` signal.
- At the end of the episode, we have a trajectory as well as rewards we can use for credit assignment.
- We apply an RL algorithm of choice, e.g. a GRPO-based policy gradient.
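The steps above can be sketched as a generic rollout loop. The client API here (`get_prompt`, `create_session`, `call_tool`) is illustrative, not the actual SDK:

```python
# Illustrative rollout loop; `env` stands in for a hypothetical ORS client.
def rollout(env, agent, task_id):
    prompt = env.get_prompt(task_id)        # initial instructions as blocks
    session = env.create_session(task_id)   # one session = one RL episode
    trajectory, total_reward = [], 0.0
    finished = False
    while not finished:
        call = agent.next_tool_call(prompt, trajectory)  # agent picks a tool
        out = session.call_tool(call.name, call.arguments)
        trajectory.append((call, out.blocks))
        total_reward += out.reward          # rewards accumulate per step
        finished = out.finished             # episode ends on finished=True
    return trajectory, total_reward
```

At the end we have exactly what RL training needs: the trajectory for credit assignment and the accumulated reward.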
Example: Math Problem Solving
Consider training an agent on math problems. Here's the protocol flow: the agent works through the problem by calling tools and submits a final answer, receiving a reward (e.g. 1.0 for a correct answer) that is used as part of the gradient update. For example, in GRPO it would be used to calculate the group advantage.
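To make the GRPO step concrete, here is a minimal sketch of how terminal rewards feed into the group advantage. The mean/std normalisation shown is the standard form; implementation details vary:

```python
import statistics

def group_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalise each episode's reward against
    the group of rollouts sampled for the same task."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four rollouts of the same math problem: two correct (reward 1.0), two not.
adv = group_advantages([1.0, 0.0, 1.0, 0.0])
# Correct rollouts get positive advantage (~1.0), incorrect ones negative (~-1.0).
```

The advantage, not the raw reward, then weights the policy-gradient update for each rollout's tokens.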
Secondary use case: Agentic Evaluation
While designed for RL training, ORS can also be used for agentic evaluation:
- Standardised benchmarks: Common interface across different environments
- Train/test splits: Tasks are organised into clear training/evaluation splits
- Reproducible results: The environment behaves the same way for every agent evaluated against it
- Diverse task types: ORS supports tasks from basic question/answer environments to more complicated agentic workflows involving sandbox execution and computer-use.
Core Components
An ORS server provides access to four core components:
1. Tools
Tools are the actions available to agents. Each tool has:
- A name (e.g., `bash`, `submit`, `read_file`)
- A description explaining what it does
- An input schema (JSON Schema) defining parameters
- A return type (`ToolOutput` with `blocks`, `reward`, `finished`)
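Put together, a tool's return value can be modelled roughly like this. This is a sketch, not the SDK's actual class:

```python
from dataclasses import dataclass, field

@dataclass
class ToolOutput:
    """Sketch of a tool result: content blocks for the agent,
    plus the RL-specific reward and finished fields ORS adds."""
    blocks: list = field(default_factory=list)  # text/image blocks shown to the agent
    reward: float = 0.0                          # feedback signal for RL training
    finished: bool = False                       # True terminates the episode

# A terminal step: the agent's answer was correct, so the episode ends.
out = ToolOutput(blocks=[{"type": "text", "text": "Correct!"}],
                 reward=1.0, finished=True)
```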
2. Tasks
Tasks are the problems agents need to solve. Each task is a JSON object containing problem-specific data.
3. Splits
Splits can be used to organise tasks into categories. For example, the standard splits are:
- `train` - Tasks for training agents
- `validation` - Tasks for hyperparameter tuning
- `test` - Tasks for final evaluation
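A server might organise its task IDs by split like this (an illustrative in-memory layout, not a prescribed format):

```python
# Hypothetical split layout for a small environment.
splits = {
    "train":      ["task-0001", "task-0002", "task-0003"],
    "validation": ["task-0004"],
    "test":       ["task-0005", "task-0006"],
}

# Train/test separation means each task appears in exactly one split.
all_ids = [t for ids in splits.values() for t in ids]
assert len(all_ids) == len(set(all_ids))
```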
4. Prompts
Prompts are the initial instructions given to agents for each task. They're returned as blocks (text or images).
Episodes are sessions
In ORS, a session is an RL episode.
Episode Lifecycle
An episode begins when a session is created and ends when a tool call returns `finished: true`.
This is different from typical API sessions - there’s semantic meaning to when an episode ends. It represents task completion (success or failure).
Episode Example
Episode 1: Single-step (correct answer)
Rewards
Rewards are numeric feedback signals that enable RL training.
Reward Design
- Sparse rewards: Only at task completion (0 or 1)
- Dense rewards: After each action (incremental progress)
- Shaped rewards: Guide agent toward solution
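The difference between these designs can be sketched for the math setting. Both functions below are illustrative, not part of the protocol:

```python
def sparse_reward(answer: str, target: str, finished: bool) -> float:
    """Sparse design: reward only at task completion, 1.0 or 0.0."""
    return 1.0 if finished and answer == target else 0.0

def shaped_reward(steps_valid: int, total_steps: int, correct: bool) -> float:
    """Shaped design: partial credit for valid intermediate steps,
    plus a bonus for reaching the correct final answer."""
    progress = steps_valid / max(total_steps, 1)
    return 0.5 * progress + (0.5 if correct else 0.0)
```

Sparse rewards are simplest to specify but give the agent little guidance; shaped rewards guide it toward the solution at the cost of more design effort.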
Protocol Overview
ORS uses HTTP + Server-Sent Events for communication:
HTTP for Control
Standard REST endpoints for:
- Listing tools, splits, and tasks
- Creating/deleting sessions
- Health checks
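A client's control-plane calls might look like the following, using only Python's standard library. The endpoint paths are assumptions for illustration; see the protocol specification for the actual routes:

```python
from urllib.parse import urljoin
from urllib.request import Request

def control_request(base_url: str, method: str, path: str) -> Request:
    """Build a control-plane HTTP request (paths here are illustrative)."""
    return Request(urljoin(base_url, path), method=method)

base = "http://localhost:8000/"
list_tools  = control_request(base, "GET", "tools")              # list available tools
list_tasks  = control_request(base, "GET", "tasks?split=train")  # tasks in a split
new_session = control_request(base, "POST", "sessions")          # start an episode
health      = control_request(base, "GET", "health")             # health check
```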
SSE for Tool Execution
Tool calls return results via Server-Sent Events:
- Chunks large responses into smaller pieces for reliable delivery
- Keeps connections alive during long-running tool calls
- Allows clients to reconnect and resume results via task IDs
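A minimal SSE client loop for reassembling chunked tool output could look like this. The `data:` field follows the SSE wire format; treating each event as one chunk of the payload is an assumption about how ORS chunks responses:

```python
def assemble_sse(lines):
    """Reassemble chunked output from raw SSE lines: each event carries
    one or more 'data:' fields, and a blank line terminates an event."""
    chunks, current = [], []
    for line in lines:
        if line.startswith("data:"):
            value = line[len("data:"):]
            if value.startswith(" "):   # SSE strips one leading space
                value = value[1:]
            current.append(value)
        elif line == "" and current:    # blank line ends the event
            chunks.append("".join(current))
            current = []
    return "".join(chunks)

result = assemble_sse([
    "data: The answer ",
    "",
    "data: is 42.",
    "",
])
# result reassembles the chunked payload: "The answer is 42."
```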
Language-Agnostic
Because ORS is HTTP-based, it can be implemented in any language:
- Python: OpenReward SDK (reference implementation)
- TypeScript: Custom server with Express/Fastify
- Go: Custom server with stdlib http
- Rust: Custom server with Actix/Axum
ORS vs MCP
Both ORS and MCP involve agents calling tools, but they serve different purposes:
MCP (Model Context Protocol):
- Purpose: Connect LLMs to tools, data sources, and workflows
- Use case: General-purpose tool access
- Protocol: JSON-RPC over stdio/SSE
- Key feature: Seamless tool integration
ORS:
- Purpose: Connect agents to RL training environments
- Use case: Training and evaluating agents
- Protocol: HTTP + SSE
- Key features: Rewards, episodes, task organization
What’s Different?
ORS adds RL-specific features:

| Feature | MCP | ORS | Why ORS Needs It |
|---|---|---|---|
| Rewards | No | Yes | RL training signal |
| Finished | No | Yes | Episode termination |
| Tasks | No | Yes | Problem organization |
| Splits | No | Yes | Train/test separation |
Can They Work Together?
Yes! They serve complementary purposes:
- MCP: Agent uses tools to access external data/APIs
- ORS: Agent operates in structured RL environment with rewards
Next Steps
Quick Start
Build your first ORS server with GSM8K example
Protocol Specification
Dive into the HTTP API details
Core Concepts
Understand tools, tasks, rewards, and prompts
Implementation Guide
Learn how to implement an ORS server
Key Takeaway: ORS brings RL to language models by providing a standardised protocol with rewards, episode structure, and task organization.

