Async Lifecycle

OpenAlice isn't just a chatbot that responds when you talk to it. It has an autonomous lifecycle — things happen even when you're not looking. This page explains the event-driven architecture that makes this work.

The Event Bus

At the center is the EventLog — a persistent, append-only JSONL event bus. Everything that happens asynchronously flows through it:

CronEngine ──fires──→ EventLog ──subscribes──→ CronListener
                          │                    → Heartbeat
                          │                    → SnapshotScheduler
                          ↓
                   data/event-log/events.jsonl

The EventLog has two faces:

  • Disk — Append-only JSONL file, the source of truth. Survives crashes and restarts.
  • Memory — Ring buffer of the 500 most recent entries for fast queries. Rebuilt from disk on startup.

Subscribers register for specific event types. When an event is appended, all matching subscribers are notified synchronously. This fan-out is what connects the pieces.
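The dual-write and fan-out behavior can be sketched as follows. This is a minimal illustration, not the actual OpenAlice API: the event shape, the `subscribe`/`append`/`recent` method names, and the elided disk write are all assumptions.

```typescript
type Event = { type: string; payload: unknown; ts: number };
type Handler = (e: Event) => void;

class EventLog {
  private ring: Event[] = [];                    // in-memory ring buffer
  private subs = new Map<string, Handler[]>();   // event type → subscribers
  constructor(private capacity = 500) {}

  subscribe(type: string, fn: Handler): void {
    const list = this.subs.get(type) ?? [];
    list.push(fn);
    this.subs.set(type, list);
  }

  append(e: Event): void {
    // 1. Disk: a JSONL line would be appended first (elided in this sketch).
    // 2. Memory: push into the ring buffer, evicting the oldest entry.
    this.ring.push(e);
    if (this.ring.length > this.capacity) this.ring.shift();
    // 3. Synchronous fan-out to every subscriber registered for this type.
    for (const fn of this.subs.get(e.type) ?? []) fn(e);
  }

  recent(n: number): Event[] {
    return this.ring.slice(-n);
  }
}
```

Because fan-out is synchronous, a subscriber that throws could disrupt its siblings; real implementations typically wrap each handler call in a try/catch.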

Three Autonomous Actors

Three systems subscribe to cron.fire events, each with a different job:

CronListener — User-Defined Jobs

Handles jobs that the user or AI created. When a cron.fire event arrives for a non-internal job:

  1. Sends the job's payload to AgentCenter as a prompt
  2. Alice processes it through the full AI pipeline (tools, reasoning, response)
  3. The result is delivered via ConnectorCenter to the last-interacted channel
  4. Success/failure is logged as cron.done / cron.error

Jobs run serially — if one is already processing, the next fire is skipped. This prevents overlapping AI calls.
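The skip-if-busy rule above can be sketched with a single flag. `onCronFire` and its `process` callback are hypothetical names; only the drop-instead-of-queue behavior comes from the page.

```typescript
let busy = false;    // is a job currently going through the AI pipeline?
let skipped = 0;     // fires dropped because a job was still running

async function onCronFire(process: () => Promise<void>): Promise<void> {
  if (busy) {
    skipped++;       // drop this fire rather than queue it behind the current job
    return;
  }
  busy = true;
  try {
    await process(); // the full AgentCenter pipeline would run here
  } finally {
    busy = false;    // allow the next fire through, even if this one failed
  }
}
```

Dropping rather than queueing means a slow AI call can't build up a backlog of stale prompts; the job simply runs again on its next scheduled fire.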

Heartbeat — Market Monitoring

Handles the special __heartbeat__ job. Similar to CronListener but with extra intelligence:

  1. Active hours guard — Skip if outside configured time window
  2. AI call — Alice evaluates market conditions
  3. Response parsing — Structured protocol: HEARTBEAT_OK (stay quiet) or CHAT_YES (send message)
  4. Dedup — Suppress identical messages within 24 hours
  5. Delivery — Route through ConnectorCenter

See Heartbeat for the full protocol.
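Steps 3 and 4 can be sketched together. The `HEARTBEAT_OK` / `CHAT_YES` markers come from the protocol above; the reply layout (message text following `CHAT_YES`), the in-memory dedup map, and the function name are illustrative assumptions.

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;
const lastSent = new Map<string, number>(); // message text → last delivery time

// Returns the message to deliver, or null to stay quiet.
function shouldDeliver(reply: string, now: number): string | null {
  if (reply.startsWith("HEARTBEAT_OK")) return null;  // protocol says: stay quiet
  if (!reply.startsWith("CHAT_YES")) return null;     // unparseable → stay quiet
  const msg = reply.slice("CHAT_YES".length).trim();
  const prev = lastSent.get(msg);
  if (prev !== undefined && now - prev < DAY_MS) return null; // dedup window
  lastSent.set(msg, now);
  return msg;                                         // route via ConnectorCenter
}
```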

SnapshotScheduler — Portfolio Capture

Handles the special __snapshot__ job. No AI involved — purely mechanical:

  1. Iterates all enabled trading accounts
  2. Calls buildSnapshot() for each (fetches positions, equity, P&L from broker)
  3. Stores snapshots as JSONL per account
  4. Failed accounts get one retry after 3 seconds
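The iterate-then-retry loop can be sketched as below. `buildSnapshot`'s real signature is unknown; here it is modeled as a hypothetical async call per account, and `snapshotAll` is an illustrative name.

```typescript
const delay = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

// Returns the accounts that failed even after their single retry.
async function snapshotAll(
  accounts: string[],
  buildSnapshot: (acct: string) => Promise<void>,
  retryDelayMs = 3000,
): Promise<string[]> {
  const failed: string[] = [];
  for (const acct of accounts) {
    try {
      await buildSnapshot(acct);
    } catch {
      await delay(retryDelayMs);   // wait 3 s, then try exactly once more
      try {
        await buildSnapshot(acct);
      } catch {
        failed.push(acct);         // record the failure and move on — never crash
      }
    }
  }
  return failed;
}
```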

The Trading Lifecycle

Trading has its own async lifecycle that connects to this system:

User/AI Decision
    ↓
stage operations → commit → push (requires approval)
    ↓                              ↓
    ↓                     Guard Pipeline runs
    ↓                              ↓
    ↓                     Broker executes orders
    ↓                              ↓
    ↓                     ┌──── Post-Push Hooks ────┐
    ↓                     │  • Snapshot (immediate)  │
    ↓                     │  • EventLog recording    │
    ↓                     └──────────────────────────┘
    ↓
tradingSync (async — exchanges settle later)
    ↓
Order filled / cancelled / expired
    ↓
Sync commit recorded

Key lifecycle events:

  • Post-push — Immediately after orders are sent to the broker, a snapshot captures the account state. This is event-driven, not cron-driven.
  • Post-reject — When you reject a commit, a snapshot is also taken to record the state at the time of rejection.
  • Sync — Order settlement is asynchronous. The AI calls tradingSync to check for fills, which may happen seconds or hours after the push.
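The asynchronous settlement check can be sketched as a poll over open orders. The order-ID and status shapes here are assumptions, not the real broker API; only the "check later, record terminal states" behavior comes from the page.

```typescript
type OrderStatus = "open" | "filled" | "cancelled" | "expired";

// Polls each open order once and returns the ones that reached a terminal state.
async function tradingSync(
  openOrders: string[],
  fetchStatus: (id: string) => Promise<OrderStatus>,
): Promise<Record<string, OrderStatus>> {
  const settled: Record<string, OrderStatus> = {};
  for (const id of openOrders) {
    const status = await fetchStatus(id);
    if (status !== "open") settled[id] = status; // becomes a sync commit record
  }
  return settled;
}
```

Orders still `open` stay pending and are simply checked again on the next sync call, whether that happens seconds or hours after the push.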

Internal vs User Jobs

The cron engine distinguishes jobs by naming convention:

Pattern            Type       Handler
__heartbeat__      Internal   Heartbeat system
__snapshot__       Internal   SnapshotScheduler
Everything else    User       CronListener → AI

Internal jobs (double-underscore prefix/suffix) are routed to their dedicated handlers. User jobs go through the CronListener → AgentCenter → ConnectorCenter pipeline.
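The dispatch is simple enough to sketch in full. The handler labels mirror the table above; the function itself is illustrative, not the engine's actual code.

```typescript
// Route a cron job by its name, per the double-underscore convention.
function routeJob(name: string): "heartbeat" | "snapshot" | "user" {
  if (name === "__heartbeat__") return "heartbeat";   // Heartbeat system
  if (name === "__snapshot__") return "snapshot";     // SnapshotScheduler
  return "user";  // CronListener → AgentCenter → ConnectorCenter
}
```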

Startup Sequence

On boot, the async systems start in dependency order:

  1. EventLog — Created first. Everything depends on it.
  2. CronEngine — Loads persisted jobs from data/cron/jobs.json. Arms timers.
  3. CronListener — Subscribes to cron.fire events.
  4. SnapshotScheduler — Registers/updates __snapshot__ job. Subscribes.
  5. Heartbeat — Registers/updates __heartbeat__ job. Subscribes.
  6. Plugins — Web, Telegram, MCP start and register connectors.

By the time plugins are up, the event bus is running and all subscribers are listening. The first cron fire after boot triggers the whole chain.

Error Resilience

  • Cron jobs — Failed jobs get exponential backoff: 30s → 1m → 5m → 15m → 1h. Resets on success.
  • Snapshots — Failed accounts get one retry. Failures are logged but don't crash the system.
  • Heartbeat — Errors are logged as heartbeat.error. The next scheduled fire tries again fresh.
  • EventLog — Dual-write to disk + memory. If the process crashes, the disk log survives and the memory buffer is rebuilt on restart.
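The cron backoff ladder (30s → 1m → 5m → 15m → 1h) can be sketched as a lookup. Capping at the last step once the ladder is exhausted is an assumption here; the page only specifies the five steps and the reset on success.

```typescript
const BACKOFF_MS = [30_000, 60_000, 300_000, 900_000, 3_600_000];

// 1st consecutive failure → 30 s, 2nd → 1 m, …; stays at 1 h past the 5th.
// A success resets the failure counter, so the ladder starts over at 30 s.
function nextBackoff(consecutiveFailures: number): number {
  const i = Math.min(consecutiveFailures - 1, BACKOFF_MS.length - 1);
  return BACKOFF_MS[i];
}
```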