
# Job Memory System — Learn Between Jobs

**Agent:** engineer
**Priority:** high

## Problem
Auto-claude is stateless between jobs. Every job starts fresh with zero context about what happened before. If a job discovers a bug, workaround, or pattern, the next job has no idea. This wastes money re-discovering the same things.

## Solution
Implement a simple disk-based memory system (inspired by OpenClaw's "virtual memory for cognition"):

### 1. After each job completes, append to daily memory log
File: `/data/agentpi/memory/YYYY-MM-DD.md`

Format:
```markdown
## Job #123 (data_pipeline) — 14:32 MST
**Task:** Process TAP batch for doug
**Result:** Completed, 200 files processed, 3 errors
**Learned:** Files with "SPRAYFLEX" device often have no TLG binary — skip_large threshold too low at 5KB
**Errors:** no_bin on 3 files: SPRAYFLEX_091714_0904, SPRAYFLEX_092014_1200, SPRAYFLEX_092114_0830
```

### 2. Before each job, inject recent memory into the prompt
In auto_claude.py, before launching the agent:
- Read today's memory log + yesterday's (if it exists)
- Append as context to the job prompt: "Recent memory from previous jobs: ..."
- Keep it short — last 10 entries max, ~500 tokens
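The "last 10 entries max" cap suggests trimming by entries rather than raw characters. A minimal sketch of such a helper (the name `recent_entries` is hypothetical, assuming entries begin with `## Job` headers as in the format above):

```python
from datetime import datetime, timedelta
from pathlib import Path

def recent_entries(memory_dir: str, max_entries: int = 10) -> str:
    """Collect the last `max_entries` job entries from today's and
    yesterday's memory logs (entries are delimited by '## Job' headers)."""
    entries = []
    for days_ago in (1, 0):  # yesterday first, then today, so order is chronological
        d = (datetime.now() - timedelta(days=days_ago)).strftime("%Y-%m-%d")
        mfile = Path(memory_dir) / f"{d}.md"
        if not mfile.exists():
            continue
        # Split on entry headers; re-attach the '## Job' prefix to each chunk
        chunks = mfile.read_text().split("\n## Job")
        entries.extend("## Job" + c for c in chunks[1:])
    return "".join(entries[-max_entries:])
```

Trimming whole entries keeps each memory item intact, whereas a character slice can cut an entry mid-sentence.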

### 3. Implementation
Modify `/data/Sandbox/AgentPi/scripts/auto_claude.py`:

**After job completes:**
```python
from datetime import datetime
from pathlib import Path

memory_dir = Path("/data/agentpi/memory")
memory_dir.mkdir(parents=True, exist_ok=True)
today = datetime.now().strftime("%Y-%m-%d")
memory_file = memory_dir / f"{today}.md"
# Append job summary (extract from claude's output or job result)
with open(memory_file, "a") as f:
    f.write(f"\n## Job #{job_id} ({agent}) — {datetime.now().strftime('%H:%M')}\n")
    f.write(f"**Task:** {description[:100]}\n")
    f.write(f"**Result:** {'completed' if success else 'failed'}, ${cost:.2f}, {duration:.0f}s\n\n")
```

**Before job launches:**
```python
from datetime import datetime, timedelta
from pathlib import Path

memory_context = ""
for days_ago in range(2):  # today + yesterday
    d = (datetime.now() - timedelta(days=days_ago)).strftime("%Y-%m-%d")
    mfile = Path(f"/data/agentpi/memory/{d}.md")
    if mfile.exists():
        # Keep only the tail of each log to stay within the token budget
        memory_context += mfile.read_text()[-2000:]
if memory_context:
    prompt = f"Recent memory from previous jobs:\n{memory_context}\n\n{prompt}"
```

### 4. Directory setup
```bash
mkdir -p /data/agentpi/memory
```

## Notes
- Keep memory files small — auto-prune entries older than 7 days
- Don't inject memory when no logs exist yet (e.g. the very first job) — the `if memory_context:` check in step 3 already covers this
- The agent itself can also write to memory if it discovers something important
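The 7-day auto-prune above could be sketched as follows (the name `prune_memory` is hypothetical; it assumes log files are named `YYYY-MM-DD.md` as specified, which makes plain string comparison equivalent to date comparison):

```python
from datetime import datetime, timedelta
from pathlib import Path

def prune_memory(memory_dir: str, keep_days: int = 7) -> list:
    """Delete memory logs older than `keep_days`; returns deleted filenames."""
    cutoff = (datetime.now() - timedelta(days=keep_days)).strftime("%Y-%m-%d")
    deleted = []
    for mfile in Path(memory_dir).glob("*.md"):
        # Filenames are YYYY-MM-DD.md, so lexicographic order is date order
        if mfile.stem < cutoff:
            mfile.unlink()
            deleted.append(mfile.name)
    return deleted
```

Running this at the start of each job (before memory injection) keeps the directory bounded without a separate cron entry.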
