
Development

Development setup, test commands, deploy paths, and structural rules for the Hyperliquid module backend.

This repo holds the technical backend for Axe's Hyperliquid module, the MkDocs build that ships these docs, and the Next.js landing page. Some directory and Cloud Run service names still use the legacy hlq codename — the product is Axe; renaming infrastructure is deferred work.

Setup

cd work/hlq
pip install -e ".[dev]"

Running tests

pytest tests/ -v

Tests use a mock bridge fixture (conftest.py) — no BigQuery credentials needed.

Current: 89 tests, all passing.

Project structure

work/hlq/
├── src/hlq/           # Source code (18 modules, 2,126 lines)
├── tests/             # Tests (5 files, 89 tests)
├── bridge/            # Bridge artifacts (read-only, from RL)
├── skills/            # Claude Code skills (3)
├── sandbox/           # Sandbox feedback loop
├── docs/              # This documentation (MkDocs)
├── CLAUDE.md          # Claude Code context rules
├── NOTEBOOK.md        # Chronological dev log
├── RL_REQUESTS.md     # Request channel to RL team
└── pyproject.toml     # Package config

Dev rules

  • Inference only — No training loops, gradients, or reward functions.
  • Bridge-first — Never hardcode values that come from bridge artifacts.
  • No RL imports — Never import from prime_rl, verifiers, search_prime_env.
  • Run tests before committing: pytest tests/
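The "No RL imports" rule can be enforced mechanically with a small guard test. The forbidden module names come from the rule above; the `src/hlq` scan path is taken from the project structure, and the function name is an assumption:

```python
import ast
from pathlib import Path

# Packages forbidden by the "No RL imports" rule above.
FORBIDDEN = {"prime_rl", "verifiers", "search_prime_env"}

def forbidden_imports(source: str) -> set:
    """Return any forbidden top-level packages imported by the given source."""
    hits = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        hits |= {n.split(".")[0] for n in names} & FORBIDDEN
    return hits

def test_no_rl_imports():
    # Scan path assumed from the project layout above.
    for py in Path("src/hlq").rglob("*.py"):
        assert not forbidden_imports(py.read_text()), py
```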

Building docs

# Preview locally (hot-reload, ~30MB RSS)
mkdocs serve -a 0.0.0.0:8400
 
# Build static site
mkdocs build
 
# Output goes to site/

Deploying docs to Cloud Run

The docs are hosted on Cloud Run at a permanent URL. The Cloud Run service is named hlq-docs for legacy reasons:

https://hlq-docs-i6dghycg5q-uc.a.run.app/

The URL is stable across deployments — each gcloud run deploy creates a new revision behind the same URL, with automatic traffic switching.

Push an update

cd ~/work/hlq
 
# 1. Build the static site
mkdocs build
 
# 2. Build the container image
sudo docker build -t hlq-docs .
 
# 3. Authenticate Docker with Artifact Registry
gcloud auth print-access-token | \
  sudo docker login -u oauth2accesstoken --password-stdin us-central1-docker.pkg.dev
 
# 4. Tag and push
sudo docker tag hlq-docs \
  us-central1-docker.pkg.dev/cs-poc-dzog4rv1rdnbpi2aawjhbvk/cloud-run-source-deploy/hlq-docs:latest
sudo docker push \
  us-central1-docker.pkg.dev/cs-poc-dzog4rv1rdnbpi2aawjhbvk/cloud-run-source-deploy/hlq-docs:latest
 
# 5. Deploy new revision (same URL, zero-downtime)
gcloud run deploy hlq-docs \
  --image us-central1-docker.pkg.dev/cs-poc-dzog4rv1rdnbpi2aawjhbvk/cloud-run-source-deploy/hlq-docs:latest \
  --region us-central1

Infrastructure details

| Setting | Value |
|---|---|
| Service name | hlq-docs |
| Region | us-central1 |
| Project | cs-poc-dzog4rv1rdnbpi2aawjhbvk |
| Image registry | us-central1-docker.pkg.dev/.../cloud-run-source-deploy/hlq-docs |
| Port | 8080 (nginx) |
| Memory | 256Mi |
| Min instances | 0 (scales to zero when idle, so no idle cost) |
| Max instances | 1 |
| Auth | Unauthenticated (public) |

Deploying the current Next.js web surface

The public landing page lives in work/hlq/web. The Cloud Run service is named hlq-web for legacy reasons.

See web/WIRING_GUIDE.md for the detailed backend wiring notes. The short deploy path is:

cd ~/work/hlq/web
npm run build
sudo docker build -t hlq-web .
gcloud auth print-access-token | \
  sudo docker login -u oauth2accesstoken --password-stdin us-central1-docker.pkg.dev
sudo docker tag hlq-web \
  us-central1-docker.pkg.dev/cs-poc-dzog4rv1rdnbpi2aawjhbvk/cloud-run-source-deploy/hlq-web:latest
sudo docker push \
  us-central1-docker.pkg.dev/cs-poc-dzog4rv1rdnbpi2aawjhbvk/cloud-run-source-deploy/hlq-web:latest
gcloud run deploy hlq-web \
  --image us-central1-docker.pkg.dev/cs-poc-dzog4rv1rdnbpi2aawjhbvk/cloud-run-source-deploy/hlq-web:latest \
  --region us-central1

How it works

The Dockerfile at the repo root builds a minimal nginx:alpine image that serves the static site/ directory. Cloud Run runs this container and routes HTTPS traffic to it. Revisions are immutable — each push creates a new one, and the previous revision remains available for rollback via gcloud run services update-traffic.

GCS snapshots

Upload a new snapshot:

TS=$(date -u +%Y%m%dT%H%M%SZ)
git bundle create /tmp/hlq-${TS}.bundle --all
tar czf /tmp/hlq-${TS}.tar.gz -C . --exclude='.git' --exclude='__pycache__' --exclude='.pytest_cache' --exclude='*.egg-info' .
gsutil cp /tmp/hlq-${TS}.bundle gs://artemis-hl/hlq/bootstrap/
gsutil cp /tmp/hlq-${TS}.tar.gz gs://artemis-hl/hlq/bootstrap/

Retrieve:

gsutil cp gs://artemis-hl/hlq/bootstrap/hlq-<TIMESTAMP>.bundle /tmp/
git clone /tmp/hlq-<TIMESTAMP>.bundle hlq

Key documents

| Document | Purpose |
|---|---|
| NOTEBOOK.md | Chronological dev log (what changed, why, results) |
| RL_REQUESTS.md | Single channel for requests to the RL backend team |
| POLICY_DELTA_ROADMAP.md | Feature roadmap and planned phases |
| CLAUDE.md | Context rules for Claude Code sessions |

Adding a new intent family

  1. Add a regex pattern to infer_intent_family() in semantics.py
  2. Add RouteAction to bridge action_space.json
  3. Add SQL template to bridge sql_templates/
  4. Map intent → template in bridge intent_priors/intent_template_map.json
  5. Add test in test_search_pipeline.py
  6. Update bridge manifest version and checksums
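Step 1 can be sketched like this; the function name comes from semantics.py, but the patterns and family names below are illustrative, not the real table:

```python
import re

# Illustrative patterns only; the real pattern table lives in semantics.py.
INTENT_PATTERNS = [
    (re.compile(r"\bfunding( rate)?\b", re.I), "funding"),
    (re.compile(r"\bliquidat\w+\b", re.I), "liquidations"),
]

def infer_intent_family(query: str) -> str:
    """Return the first matching intent family, else 'unknown'."""
    for pattern, family in INTENT_PATTERNS:
        if pattern.search(query):
            return family
    return "unknown"
```

Each new family then needs a matching RouteAction, SQL template, and intent-to-template mapping on the bridge side, plus a test and a manifest bump, per steps 2 through 6.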

Transitional surface

Intent classification is currently regex_v1 (transitional). REQ-006 transfers ownership to the RL team for model-based classification.

Adding a new MCP tool

  1. Add function with @mcp.tool() decorator in mcp.py
  2. Call backend.search() or backend.status() as appropriate
  3. Include "provenance": result.get("provenance", {}) in the response
  4. Add corresponding CLI command in cli.py if applicable
  5. Add skill guide in skills/ if the tool has non-obvious usage patterns
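The response shape in steps 2 and 3 can be sketched as a plain function; the backend object and result field names are assumptions, and in mcp.py the function would carry the @mcp.tool() decorator:

```python
def search_tool(backend, query: str) -> dict:
    """Run a search and always surface provenance in the response.

    In mcp.py this would be registered with @mcp.tool(); `backend` is
    assumed to be whatever object exposes .search().
    """
    result = backend.search(query)
    return {
        "results": result.get("results", []),
        # Step 3: provenance is always included, even when empty.
        "provenance": result.get("provenance", {}),
    }
```

Keeping provenance in every tool response means downstream consumers never have to special-case its absence.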
