Projects & Models

A project is a versioned workspace managed by Bridge Town. Each project belongs to a tenant (organization) and contains:

  • Models — Python files at model/<name>.py
  • Data sources — Parquet snapshots uploaded from CSV, Excel, or Google Sheets
  • Output files — Results from model execution runs

Projects maintain full version history. You can list branches, diff versions, and roll back to any previous commit.

Each user has a role on each project they can access:

Role     Read models   Write models   Delete project   Manage users
------   -----------   ------------   --------------   ------------
Viewer   Yes           No             No               No
Editor   Yes           Yes            No               No
Owner    Yes           Yes            Yes              Yes
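
The role table above amounts to a capability lookup. A minimal sketch of such a check (the roles and actions mirror the table; the function and dict names are illustrative, not Bridge Town's API):

```python
# Capabilities per role, taken directly from the table above.
ROLE_CAPABILITIES = {
    "viewer": {"read_models"},
    "editor": {"read_models", "write_models"},
    "owner": {"read_models", "write_models", "delete_project", "manage_users"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role permits the given action."""
    return action in ROLE_CAPABILITIES.get(role, set())
```

Unknown roles get no capabilities, so a lookup miss fails closed rather than open.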

A model is a Python file stored at model/<name>.py in a project. Model names must be valid Python identifiers (letters, digits, underscores; max 128 characters).
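
Those naming constraints can be checked with a short validator. A sketch, assuming the rule is ASCII letters, digits, and underscores (Bridge Town's server-side validation may differ in detail):

```python
import re

# First character: letter or underscore; rest: letters, digits, underscores.
_NAME_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*\Z")

def is_valid_model_name(name: str) -> bool:
    """Check the documented rule: a valid identifier, at most 128 chars."""
    return len(name) <= 128 and _NAME_RE.match(name) is not None
```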

  1. Create: create_file writes a new .py file and commits it
  2. Read: read_file returns the source code
  3. Patch: patch_file applies targeted edits from an instruction
  4. Update: update_file overwrites the file and commits
  5. Run: run(scope='project', mode='sync') executes run.py synchronously and returns results inline; run(scope='model', mode='sync') runs a single model file directly; run(mode='async') plus get_run handles background async execution; get_run_output fetches one completed run's output by name
  6. Delete: delete_file removes the file and commits

Bridge Town’s supported default workflow is:

  1. create_file to scaffold the model file
  2. patch_file or update_file for iterative changes
  3. commit_files when a change spans multiple project files and should land in one commit
  4. Branch-based scenario analysis via create_branch + compare_branches for project-level comparisons, optionally focusing the returned diff on one output; use compare_runs when both completed run IDs already exist

Legacy generation/refinement tools may still exist in some deployments, but they are not part of the supported default path.

Projects can chain models in execution order by defining a PIPELINE list in run.py. Each model’s runtime output dict (its module-level result dict, or a legacy outputs dict) is written to /upstream/<model_name>/outputs.json before the next model runs, allowing downstream models to read it:

# run.py — define execution order
PIPELINE = ["revenue", "expenses", "summary"]

# model/expenses.py — read from the upstream revenue model
import json, pathlib

_upstream = pathlib.Path("/upstream/revenue/outputs.json")
if _upstream.exists():
    rev = json.loads(_upstream.read_text())
    monthly_revenue = rev.get("monthly_revenue", [100_000] * 12)
else:
    # Standalone fallback when /upstream is not mounted.
    monthly_revenue = [100_000] * 12

/upstream/ is a run-scoped, branch-scoped tmpfs: it exists only for the duration of the run call and is never persisted. It is distinct from /data/, which holds immutable Google Sheet and CSV snapshots that serve as external inputs.
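
The handoff can be sketched as a loop over PIPELINE that executes each model file and persists its result dict before the next model runs. This is an illustration only, not Bridge Town's runner: paths are parameterized (a plain directory stands in for the /upstream tmpfs) and execution details are simplified:

```python
import json
import runpy
from pathlib import Path

def run_pipeline(repo: Path, upstream: Path, pipeline: list[str]) -> dict:
    """Execute model/<name>.py files in order, writing each model's result
    dict to <upstream>/<name>/outputs.json for downstream models to read."""
    results = {}
    for name in pipeline:
        namespace = runpy.run_path(str(repo / "model" / f"{name}.py"))
        result = namespace.get("result")
        if not isinstance(result, dict):
            # Legacy pattern: a module-level outputs *dict* when result is absent.
            legacy = namespace.get("outputs")
            result = legacy if isinstance(legacy, dict) else {}
        out_dir = upstream / name
        out_dir.mkdir(parents=True, exist_ok=True)
        (out_dir / "outputs.json").write_text(json.dumps(result))
        results[name] = result
    return results
```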

See Multi-Model Pipelines for a complete walkthrough, including the recommended /upstream-first, /data-fallback read pattern and scenario-analysis integration.
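
The core of that read pattern can be sketched as a small helper: prefer a value handed over via /upstream, otherwise fall back. The function name and signature here are illustrative; in a real model the fallback would typically come from a /data/ snapshot rather than a constant:

```python
import json
from pathlib import Path

def load_upstream_value(model: str, key: str, fallback,
                        upstream_dir: Path = Path("/upstream")):
    """Return key from /upstream/<model>/outputs.json when running inside a
    pipeline; otherwise return the fallback (e.g. a /data snapshot read)."""
    path = upstream_dir / model / "outputs.json"
    if path.exists():
        return json.loads(path.read_text()).get(key, fallback)
    return fallback
```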

Models can declare module-level inputs, outputs, and dependencies as lists of strings to expose a static contract. These declarations are read by the describe_model MCP tool without executing the code, making pipelines easier to reason about and maintain:

model/summary.py
inputs = ["monthly_revenue", "monthly_expenses"]
outputs = ["total_revenue", "total_expenses", "net_income"]
dependencies = ["revenue", "expenses"]

Pair contract metadata with a module-level result dict that holds the actual runtime values. result is what the run pipeline returns and what gets written to /upstream/ for downstream models. This avoids any clash between the outputs contract list and the runtime values:

result = {
    "total_revenue": 1_440_000,
    "total_expenses": 960_000,
    "net_income": 480_000,
}

Rules:

  • Use a list, tuple, or set of strings for contract metadata.
  • Use a dict for runtime values, assigned to result.
  • outputs = {...} (dict) is still recognised as a legacy runtime output pattern when result is absent, but new models should prefer the outputs = [...] + result = {...} pairing.
  • All three contract variables are optional; omitting them produces warnings in describe_model but does not break execution.
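
A static read of these declarations can be done by parsing the source with Python's ast module and collecting module-level literal assignments, without executing anything. This is purely illustrative of the idea; describe_model's actual implementation is not documented here:

```python
import ast

CONTRACT_NAMES = {"inputs", "outputs", "dependencies"}

def read_contract(source: str) -> dict:
    """Collect module-level inputs/outputs/dependencies declarations from
    model source without executing it. Non-literal values and the runtime
    result dict are ignored."""
    contract = {}
    for node in ast.parse(source).body:
        if isinstance(node, ast.Assign) and len(node.targets) == 1:
            target = node.targets[0]
            if isinstance(target, ast.Name) and target.id in CONTRACT_NAMES:
                try:
                    value = ast.literal_eval(node.value)
                except ValueError:
                    continue  # computed value, not a static declaration
                if isinstance(value, (list, tuple, set)):
                    contract[target.id] = [str(v) for v in value]
    return contract
```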

Repeated Python logic — cohort waterfalls, driver parsing, period helpers, output formatting — belongs in lib/, not copied across model files.

Convention: place shared helpers at lib/<module>.py and import them from any model with package-style paths:

model/pnl.py
from lib.cohort import simulate_cohort
from lib.periods import quarter_labels
result = simulate_cohort(1_000_000)

This works because the project root (/repo) is always on sys.path inside the sandbox, so from lib.<module> import ... resolves as a regular Python package import. No configuration is required.
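
For instance, the quarter_labels helper imported above might look like the following. This is a hypothetical implementation invented for illustration; the doc does not specify the real helper's signature:

```python
# lib/periods.py — hypothetical shared period helper
def quarter_labels(year: int) -> list[str]:
    """Return the four quarter labels for a year, e.g. '2025-Q1'."""
    return [f"{year}-Q{q}" for q in range(1, 5)]
```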

Managing lib/ files: use the same generic file tools as for models:

create_file(path="lib/cohort.py", content="...")
update_file(path="lib/cohort.py", content="...")
read_file(path="lib/cohort.py")

New projects seeded with the auto-discovery scaffold include an empty lib/__init__.py to mark the directory as a Python package.

Rules:

  • lib/ files are never auto-executed as models. Only model/*.py files are auto-discovered by run.py.
  • Do not use model-to-model imports (from model.customer_cohort import ...). Model files are executable entry points, not importable modules. Shared logic belongs in lib/.
  • lib/ is project-local. There is no supported mechanism for sharing code across projects.
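
The auto-discovery rule above can be sketched as a glob over model/*.py that skips lib/ entirely and ignores dunder files. The function is illustrative; the scaffold's real run.py may order and execute models differently:

```python
from pathlib import Path

def discover_models(repo: Path) -> list[str]:
    """Return auto-discoverable model names: model/*.py only, with
    __init__.py and friends excluded, sorted for a deterministic order."""
    return sorted(
        p.stem
        for p in (repo / "model").glob("*.py")
        if not p.name.startswith("__")
    )
```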
"""Revenue forecast — 12-month projection with three product lines."""
MONTHS = 12
LINES = {
"SaaS": {"base": 50_000, "growth": 0.08},
"Services": {"base": 30_000, "growth": 0.03},
"Marketplace": {"base": 15_000, "growth": 0.12},
}
results = {}
for name, params in LINES.items():
monthly = []
revenue = params["base"]
for m in range(MONTHS):
monthly.append(round(revenue, 2))
revenue *= 1 + params["growth"]
results[name] = monthly
inputs = ["base_assumptions"]
outputs = ["monthly_revenue"]
dependencies = []
result = {"monthly_revenue": results}