Project Portability with `.btx` Archives
A .btx archive is a portable, self-contained snapshot of a single Bridge
Town project: its git history, dashboards, recent runs, and the workspace
metadata required to reconstruct it elsewhere. The bridge-town CLI is a thin
wrapper over the same REST/MCP surfaces that the web UI and AI agents call —
exporting and importing a .btx is purely a matter of moving an opaque ZIP
between deployments.
When to use .btx
| Scenario | What you do |
|---|---|
| Offline backup of a critical project | `bridge-town export` to local disk |
| Move a project from prod to a self-hosted dev stack | `bridge-town export` from prod, `bridge-town import` into local Docker Compose |
| Migrate a project to a new Bridge Town instance | `bridge-town export` from old, `bridge-town import` on new |
| Audit / compliance snapshot of project state | `bridge-town export` and store the `.btx` in your evidence vault |
`.btx` archives are project-centric, not workspace-centric: one archive contains exactly one project. To move multiple projects, run `bridge-town export` once per project. See the archive design spec for the format details.
Installation
The `bridge-town` CLI ships with the Bridge Town Core repository:

```shell
git clone https://github.com/gregorycarter/bridge-town-core.git
cd bridge-town-core
poetry install --no-interaction --no-root
# Optional: symlink the wrapper somewhere on your PATH.
ln -s "$(pwd)/scripts/bridge-town" /usr/local/bin/bridge-town
```

If you have not symlinked it, invoke the wrapper script directly (`./scripts/bridge-town`) or call the Python module (`poetry run python -m services.local_runner.btx_cli`).
Authentication and configuration
The CLI reuses the same auth/config conventions as `bt model run` and the rest
of the Bridge Town local/prod tooling:
| Source | Variable / file | What it sets |
|---|---|---|
| Environment variable (preferred) | BT_API_TOKEN | A btk_… API token from Settings → API Tokens in the Bridge Town UI |
| Environment variable (optional) | BT_API_ENDPOINT | The MCP server base URL (defaults to https://api.bridgetown.builders; set to http://localhost:8000 for a local Docker Compose stack) |
| Config file fallback | .bt/config.json | JSON with api_token, api_endpoint keys (used when env vars are not set) |
The same token works with the REST API, the MCP server, and the CLI; if you
already have one for bt model run --ci, reuse it.
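If you prefer the config-file fallback, a minimal `.bt/config.json` might look like this (both values are placeholders; the key names come from the table above):

```json
{
  "api_token": "btk_yourtokenhere",
  "api_endpoint": "http://localhost:8000"
}
```

Environment variables take precedence over the file when both are present.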
The server enforces every business rule (RBAC, conflict policy, plan gating, idempotency, schema-version compatibility). The CLI is purely a transport client — it never invents or relaxes server-side checks.
Feature flags
Both export and import are gated by feature flags on the server. If you self-host Bridge Town, set the following before the CLI will succeed:
| Variable | Default | Effect when false |
|---|---|---|
| `BTX_EXPORT_ENABLED` | `false` | All export endpoints return 501 Not Implemented |
| `BTX_IMPORT_ENABLED` | `false` | All import endpoints return 501 Not Implemented |
For SaaS users on bridgetown.builders, both flags are managed by the Bridge
Town team. If bridge-town export returns 501, contact support — your tenant
may be on a deployment where the feature is still rolling out.
Exporting a project
```shell
bridge-town export <project_slug> [-o OUTPUT.btx] [options]
```

Required arguments
Section titled “Required arguments”| Argument | Description |
|---|---|
| `<project_slug>` | The project’s `repo_name` slug as shown in the Bridge Town UI URL (e.g. `revenue-forecast`). The CLI scopes the request to the workspace your `BT_API_TOKEN` belongs to. |
Options
| Flag | Default | Description |
|---|---|---|
| `-o, --output` | `<project_slug>.btx` in the current directory | Where to write the archive. Parent directories are created if needed. |
| `--include-runs N` | `5` | Number of most-recent model runs to embed in the archive. Set to `0` to omit runs entirely. The server caps this at `BTX_EXPORT_RUNS_MAX` (default 25). |
| `--include-dashboards` / `--no-include-dashboards` | `--include-dashboards` | When enabled, cached dashboard HTML is pulled from the source instance’s S3 bucket and embedded in the archive so the importer can re-host the rendered dashboards without re-running the model. |
| `--poll-interval S` | `2.0` | Seconds between status polls while the export job runs. |
| `--timeout S` | `600` | Maximum total seconds to wait for the export to finish. |
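Under the hood, `--poll-interval` and `--timeout` drive a simple poll loop against the export job's status. A minimal sketch of the same pattern in shell, where `get_status` is a hypothetical stand-in for the CLI's status request (not a real Bridge Town function):

```shell
# Poll a status source every INTERVAL seconds until it reports success or
# failure, giving up after roughly TIMEOUT seconds. `get_status` is a
# hypothetical stand-in for fetching the job's status from the server.
poll_until_done() {
  interval=$1
  timeout=$2
  elapsed=0
  while :; do
    status=$(get_status)
    echo "status=$status" >&2
    case "$status" in
      success) return 0 ;;
      failure) return 1 ;;
    esac
    if [ "$elapsed" -ge "$timeout" ] && [ "$timeout" -gt 0 ]; then
      echo "timed out after ${elapsed}s" >&2
      return 2
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
}
```

The real CLI adds job staging and download steps around this loop, but the terminal states (`success`, `failure`, timeout) map onto what you see in the polling output below.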
Example
```shell
export BT_API_TOKEN=btk_yourtokenhere
bridge-town export revenue-forecast -o ./backups/revenue-forecast.btx --include-runs 10
```

Expected stderr (progress) and stdout (machine-readable result):
```text
# stderr
Staging export for project 'revenue-forecast'...
Export job queued: 9b7c… (status=pending)
Polling export 9b7c…: status=running
Polling export 9b7c…: status=success
Requesting download URL...
Downloading archive to backups/revenue-forecast.btx ...
Wrote 4738294 bytes to backups/revenue-forecast.btx
```
```text
# stdout
{
  "archive_size_bytes": 4738294,
  "bytes_written": 4738294,
  "download_url": null,
  "download_url_expires_at": "2026-05-01T13:14:00+00:00",
  "job_id": "9b7c…",
  "output_path": "backups/revenue-forecast.btx",
  "status": "success",
  ...
}
```

The CLI emits one JSON object per invocation on stdout so you can pipe it into `jq`, e.g. `bridge-town export demo | jq '.bytes_written'`. All progress messages go to stderr.
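That single-JSON-object contract makes scripted checks straightforward. A sketch of a CI guard around the result, using `sed` so it works without `jq`; the `$result` value here is a captured sample, whereas a real pipeline would capture `result=$(bridge-town export demo)` instead:

```shell
# CI guard sketch: fail unless the export reported success and wrote a
# non-empty archive. $result is a captured sample, not a live call.
result='{"bytes_written": 4738294, "status": "success"}'

status=$(printf '%s' "$result" | sed -n 's/.*"status": *"\([^"]*\)".*/\1/p')
bytes=$(printf '%s' "$result" | sed -n 's/.*"bytes_written": *\([0-9][0-9]*\).*/\1/p')

if [ "$status" != "success" ] || [ "${bytes:-0}" -eq 0 ]; then
  echo "export failed or empty: $result" >&2
  exit 1
fi
echo "export ok: $bytes bytes"
```

With `jq` installed, the two `sed` extractions collapse to `jq -r '.status'` and `jq -r '.bytes_written'`.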
Importing a project
```shell
bridge-town import <archive_path> [options]
```

Required arguments
| Argument | Description |
|---|---|
| `<archive_path>` | Path to the `.btx` file on local disk. URL-pull (passing an `https://…` URL) is not yet supported. |
Options
| Flag | Default | Description |
|---|---|---|
| `--target-workspace-id UUID` | The workspace your `BT_API_TOKEN` authenticates against | Override the target workspace UUID for callers who are members of multiple Bridge Town workspaces. The server verifies you have at least editor role on the target workspace. |
| `--repo-name NAME` | The archive’s original `repo_name` | Rename the imported project to a different slug. |
| `--conflict-policy {fail,skip,overwrite}` | `fail` | What to do when the target slug already exists in the workspace. `fail` returns 409 Conflict; `skip` is a no-op; `overwrite` requires the caller to be a workspace owner. |
| `--idempotency-key KEY` | none | A 24-hour deduplication key. Repeating the same key returns the original `job_id` instead of starting a duplicate import. Strongly recommended in scripted / CI use. |
| `--poll-interval S` | `2.0` | Seconds between status polls. |
| `--timeout S` | `600` | Maximum total seconds to wait for the import to finish. |
Example
```shell
export BT_API_TOKEN=btk_yourtokenhere
bridge-town import ./backups/revenue-forecast.btx \
  --conflict-policy fail \
  --idempotency-key "$(uuidgen)"
```

Expected output:
```text
# stderr
Uploading backups/revenue-forecast.btx (4739012 bytes including framing)...
Import job queued: 4d2e… (status=pending)
Polling import 4d2e…: status=running
Polling import 4d2e…: status=success
```
```text
# stdout
{
  "duplicate_of": null,
  "job_id": "4d2e…",
  "message": "Import complete",
  "status": "success",
  "target_repo_name": "revenue-forecast",
  "target_workspace_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479"
}
```

If the same `--idempotency-key` is reused, the CLI prints `Idempotent retry detected — server returned existing job …` to stderr, and the `duplicate_of` field in the JSON points at the original `job_id`.
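One way to get idempotency keys "for free" in scripts is to derive them from the archive itself, so a retried job replays the same key and the server dedupes the import. A sketch, assuming `sha256sum` from GNU coreutils; a throwaway temp file stands in for a real `.btx`, and the final `bridge-town` call is echoed so the snippet is safe to dry-run:

```shell
# Derive the idempotency key from the archive's SHA-256 so retries of the
# same archive reuse the same key. A temp file stands in for a real .btx;
# point $archive at your export instead.
archive=$(mktemp)
printf 'stand-in archive bytes' > "$archive"

key=$(sha256sum "$archive" | cut -c1-32)
echo "idempotency key: $key"

# Drop the leading `echo` to actually run the import:
echo bridge-town import "$archive" --idempotency-key "$key"
```

The 32-character truncation is an arbitrary choice for readability; the server treats the key as an opaque string.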
Production-to-local workflow
A common use case is exporting a production project, booting a local self-hosted stack, importing the archive, and inspecting/running it offline.
```shell
# 1. Export from production.
export BT_API_TOKEN=btk_prod_token
export BT_API_ENDPOINT=https://api.bridgetown.builders
bridge-town export revenue-forecast -o ./backups/revenue-forecast.btx

# 2. Boot a local Bridge Town stack (see the Self-Hosting guide).
cd ~/code/bridge-town-core
cp .env.example .env
# Add BTX_EXPORT_ENABLED=true and BTX_IMPORT_ENABLED=true to .env
make dev-init

# 3. Mint a local API token from the local UI.
# Open http://localhost:3000, log in via Auth0, go to Settings → API Tokens.
export BT_API_TOKEN=btk_local_token
export BT_API_ENDPOINT=http://localhost:8000

# 4. Import the archive into the local stack.
bridge-town import ./backups/revenue-forecast.btx --idempotency-key "$(uuidgen)"

# 5. Open the project in the local UI and inspect dashboards / runs.
open "http://localhost:3000/projects/revenue-forecast"

# 6. Run the model locally — same token, same project slug.
bt model run --project revenue-forecast --dir ./checkout-of-imported-repo
```

Notes:
- Run the model locally by cloning the imported repo from local Gitea (`http://localhost:3001/{tenant}/revenue-forecast.git`) and using `bt model run` from that working tree. The `.btx` archive ships full git history, so every branch and commit is preserved.
- Cached dashboards in the archive are re-uploaded to the local MinIO bucket on import, so the dashboard pages render immediately without re-executing the model.
- External data sources (Google Sheets, etc.) require fresh local credentials — see the troubleshooting section below.
Troubleshooting
Schema version too new / `incompatible_archive`
The archive was produced by a newer Bridge Town deployment than your local build understands. Either upgrade the local stack to the version listed in the error, or re-export from the source after pinning to a compatible build. The archive’s `manifest.json` records `compatibility_metadata.bridge_town_version` — diff that against your local build.
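Since a `.btx` is a plain ZIP, you can read the manifest's version field without importing anything. A sketch: the field path follows the names above, the `1.8.0` value is a made-up example, and the first half builds a tiny stand-in archive so the commands run as-is; against a real archive, skip straight to the last command.

```shell
# Build a stand-in .btx so the inspection command below is runnable.
workdir=$(mktemp -d)
printf '{"compatibility_metadata": {"bridge_town_version": "1.8.0"}}' \
  > "$workdir/manifest.json"
( cd "$workdir" && python3 -c \
  "import zipfile; zipfile.ZipFile('demo.btx', 'w').write('manifest.json')" )

# Read the producing version straight out of the archive.
python3 -c "
import json, sys, zipfile
manifest = json.loads(zipfile.ZipFile(sys.argv[1]).read('manifest.json'))
print(manifest['compatibility_metadata']['bridge_town_version'])
" "$workdir/demo.btx"
```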
Missing credentials for external data sources
Bridge Town does not export OAuth tokens, encrypted refresh tokens, or data-source credentials of any kind. The archive contains the data-source configuration (Google Sheet URLs, schedules, mappings) but the importing user must re-authorize each source on the target instance:
- Open the imported project in the target UI.
- For each data source listed under Data, click Reconnect and complete the OAuth flow.
- Trigger a fresh snapshot pull (the `refresh_data_source` MCP tool) to populate the target instance’s S3 bucket.
The runs embedded in the archive remain readable because they reference the captured snapshot at the time of export — only new runs against external data sources need fresh credentials.
Archive too large or upload timeout
The default `--timeout 600` covers archives up to roughly 1 GiB on a typical
home connection. For larger archives:
- Re-run the export with fewer runs: `bridge-town export <slug> --include-runs 0`.
- Drop dashboards: `bridge-town export <slug> --no-include-dashboards`.
- Increase the CLI timeout: `bridge-town import archive.btx --timeout 3600`.
- Split the project into smaller scopes if you hit the server-side `mcp_max_body_size` limit (default 150 MiB for multipart uploads).
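You can also check an archive against the multipart limit before attempting the upload. A sketch: the 150 MiB value mirrors the default `mcp_max_body_size` above (the actual limit is configurable server-side), and a throwaway temp file stands in for a real archive.

```shell
# Pre-flight check: compare the archive size against the server's default
# multipart limit. A temp file stands in for a real .btx here.
archive=$(mktemp)
printf 'stand-in archive bytes' > "$archive"

limit=$((150 * 1024 * 1024))
size=$(wc -c < "$archive" | tr -d ' ')

if [ "$size" -gt "$limit" ]; then
  echo "archive is ${size} bytes (over ${limit}); trim runs or dashboards" >&2
  exit 1
fi
echo "ok: ${size} bytes is under the ${limit}-byte limit"
```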
Gitea restore failure (reconstruction_failed)
Most Gitea reconstruction failures come from one of:
- The target Gitea instance does not have the importing user provisioned. Run `make dev-init` on a local stack so the script seeds an admin token.
- `GITEA_ADMIN_TOKEN` is missing or revoked in the target environment. Mint a new admin token in Gitea and put it in `.env`, then `docker compose restart mcp_server`.
- The target Gitea is out of disk space. Bridge Town stores full git history in the archive (`bundle.git`), which can be large for projects with many branches.
The import job’s `error_message` field captures the underlying Gitea error — inspect it via `bridge-town import` stdout or the Imports page in the UI.
S3 object restore failure (reconstruction_failed)
If the archive includes cached dashboard HTML or run outputs, the importer re-uploads them to the target instance’s S3 bucket. Failures here usually mean:
- Local stack: MinIO is not running. Check `docker compose ps minio` and the `S3_ENDPOINT_URL` / `AWS_ACCESS_KEY_ID` values in `.env`.
- AWS deployments: the IAM role for the importer lacks `s3:PutObject` on the configured `S3_BUCKET`.
- Bucket-policy misconfiguration: object ACLs or KMS encryption mismatches block the put. The error message includes the underlying boto3 exception text.
Project export is not enabled / 501 Not Implemented
The server has the corresponding feature flag turned off. Set `BTX_EXPORT_ENABLED=true` (or `BTX_IMPORT_ENABLED=true`) in the server’s environment and restart the MCP server. For SaaS users, contact support.
403 Forbidden on import
Either the calling token does not have at least editor role on the target workspace, or `--conflict-policy overwrite` was used by a non-owner. Use `bridge-town import --conflict-policy fail` and have an owner perform the overwrite if needed.
See also
- Self-Hosting with Docker Compose — how to boot the local stack referenced in the production-to-local workflow above.
- `export_project` and `import_project` MCP tools — the same surfaces invoked by AI agents.
- The `.btx` archive design spec — full schema, RBAC matrix, and audit-event catalogue.