
Self-Hosting

BunBase is designed to run comfortably on a $6/mo VPS. This guide covers bare-metal deployment on Linux with a systemd service and Caddy for automatic HTTPS.

Prerequisites:

  • Linux VPS (Debian/Ubuntu recommended), 1 CPU, 512 MB RAM minimum
  • A domain name pointed at the VPS IP
  • Bun installed (bun.sh/install)
Install Bun:

```sh
curl -fsSL https://bun.sh/install | bash
source ~/.bashrc
bun --version
```
As root, create a dedicated service user and the data directories:

```sh
adduser --disabled-password bunbase
mkdir -p /opt/bunbase /var/bunbase/data
chown -R bunbase:bunbase /opt/bunbase /var/bunbase/data
```
Switch to the service user, clone the repository, and install dependencies:

```sh
su - bunbase
git clone https://github.com/palmcode-ae/bunbase /opt/bunbase
cd /opt/bunbase
bun install --frozen-lockfile --production
```

Note that the systemd unit below runs Bun from /home/bunbase/.bun/bin/bun, so if you installed Bun as root, run the install script once more as the bunbase user.
Copy the example environment file into place and fill it in:

```sh
cp .env.example /etc/bunbase.env
# Edit with your values — SECRET_KEY and ADMIN_SECRET are required in production.
nano /etc/bunbase.env
```

Generate strong secrets:

```sh
openssl rand -hex 32   # for SECRET_KEY
openssl rand -hex 24   # for ADMIN_SECRET
```
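
Pulling together the variables mentioned throughout this guide, a minimal /etc/bunbase.env might look like the sketch below. Values are placeholders — check .env.example in the repo for the authoritative list.

```
SECRET_KEY=<output of openssl rand -hex 32>
ADMIN_SECRET=<output of openssl rand -hex 24>
APP_URL=https://your-domain.com
PUBLIC_URL=https://your-domain.com
DATA_DIR=/var/bunbase/data
PORT=8080
```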

Create /etc/systemd/system/bunbase.service:

```ini
[Unit]
Description=BunBase server
After=network.target
Wants=network-online.target

[Service]
Type=simple
User=bunbase
Group=bunbase
WorkingDirectory=/opt/bunbase
EnvironmentFile=/etc/bunbase.env
ExecStart=/home/bunbase/.bun/bin/bun run apps/server/src/index.ts
Restart=on-failure
RestartSec=5s
StandardOutput=journal
StandardError=journal
SyslogIdentifier=bunbase

# Harden the process
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/var/bunbase/data

[Install]
WantedBy=multi-user.target
```

Enable and start:

```sh
systemctl daemon-reload
systemctl enable --now bunbase
systemctl status bunbase
journalctl -u bunbase -f   # follow logs
```

Install Caddy (caddyserver.com) then edit /etc/caddy/Caddyfile.

BunBase workers share port 8080 via reusePort — the OS kernel load-balances connections across them. Caddy only needs to proxy to a single address:

```
your-domain.com {
	reverse_proxy localhost:8080 {
		# Pass real client IP for rate limiting and audit logs
		header_up X-Real-IP {remote_host}
		header_up X-Forwarded-Proto {scheme}

		# Health check — returns 503 when DB is unreachable
		health_uri /api/v1/admin/health
		health_interval 10s
		health_timeout 5s
	}

	# Match BunBase's 1 GB maxRequestBodySize for file uploads
	request_body {
		max_size 1GB
	}
}
```
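
You can confirm the workers really are sharing the port by listing the listening sockets: with SO_REUSEPORT, each worker owns its own LISTEN entry on :8080 (the filter below is standard iproute2 `ss` syntax).

```sh
# One LISTEN row per BunBase worker indicates the kernel is
# load-balancing accepted connections across them.
ss -ltn 'sport = :8080'
```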

When running multiple BunBase instances across machines, configure REDIS_URL on every node (required for distributed rate limiting, session cache, realtime fanout, and presence). Then point Caddy at all nodes:

```
your-domain.com {
	reverse_proxy {
		to node1:8080 node2:8080 node3:8080
		lb_policy least_conn
		health_uri /api/v1/admin/health
		health_interval 10s
		health_timeout 5s
		header_up X-Real-IP {remote_host}
		header_up X-Forwarded-Proto {scheme}
	}

	# WebSocket connections use ip_hash so each client stays
	# on the same node for the duration of the connection.
	@realtime path /realtime
	reverse_proxy @realtime {
		to node1:8080 node2:8080 node3:8080
		lb_policy ip_hash
	}

	request_body {
		max_size 1GB
	}
}
```
Reload Caddy to apply the config:

```sh
systemctl restart caddy
```
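
For the multi-node deployment described above, every node's /etc/bunbase.env would then point at the same Redis instance — the hostname here is a placeholder:

```
# Identical on node1, node2, and node3
REDIS_URL=redis://redis-host:6379
```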

Caddy automatically provisions and renews a Let’s Encrypt certificate.

Add your public domain to the environment file so OAuth callbacks, magic links, and storage URLs are correct:

```sh
# In /etc/bunbase.env
APP_URL=https://your-domain.com
PUBLIC_URL=https://your-domain.com
```
Then verify the health endpoint:

```sh
curl https://your-domain.com/api/v1/admin/health
```

Expected response:

```json
{
  "status": "ok",
  "db": "ok",
  "version": "0.1.0",
  "uptime_ms": 1234,
  "memory": { "rss_mb": 42, "heap_used_mb": 28, "heap_total_mb": 64 },
  "server": { "pending_requests": 0, "pending_websockets": 0, ... },
  "cluster": { "active_workers": 4, ... }
}
```
  • status is "ok" when everything is healthy, "degraded" when the DB worker is unreachable (returns 503).
  • db is "ok" or "error" — use this for load balancer health checks to automatically remove an instance when its DB connection is broken.
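
Because a degraded node answers with HTTP 503, a watchdog or load balancer can key off curl's `--fail` exit code alone, without parsing the JSON body. A minimal probe (the URL is a placeholder) might look like:

```sh
# curl -f exits non-zero on any HTTP error status, so a 503 from a
# degraded node marks it unhealthy without inspecting the response.
if curl -fsS --max-time 5 "https://your-domain.com/api/v1/admin/health" >/dev/null 2>&1; then
  echo healthy
else
  echo unhealthy
fi
```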

BunBase stores everything in DATA_DIR. Back it up with:

```sh
# SQLite — use the backup API to get a consistent snapshot
sqlite3 /var/bunbase/data/bunbase.db ".backup /var/bunbase/data/bunbase.db.bak"
# Then sync to remote
rsync -az /var/bunbase/data/ backup-host:/backups/bunbase/
```

Schedule via cron:

```
0 3 * * * /usr/local/bin/bunbase-backup.sh
```
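
The cron line assumes a small wrapper script that this guide does not ship; a hypothetical sketch of /usr/local/bin/bunbase-backup.sh, combining the two commands above (paths and the rsync destination are assumptions — adjust to your layout):

```sh
#!/usr/bin/env sh
# Hypothetical bunbase-backup.sh: snapshot the SQLite DB, then ship the
# whole data dir (including the fresh .bak) to the backup host.
set -eu
DATA_DIR="${DATA_DIR:-/var/bunbase/data}"
DB="$DATA_DIR/bunbase.db"
if [ -f "$DB" ]; then
  # .backup drives SQLite's online backup API, so the snapshot is
  # consistent even while BunBase is mid-write.
  sqlite3 "$DB" ".backup '$DB.bak'"
  rsync -az "$DATA_DIR/" backup-host:/backups/bunbase/
else
  echo "no database at $DB — nothing to back up"
fi
```

Remember to `chmod +x` the script after creating it.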

See docker/docker-compose.yml for a ready-to-use Docker Compose setup.

```sh
cp .env.example .env
# Edit .env
docker compose -f docker/docker-compose.yml up -d
```

BunBase ships deploy configs for three managed platforms. All three use the same Docker image (docker/Dockerfile), mount a persistent volume at /data, and expose port 8080.

Note on internal port. The Dockerfile sets PORT=8080 and EXPOSE 8080. Keep this value unless you also change the Dockerfile.

Fly.io is the recommended managed option — cheap, fast cold starts, and proper persistent volumes on free and paid plans.

Prerequisites: install flyctl and run fly auth login.

```sh
# 1. Clone the repo and cd in
git clone https://github.com/palmcode-ae/bunbase && cd bunbase
# 2. Register the app (fly.toml is already present)
fly launch --no-deploy
# 3. Create the persistent volume (1 GB in US East)
fly volumes create data --size 1 --region iad
# 4. Set the required secret
fly secrets set SECRET_KEY=$(openssl rand -hex 32)
# 5. Set your public URL (replace with your actual app name)
fly secrets set PUBLIC_URL=https://bunbase.fly.dev
# 6. Deploy
fly deploy
```

After the first deploy, visit https://<your-app>.fly.dev/api/v1/admin/health to confirm it is running.

To use a custom domain: fly certs add your-domain.com.

Cost: the default config runs a single shared-1x-512mb machine that auto-stops when idle, so you only pay for actual compute time.
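
The repo's fly.toml already encodes this behavior; for orientation, the relevant settings likely resemble the excerpt below. This is illustrative only — defer to the file shipped in the repo.

```toml
# Illustrative excerpt, not the shipped fly.toml
[http_service]
  internal_port = 8080          # matches the Dockerfile's PORT/EXPOSE
  auto_stop_machines = "stop"   # idle machines stop, so you pay per use
  auto_start_machines = true
  min_machines_running = 0

[[vm]]
  size = "shared-1x-512mb"

[mounts]
  source = "data"
  destination = "/data"
```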


Railway auto-detects railway.json and builds from docker/Dockerfile.

Prerequisites: install the Railway CLI or connect via the dashboard.

  1. Go to railway.app → New Project → Deploy from GitHub repo.
  2. Select the BunBase repo. Railway detects railway.json automatically.
  3. Add environment variables in the Variables tab:
    • SECRET_KEY — generate with openssl rand -hex 32
    • PUBLIC_URL — set to your Railway-assigned domain once the service is created
    • DATA_DIR — set to /data
  4. Add a Volume under the service settings, mounted at /data. Persistent volumes require the Pro plan or the Railway Volumes addon.
  5. Redeploy.
Or deploy with the CLI:

```sh
railway login
railway link   # link to an existing project, or create one
railway up     # deploys from the repo root
```

Set variables from the CLI:

```sh
railway variables set SECRET_KEY=$(openssl rand -hex 32)
railway variables set DATA_DIR=/data
```

Note: Without a persistent volume, the SQLite database and uploaded files are lost on each redeploy. Enable volumes before going to production.


Render auto-detects render.yaml when you connect your GitHub repo.

  1. Go to render.com → New → Blueprint → connect the BunBase repo.
  2. Render reads render.yaml and creates the web service and the 1 GB persistent disk automatically.
  3. In the service’s Environment tab, add SECRET_KEY as a secret env var:
    ```sh
    openssl rand -hex 32   # paste the output as the value
    ```
  4. PUBLIC_URL is forwarded automatically from RENDER_EXTERNAL_URL — no manual step needed.
  5. Click Apply (or push a commit) to trigger the first deploy.

Verify once live:

```sh
curl https://<your-service>.onrender.com/api/v1/admin/health
```

Note: Persistent disks on Render require a paid plan (Starter or higher). The render.yaml sets plan: starter.

If you use Nginx instead of Caddy, an equivalent reverse-proxy server block looks like this:

```nginx
server {
    listen 443 ssl;
    server_name your-domain.com;
    ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The Upgrade and Connection headers are required for WebSocket (realtime) support.
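
You can sanity-check the proxy's WebSocket handling from the command line by hand-rolling the opening handshake against the /realtime path; a correctly forwarded upgrade should answer with status 101 on the first line (the domain is a placeholder):

```sh
# "HTTP/1.1 101 Switching Protocols" means Upgrade/Connection made it
# through the proxy; a 200 or 400 usually means they were dropped.
curl -si --max-time 5 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(openssl rand -base64 16)" \
  https://your-domain.com/realtime | head -n 1
```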