# Prerequisites

This document outlines everything you need to prepare **before** running `sfp server init`. Completing these steps in advance will ensure a smooth installation.

Download the sfp CLI from [source.flxbl.io/flxbl/sfp-pro/releases](https://source.flxbl.io/flxbl/sfp-pro/releases). Install it on whichever machine will run the init command: your local workstation (for remote installation over SSH) or the server itself (for local installation).

***

## 1. Server Requirements

### Hardware

| Tier            | vCPU | RAM   | Disk       | Suitable For                               |
| --------------- | ---- | ----- | ---------- | ------------------------------------------ |
| **Minimum**     | 4    | 16 GB | 100 GB SSD | Small team (< 10 developers)               |
| **Recommended** | 8    | 32 GB | 250 GB SSD | Medium team (10–50 developers)             |
| **Production**  | 16   | 64 GB | 500 GB SSD | Large team (50+), heavy parallel workflows |

> Disk should be SSD. Workflow orchestration and database queries are I/O intensive.
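
As a rough gauge of whether a volume is fast enough, you can measure sequential write throughput with `dd`. This is a crude but dependency-free check; the 256 MiB size is arbitrary:

```bash
# Rough sequential-write throughput check. Writes a 256 MiB file and
# syncs it to disk before dd reports a rate, then cleans up.
# Note: set TMPDIR to a path on the disk under test; /tmp may be RAM-backed.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=256 conv=fdatasync 2>&1 | tail -n1
rm -f "$tmpfile"
```

A healthy SSD should report well over 100 MB/s; spinning disks and throttled network volumes will be visibly slower.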

### Operating System

* **Linux x86\_64** (Ubuntu 22.04+ or Debian 12+ recommended)
* ARM64 (e.g., AWS Graviton) is **not supported** at this time

### Software

| Software       | Minimum Version | How to Check             |
| -------------- | --------------- | ------------------------ |
| Docker Engine  | 24.x+           | `docker --version`       |
| Docker Compose | v2.x (plugin)   | `docker compose version` |
| sfp CLI        | Latest          | `sfp --version`          |

> Docker Compose must be the v2 plugin (`docker compose`), not the legacy standalone `docker-compose`.
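
The checks above can be combined into a small script. This is an illustrative sketch: it extracts the first dotted version number from `docker --version` and compares it using `sort -V`:

```bash
# Returns success if $1 >= $2 when compared as dotted version numbers.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Check Docker Engine version against the 24.x minimum.
docker_ver=$(docker --version 2>/dev/null | grep -oE '[0-9]+(\.[0-9]+)+' | head -n1)
if [ -n "$docker_ver" ] && version_ge "$docker_ver" "24.0"; then
  echo "Docker Engine $docker_ver OK"
else
  echo "Docker Engine missing or older than 24.x" >&2
fi

# Check that the Compose v2 plugin (not legacy docker-compose) is present.
if docker compose version >/dev/null 2>&1; then
  echo "Docker Compose v2 plugin OK"
else
  echo "Docker Compose v2 plugin missing" >&2
fi
```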

***

## 2. Network & DNS

### Step 1: Choose your domain

Pick a subdomain for your SFP Server. Examples:

* `flxbl.yourcompany.com`
* `codev.yourcompany.com`
* `devops.internal.yourcompany.com` (private/internal domains work too)

### Step 2: Create a DNS record

In your DNS provider (Cloudflare, Route 53, GoDaddy, etc.), create an **A record**:

| Field     | Value                                                   |
| --------- | ------------------------------------------------------- |
| **Type**  | A                                                       |
| **Name**  | `sfp` (or your chosen subdomain)                        |
| **Value** | Your server's IP address                                |
| **Proxy** | Off / DNS-only (if using Cloudflare, set to grey cloud) |
| **TTL**   | Auto or 300                                             |

### Step 3: Verify DNS resolves

```bash
dig sfp.yourcompany.com +short
# Should return your server IP
```

If using a private/internal domain, verify from a machine on the same network.

### Step 4: Choose your TLS mode

SFP Server uses **Caddy** as its reverse proxy. Caddy handles TLS termination and supports three modes:

| Mode                     | Best for                                    | What you need                   | What the server does                |
| ------------------------ | ------------------------------------------- | ------------------------------- | ----------------------------------- |
| **Bring Your Own Cert**  | Enterprise / private networks, corporate CA | PEM cert + key files            | Caddy loads your cert files         |
| **Let's Encrypt**        | Public-facing servers, quick evaluation     | Ports 80 + 443 open, public DNS | Caddy auto-obtains and renews certs |
| **Behind Load Balancer** | Existing infrastructure (ALB, F5, NGINX)    | LB handles TLS                  | Server runs HTTP internally         |

#### Option A: Bring Your Own Certificate (Recommended for enterprise)

If the server is on a **private network** or you manage certificates through your own PKI / corporate CA:

1. Obtain a TLS certificate and private key for your domain (from your internal CA or a commercial CA)
2. You need two files in **PEM format**:
   * Certificate (full chain recommended): `origin.pem`
   * Private key: `origin-key.pem`
3. Copy them to the server:

   ```bash
   scp origin.pem origin-key.pem admin@sfp.yourcompany.com:/tmp/
   ```
4. During `sfp server init`, the CLI will place them in the correct location automatically

> If you have `.crt` + `.key` files, they are typically already PEM-encoded (text beginning with `-----BEGIN`); just rename:
>
> ```bash
> cp your-domain.crt origin.pem
> cp your-domain.key origin-key.pem
> ```

**Advantages:** No inbound ports required. Works on private networks and air-gapped environments. Fits into existing corporate certificate management workflows.
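
Before copying the files, it is worth a quick sanity check that the certificate and key actually belong together. This illustrative helper compares the public key embedded in each file (requires `openssl`):

```bash
# Succeeds if the certificate and private key carry the same public key.
certs_match() {
  local cert_pub key_pub
  cert_pub=$(openssl x509 -in "$1" -noout -pubkey 2>/dev/null) || return 1
  key_pub=$(openssl pkey -in "$2" -pubout 2>/dev/null) || return 1
  [ "$cert_pub" = "$key_pub" ]
}

# Example:
#   certs_match origin.pem origin-key.pem && echo "cert and key match"
```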

#### Option B: Automatic TLS via Let's Encrypt

If the server is **internet-accessible** and you prefer zero-configuration TLS:

1. Ensure your domain's DNS resolves to the server's public IP (Step 2 above)
2. Open ports **80** and **443** inbound:

   ```bash
   sudo ufw allow 80/tcp    # ACME challenge
   sudo ufw allow 443/tcp   # HTTPS
   ```
3. No certificate files needed — Caddy obtains and renews certificates automatically

#### Option C: Behind a Load Balancer

If your existing infrastructure (AWS ALB, Azure Application Gateway, F5, NGINX) already handles TLS:

1. Configure your load balancer to terminate TLS on port 443
2. Forward traffic to the server on port 80 (HTTP)
3. Ensure the LB sets `X-Forwarded-Proto: https` and `X-Forwarded-For` headers
4. Pass the `--no-caddy` flag during `sfp server init`

### Firewall / Security Group

| Port     | Protocol | Purpose                                 | Required When                |
| -------- | -------- | --------------------------------------- | ---------------------------- |
| **443**  | TCP      | HTTPS — main application entry point    | Always (except Option C)     |
| **80**   | TCP      | Let's Encrypt ACME challenge            | Only for Option B (auto-TLS) |
| **8080** | TCP      | Hatchet Dashboard (workflow monitoring) | Restrict to admin IPs        |
| **4873** | TCP      | Verdaccio (private npm registry)        | Restrict to admin/CI IPs     |
| **3100** | TCP      | Supabase Studio (database management)   | Restrict to admin IPs        |

> Ports 8080, 4873, and 3100 are **IP-restricted** by Caddy. To configure the allowlist, set `ALLOWED_IPS` in your `.env` file (comma-separated IPs or CIDR ranges). These ports do not need to be open to all users.

### SSH Access

The `sfp server init` command can be run **remotely** from your local machine via SSH, or **locally** on the server itself. If running remotely:

* The server must be accessible via SSH from the machine running `sfp`
* The SSH user must have permission to run `docker` commands (e.g., member of the `docker` group, or `root`)

### Outbound Access (Server)

The server requires outbound internet access for:

* Docker image pulls (GitHub Container Registry, Docker Hub) — during init and updates
* GitHub API — for OAuth authentication and repository operations at runtime

If using **Option B (auto-TLS)**, the server also needs:

* Let's Encrypt ACME (ports 80/443 outbound)

> The Supabase CLI is downloaded once during init as part of the database migration container. This happens inside Docker and requires the server to reach `github.com`.
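
A quick way to confirm outbound access before running init is to probe a few of the endpoints above (illustrative list; `curl -f` fails on HTTP errors and `-L` follows redirects):

```bash
# Probe an endpoint; succeeds only if it responds without an HTTP error.
check_url() {
  curl -fsSL --max-time 10 -o /dev/null "$1"
}

for url in https://ghcr.io https://api.github.com https://github.com; do
  if check_url "$url"; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done
```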

### What you'll provide to `sfp server init`

After completing this section, you should have:

| Value                          | Example                                       | Source           |
| ------------------------------ | --------------------------------------------- | ---------------- |
| **Domain**                     | `sfp.yourcompany.com`                         | Step 1           |
| **TLS mode**                   | `custom` or `letsencrypt`                     | Step 4           |
| **Cert files** (Option A only) | `origin.pem` + `origin-key.pem` on the server | Step 4, Option A |

***

## 3. GitHub OAuth Application

SFP uses GitHub as the authentication provider. You must create a **GitHub OAuth App** before installation.

### Steps

1. Go to **GitHub** → **Settings** → **Developer Settings** → **OAuth Apps** → **New OAuth App**
   * Direct link: <https://github.com/settings/developers>
2. Fill in the following:

   | Field                          | Value                                            |
   | ------------------------------ | ------------------------------------------------ |
   | **Application name**           | `SFP Server` (or any name you prefer)            |
   | **Homepage URL**               | `https://flxbl.yourcompany.com`                  |
   | **Authorization callback URL** | `https://flxbl.yourcompany.com/auth/v1/callback` |

   > Replace `flxbl.yourcompany.com` with your actual domain.
3. Click **Register application**
4. Note down the following credentials — you will need them during `sfp server init`:
   * **Client ID** — displayed on the app page
   * **Client Secret** — click "Generate a new client secret"

> **Important:** Store the Client Secret securely. You will not be able to view it again after navigating away.

### GitHub Enterprise Server

If you are using **GitHub Enterprise Server** (GHES) instead of github.com, the OAuth App is created at:

`https://<your-ghes-domain>/settings/developers`

The callback URL remains the same pattern: `https://flxbl.yourcompany.com/auth/v1/callback`

### Organization-Level OAuth Apps

If your GitHub organization has OAuth app restrictions enabled, an org admin may need to approve the app after creation:

1. Go to your GitHub Organization → **Settings** → **Third-party access** → **OAuth application policy**
2. Approve the `SFP Server` application

***

## 4. Admin IP Addresses

Collect the public (or private, if on a corporate network) IP addresses of anyone who needs access to administrative dashboards:

* **Hatchet Dashboard** (workflow monitoring) — port 8080
* **Supabase Studio** (database management) — port 3100
* **Verdaccio** (npm registry browser) — port 4873

Set these as the `ALLOWED_IPS` value in your `.env` file after init:

* Single IPs: `ALLOWED_IPS=203.0.113.10`
* Multiple IPs: `ALLOWED_IPS=203.0.113.10,198.51.100.5`
* CIDR ranges: `ALLOWED_IPS=10.0.0.0/8,172.16.0.0/12`

> After changing `ALLOWED_IPS`, restart Caddy with `docker compose restart caddy` for the change to take effect.

***

## 5. Secrets / Configuration

During `sfp server init`, you will be prompted interactively for the required secrets listed below. All database credentials (Postgres password, JWT secret, Supabase API keys) are **auto-generated** — you do not need to create them.

Prepare the following values in advance:

| Secret                       | Required | Description                                  |
| ---------------------------- | -------- | -------------------------------------------- |
| `GITHUB_OAUTH_CLIENT_ID`     | Yes      | From Section 3 above                         |
| `GITHUB_OAUTH_CLIENT_SECRET` | Yes      | From Section 3 above                         |
| `GITHUB_APP_ID`              | Optional | For GitHub App integration (repo operations) |
| `GITHUB_APP_PRIVATE_KEY`     | Optional | For GitHub App integration                   |
| `SLACK_APP_TOKEN`            | Optional | For Slack integration                        |
| `SLACK_SIGNING_SECRET`       | Optional | For Slack integration                        |
| `SLACK_BOT_TOKEN`            | Optional | For Slack integration                        |

> For repeatable or automated deployments, you can also provide these values via a JSON config file (`--config-file`) or as environment variables instead of interactive prompts.
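
For repeatable deployments, a config file can carry the same values. The exact schema expected by `--config-file` is not documented here, so treat the fragment below purely as an illustration; the key names mirror the secrets table above:

```json
{
  "GITHUB_OAUTH_CLIENT_ID": "<client-id>",
  "GITHUB_OAUTH_CLIENT_SECRET": "<client-secret>",
  "SLACK_BOT_TOKEN": "<optional-slack-bot-token>"
}
```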

***

## 6. Pre-Installation Checklist

Use this checklist to confirm readiness before running `sfp server init`:

* [ ] Server provisioned with adequate resources (see Section 1)
* [ ] Docker Engine 24+ and Docker Compose v2 installed on the server
* [ ] SSH access to the server (if running `sfp server init` remotely)
* [ ] `sfp` CLI installed on the machine that will run the init command
* [ ] Domain DNS record configured and resolving to server IP
* [ ] TLS certificate and key ready (if using bring-your-own-cert)
* [ ] Firewall ports open: 443 (always) + 80 (only if using auto-TLS)
* [ ] Server has outbound internet access (Docker image pulls, GitHub API)
* [ ] GitHub OAuth App created with correct callback URL
* [ ] GitHub OAuth Client ID and Client Secret noted
* [ ] Admin IP addresses collected for dashboard access (configured post-init via `ALLOWED_IPS`)

***

## 7. Installation

Installation follows three steps: **initialize → configure → start**. The init command scaffolds the configuration; you then review and adjust it before starting the services.

### Step 1: Initialize

Set the Docker registry credentials and run `sfp server init`:

```bash
# Set registry credentials (required to pull private images)
export DOCKER_REGISTRY=ghcr.io
export DOCKER_REGISTRY_TOKEN=<your-registry-token>

# Remote installation (from your workstation via SSH):
sfp server init \
  --tenant my-company \
  --mode prod \
  --supabase-mode self-hosted \
  --base-dir /opt/sfp-server \
  --ssh-connection root@<your-server-ip> \
  --identity-file ~/.ssh/your-key.pem \
  --secrets-provider custom \
  --no-interactive \
  --force
```

```bash
# Or local installation (run directly on the server):
sfp server init \
  --tenant my-company \
  --mode prod \
  --supabase-mode self-hosted \
  --base-dir /opt/sfp-server \
  --secrets-provider custom \
  --no-interactive
```

> `--supabase-mode self-hosted` auto-generates all database credentials (Postgres password, JWT secret, API keys). You do not need a Supabase Cloud account.

This command will:

1. Validate prerequisites (Docker, Compose)
2. Generate database credentials and API keys
3. Create directory structure and Docker Compose configuration
4. Copy database migration files
5. Create a default admin user

Save the admin credentials printed at the end — they cannot be retrieved later.

### Step 2: Configure

> You **must** review and edit the generated `.env` file before starting services.

SSH into your server and edit the configuration:

```bash
ssh root@<your-server-ip>
cd /opt/sfp-server/tenants/my-company
```

Edit `.env` and verify/set these values:

| Variable     | Default                            | Action                                         |
| ------------ | ---------------------------------- | ---------------------------------------------- |
| `DOMAIN`     | `https://my-company.flxbl.io`      | **Change** to `https://flxbl.yourcompany.com`  |
| `IMAGE_FQDN` | `source.flxbl.io/flxbl/sfp-server` | **Change** to `ghcr.io/flxbl-io/sfp-server-rc` |
| `IMAGE_TAG`  | `latest`                           | **Change** to `development`                    |

```bash
# Quick sed commands:
sed -i 's|DOMAIN=.*|DOMAIN=https://flxbl.yourcompany.com|' .env
sed -i 's|IMAGE_FQDN=.*|IMAGE_FQDN=ghcr.io/flxbl-io/sfp-server-rc|' .env
sed -i 's|IMAGE_TAG=.*|IMAGE_TAG=development|' .env
```
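
Afterwards, confirm the substitutions took effect:

```bash
# Print the three values; warn if any expected key is missing.
grep -E '^(DOMAIN|IMAGE_FQDN|IMAGE_TAG)=' .env \
  || echo "WARN: expected keys not found in .env" >&2
```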

#### Configure TLS

The default Caddyfile is configured for Cloudflare Origin CA certificates. If you are using **Let's Encrypt** (Option B from Section 2), replace the Caddyfile:

```bash
cat > config/Caddyfile << 'EOF'
{
    admin off
}

flxbl.yourcompany.com {
    log {
        format console
        level INFO
    }

    handle /auth/v1* {
        reverse_proxy http://supabase-kong:8000
    }

    handle /sso/* {
        reverse_proxy http://supabase-auth:9999
    }

    handle /rest/* {
        respond "Not Found" 404
    }

    handle {
        reverse_proxy server:3029
    }

    handle_errors {
        respond "Error: {err.status_code} {err.message}"
    }
}
EOF
```

> Replace `flxbl.yourcompany.com` with your actual domain. Caddy will automatically obtain and renew a Let's Encrypt certificate.

If using **Bring Your Own Certificate** (Option A), place your cert files and keep the default Caddyfile:

```bash
# Copy your certs into the tenant directory
cp /tmp/origin.pem certs/origin.pem
cp /tmp/origin-key.pem certs/origin-key.pem
```

### Step 3: Start

```bash
cd /opt/sfp-server/tenants/my-company
docker compose --profile supabase up -d
```

> The `--profile supabase` flag is required for self-hosted deployments. It starts the embedded Supabase stack (PostgreSQL, Auth, Kong, REST API, Studio). Without it, only the application services start and the server will fail to connect to the database.

The first start pulls all Docker images and runs database migrations. This takes 2–5 minutes depending on network speed.

If you see `external volume "..." not found` errors, create the missing volumes:

```bash
# Create all required volumes for your tenant
for suffix in hatchet-db-data hatchet-config hatchet-certs victoriametrics-data victorialogs-data verdaccio-config verdaccio-storage verdaccio-data supabase-db-config supabase-db-data caddy-data caddy-config; do
  docker volume create "my-company-$suffix" 2>/dev/null
done

# Retry
docker compose --profile supabase up -d
```

> Replace `my-company` with your tenant name in the volume names.

Monitor startup progress:

```bash
# Watch container status
docker compose --profile supabase ps

# Check server logs
docker compose logs -f server
```

All containers should reach `Up` or `Up (healthy)` status. The server may take 30–60 seconds to become healthy while it auto-generates the database encryption key and connects to dependent services.
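
If you prefer to script the wait rather than watch logs, a small polling helper works (illustrative; it reuses the `/health` endpoint described in the post-installation section):

```bash
# Poll a health URL until it responds with success, or give up after
# the timeout (in seconds). Returns non-zero on timeout.
wait_for_health() {
  local url="$1" timeout="${2:-120}" elapsed=0
  until curl -fsS --max-time 5 -o /dev/null "$url"; do
    elapsed=$((elapsed + 5))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
    sleep 5
  done
}

# Example:
#   wait_for_health https://flxbl.yourcompany.com/health 120 && echo "server healthy"
```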

***

## 8. Post-Installation

### Verify the deployment

```bash
# From your local machine:
curl https://flxbl.yourcompany.com/health

# Expected response:
# {"status": "healthy", "version": "x.x.x"}
```

Open `https://flxbl.yourcompany.com` in a browser — you should see the application login page.

### Set up the first user

The admin user created during init has authentication credentials but no **team membership**. You must authenticate the CLI and add the admin to a team before they can create projects in the UI:

```bash
# Authenticate the CLI with the admin credentials from Step 1
sfp server auth login \
  --admin \
  --email admin@my-company.local \
  --password "<password-from-step-1>" \
  --sfp-server-url https://flxbl.yourcompany.com

# Create a team and add the admin as owner
sfp server user add \
  --firstname "Admin" \
  --lastname "User" \
  --target-email admin@my-company.local \
  --team "my-company" \
  --role owner \
  --sfp-server-url https://flxbl.yourcompany.com \
  --email admin@my-company.local
```

> Replace `my-company` with your tenant name and use the admin email and password printed during Step 1. The `--team` flag creates the team automatically if it doesn't exist.

After this, sign in to the application with the admin credentials and you should see the project creation page.

### Available services

| Service           | URL                                  | Notes                                           |
| ----------------- | ------------------------------------ | ----------------------------------------------- |
| Application       | `https://flxbl.yourcompany.com`      | Main web interface and API                      |
| Hatchet Dashboard | `https://flxbl.yourcompany.com:8080` | Workflow monitoring (IP-restricted)             |
| Supabase Studio   | `https://flxbl.yourcompany.com:3100` | Database management (IP-restricted, basic auth) |
| Verdaccio         | `https://flxbl.yourcompany.com:4873` | npm registry (IP-restricted)                    |

### Ongoing operations

| Task                   | Command                                                                             |
| ---------------------- | ----------------------------------------------------------------------------------- |
| Start services         | `cd /opt/sfp-server/tenants/my-company && docker compose --profile supabase up -d`  |
| Stop services          | `docker compose --profile supabase down`                                            |
| View logs              | `docker compose logs -f server`                                                     |
| Update to latest image | `docker compose --profile supabase pull && docker compose --profile supabase up -d` |
| Restart a service      | `docker compose restart server`                                                     |

You can also use the sfp CLI for remote operations:

```bash
sfp server start --tenant my-company --ssh-connection root@<ip> --identity-file ~/.ssh/key.pem
sfp server logs --tenant my-company --ssh-connection root@<ip> --identity-file ~/.ssh/key.pem
```

***

## Architecture Overview

All services run within Docker on a single host. Only Caddy is exposed externally — all other services communicate over an internal Docker network.

```
Users / CI
   │
   ▼
┌──────────────────────────────────────────────────┐
│  Caddy (reverse proxy + TLS)  :443               │
│                                                  │
│  /auth/v1/*  ──► Kong ──► GoTrue (auth)          │
│  /sfp/*      ──► SFP Server (API)                │
│  /*          ──► SFP Server (Web App)            │
│                                                  │
│  :8080  ──► Hatchet Dashboard  (IP-restricted)   │
│  :3100  ──► Supabase Studio    (IP-restricted)   │
│  :4873  ──► Verdaccio          (IP-restricted)   │
└──────────────────────────────────────────────────┘
         │ internal Docker network
         ▼
┌──────────────────────────────────────────────────┐
│  PostgreSQL (Supabase)    — application data     │
│  PostgreSQL (Hatchet)     — workflow state       │
│  Hatchet Engine + Workers — workflow orchestration │
│  VictoriaMetrics          — metrics storage      │
│  VictoriaLogs             — log storage          │
│  Verdaccio                — npm package registry │
└──────────────────────────────────────────────────┘
```

**Authentication:** The browser authenticates via GitHub OAuth through Caddy. The auth service (GoTrue) is never directly exposed — all auth requests are proxied through `https://your-domain.com/auth/v1/*`. No separate Supabase endpoint or port is required.

***

## Troubleshooting

### Common issues

| Symptom                                                                   | Cause                                                      | Fix                                                                                                  |
| ------------------------------------------------------------------------- | ---------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| `external volume "..." not found` during `docker compose up`              | Volumes not created by init                                | Run: `docker volume create <volume-name>` for each missing volume                                    |
| Caddy exits with `open /data/certs/origin.pem: no such file or directory` | Default Caddyfile expects Cloudflare Origin CA certs       | Replace Caddyfile with Let's Encrypt config (see Step 2 above)                                       |
| Server container `Up (healthy)` but `curl` returns 502                    | Caddy can't reach server — DNS or network mismatch         | Verify `DOMAIN` in `.env` matches the Caddyfile domain exactly                                       |
| Supabase containers not starting                                          | Missing `--profile supabase` flag                          | Use `docker compose --profile supabase up -d` instead of `docker compose up -d`                      |
| Server crashes with `VaultBootstrapService: fetch failed`                 | Supabase DB not running when server starts                 | Start with `--profile supabase`; if already running, restart server: `docker compose restart server` |
| GitHub OAuth callback URL mismatch                                        | URL in GitHub OAuth App settings doesn't match your domain | Must exactly match `https://<your-domain>/auth/v1/callback`                                          |
| Caddy cannot issue Let's Encrypt certificates                             | DNS not resolving to the server's IP                       | Verify the DNS A record: `dig your-domain.com +short`                                                |
| Let's Encrypt ACME challenge fails                                        | Firewall blocking port 80                                  | Open port 80 inbound (only required for Let's Encrypt)                                               |
| Certificate format error                                                  | Certs not PEM-encoded                                      | Ensure files are PEM (text starting with `-----BEGIN`); PEM `.crt`/`.key` files can simply be renamed |
