Configuring LLM Providers

| | sfp-pro | sfp (community) |
| --- | --- | --- |
| Availability | N/A | From October 25 |
| Features | | PR Linter, Reports, Error Analysis |

This guide covers the setup and configuration of Large Language Model (LLM) providers for AI-powered features in sfp:

  • Prerequisites

  • Installation Methods

Supported LLM Providers

sfp currently supports the following LLM providers through OpenCode:

| Provider | Status | Recommended | Best For |
| --- | --- | --- | --- |
| Anthropic (Claude) | ✅ Fully Supported | ⭐ Yes | Best overall performance, Flxbl framework understanding |
| OpenAI | ✅ Fully Supported | Yes | Wide model selection, good performance |
| Amazon Bedrock | ✅ Fully Supported | Yes | Enterprise environments with AWS infrastructure |
| GitHub Copilot | ✅ Fully Supported | Yes | Teams with existing Copilot subscriptions, no extra cost |

Provider Configuration

Anthropic (Claude)

Anthropic's Claude models provide the best understanding of Salesforce and Flxbl framework patterns. The default model is claude-sonnet-4-5-20250929, which offers an optimal balance between performance and cost.

Setup

Step 1: Environment Variable
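Set your API key in the environment. A sketch assuming sfp honors the standard ANTHROPIC_API_KEY variable used by Anthropic tooling (the key value is a placeholder):

```bash
# Anthropic API keys start with sk-ant- (see below)
export ANTHROPIC_API_KEY=sk-ant-...
```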

Step 2: Configuration File

Create or edit config/ai-assist.yaml (the filename referenced throughout this guide):
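A minimal sketch; the key names are assumptions, so treat this as a template and see the Configuration File Reference below:

```yaml
# config/ai-assist.yaml (illustrative sketch; field names are assumptions)
llm:
  provider: anthropic
  model: claude-sonnet-4-5-20250929   # default, recommended
```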

Getting an Anthropic API Key

  1. Sign up or log in to your account at console.anthropic.com

  2. Navigate to API Keys section

  3. Create a new API key for sfp usage

  4. Copy the key (starts with sk-ant-)

Claude Models Available:

  • claude-sonnet-4-5-20250929 - Recommended, best balance (default)

  • claude-opus-4-0-20250514 - Most capable, higher cost

OpenAI

OpenAI provides access to GPT models with good code analysis capabilities.

Setup

Step 1: Environment Variable
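As with Anthropic, a sketch assuming the standard OPENAI_API_KEY variable (the key value is a placeholder):

```bash
# OpenAI API keys start with sk- (see below)
export OPENAI_API_KEY=sk-...
```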

Step 2: Configuration File
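A sketch mirroring the Anthropic example (key names are assumptions; gpt-4.1 is shown only as an example model):

```yaml
# config/ai-assist.yaml (illustrative sketch)
llm:
  provider: openai
  model: gpt-4.1
```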

Getting an OpenAI API Key

  1. Sign up or log in at platform.openai.com

  2. Go to API Keys section

  3. Create a new secret key

  4. Copy the key (starts with sk-)

Amazon Bedrock

Amazon Bedrock is ideal for enterprise environments already using AWS infrastructure. It provides access to Claude models through AWS.

Setup

Step 1: AWS Authentication

Authenticate with AWS using a named profile, or supply a Bedrock bearer token via the AWS_BEARER_TOKEN_BEDROCK and AWS_REGION environment variables (see Troubleshooting below).
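A sketch of both options (the profile name is a placeholder):

```bash
# Option A: named AWS profile
export AWS_PROFILE=my-bedrock-profile   # placeholder profile name
export AWS_REGION=us-east-1

# Option B: Bedrock bearer token (both variables are required)
export AWS_BEARER_TOKEN_BEDROCK=<your-bearer-token>
export AWS_REGION=us-east-1
```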

Step 2: Configuration File
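A sketch (field names are assumptions; the model ID follows Bedrock's anthropic.<model>-v1:0 naming convention):

```yaml
# config/ai-assist.yaml (illustrative sketch)
llm:
  provider: bedrock
  model: anthropic.claude-sonnet-4-5-20250929-v1:0   # example Bedrock model ID
```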

Regional Considerations

Bedrock automatically handles model prefixes based on your AWS region:

  • US Regions: Models may require us. prefix

  • EU Regions: Models may require eu. prefix

  • AP Regions: Models may require apac. prefix

The OpenCode SDK handles this automatically based on your AWS_REGION.
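For illustration, the same example model ID resolved with each regional prefix:

```
us.anthropic.claude-sonnet-4-5-20250929-v1:0     # US regions
eu.anthropic.claude-sonnet-4-5-20250929-v1:0     # EU regions
apac.anthropic.claude-sonnet-4-5-20250929-v1:0   # AP regions
```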

GitHub Copilot

GitHub Copilot can be used if you have an active subscription with model access enabled.

Setup

Prerequisites:

  • Active GitHub Copilot subscription (Individual, Business, or Enterprise)

  • Models must be enabled in your GitHub Copilot settings

  • Visit GitHub Copilot Features to enable model access

Setup Methods

Method 1: Generate Token Using Script (Recommended)

sfp includes a helper script to generate Copilot tokens via the GitHub device flow:
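A hypothetical invocation; the script name and path are assumptions, so check your sfp installation for the actual script:

```bash
# Hypothetical path; consult the sfp repository for the shipped script
./scripts/generate-copilot-token.sh
```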

The script will:

  1. Request a device code from GitHub

  2. Display a URL and verification code

  3. Open your browser automatically (on supported systems)

  4. Poll for authorization completion

  5. Output the OAuth token (ghu_ prefixed)

After the script completes, set the token:
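For example (the token value is a placeholder):

```bash
export COPILOT_TOKEN=ghu_...
```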

Method 2: Environment Variable (CI/CD)

For CI/CD pipelines, set the COPILOT_TOKEN environment variable in your pipeline configuration.

In GitHub Actions:
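A sketch of exposing the token to a workflow (the secret name is an assumption):

```yaml
# .github/workflows/ai-lint.yml (illustrative snippet)
env:
  COPILOT_TOKEN: ${{ secrets.COPILOT_TOKEN }}
```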

How Token Exchange Works

sfp automatically handles the OAuth token exchange process:

  1. OAuth Token (ghu_ prefix): The token you obtain from device flow authentication

  2. API Token Exchange: sfp automatically exchanges the OAuth token for a Copilot API token via GitHub's internal API

  3. Transparent Process: This exchange happens automatically when you use --provider github-copilot

Configuration File
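A sketch consistent with the examples above (field names are assumptions):

```yaml
# config/ai-assist.yaml (illustrative sketch)
llm:
  provider: github-copilot
  model: claude-sonnet-4.5   # default
```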

Available Models

GitHub Copilot provides access to various models. The default is claude-sonnet-4.5:

| Model | Description | Notes |
| --- | --- | --- |
| claude-sonnet-4.5 | Claude Sonnet 4.5 | Default, recommended |
| gpt-4.1 | GPT-4.1 | Alternative option |

Model Naming: GitHub Copilot uses simplified model names without date suffixes (e.g., claude-sonnet-4.5 instead of claude-sonnet-4-5-20250929).

Configuration File Reference

The AI features are configured through config/ai-assist.yaml in your project root:
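A consolidated sketch of the file; key names are assumptions, so treat it as a template rather than the authoritative schema:

```yaml
# config/ai-assist.yaml (illustrative sketch)
llm:
  provider: anthropic               # anthropic | openai | bedrock | github-copilot
  model: claude-sonnet-4-5-20250929
```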

Testing Provider Configuration

The test command performs a complete health check:

  • Authentication: Verifies credentials are available

  • Connectivity: Confirms the provider endpoint is reachable

  • Response: Validates the model returns a valid response
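A hypothetical invocation; the exact subcommand is an assumption (only the --provider flag appears elsewhere in this guide), so verify against sfp's CLI help:

```bash
# Hypothetical; confirm the actual command via sfp's CLI help
sfp ai test --provider anthropic
```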

This command performs a simple inference test to verify:

  • Authentication is configured correctly

  • The provider is accessible

  • Model inference is working

  • Response time and performance

Environment Variables Reference
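All variables mentioned in this guide, collected in one place (the Anthropic and OpenAI variable names follow those providers' standard conventions and are assumptions here):

| Variable | Provider | Notes |
| --- | --- | --- |
| ANTHROPIC_API_KEY | Anthropic | Key starts with sk-ant- |
| OPENAI_API_KEY | OpenAI | Key starts with sk- |
| AWS_BEARER_TOKEN_BEDROCK | Amazon Bedrock | Required together with AWS_REGION |
| AWS_REGION | Amazon Bedrock | Also determines the regional model prefix |
| AWS_PROFILE | Amazon Bedrock | Alternative to bearer-token authentication |
| COPILOT_TOKEN | GitHub Copilot | OAuth token with ghu_ prefix |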

Usage Priority

When multiple authentication methods are available, sfp uses the following priority:

  1. Environment Variables - Highest priority, recommended for CI/CD

  2. Configuration File - From config/ai-assist.yaml

Troubleshooting

Provider Not Available

If a provider is reported as unavailable, confirm the corresponding environment variable from the reference above is set, then re-run the provider test.

AWS Bedrock Specific Issues

Both Environment Variables Required

  • Verify both AWS_BEARER_TOKEN_BEDROCK and AWS_REGION are set

Authentication Failed

  • Check that your bearer token is valid and not expired

  • Ensure your AWS account has access to Claude models in Bedrock

API Rate Limits

If you encounter rate limits:

Model Not Found

Ensure you're using the correct model identifier for your provider:
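The identifiers used elsewhere in this guide, per provider (the Bedrock ID is an example of the naming convention):

  • Anthropic: claude-sonnet-4-5-20250929

  • Amazon Bedrock: anthropic.claude-sonnet-4-5-20250929-v1:0 (regional prefix added automatically)

  • GitHub Copilot: claude-sonnet-4.5 (no date suffix)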
