Configuring LLM Providers
**Availability:** ✅ sfp-pro / N/A sfp (community)
**From:** October 25
**Features:** PR Linter, Reports, Error Analysis
This guide covers the setup and configuration of Large Language Model (LLM) providers for AI-powered features in sfp:
- AI-Powered PR Linter - sfp-pro only
- AI-Assisted Insight Reports - available in both sfp-pro and sfp (community)
- AI-Assisted Error Analysis - intelligent analysis of validation errors
Prerequisites
Before configuring a provider, install sfp by following the Installation Methods guide.
Supported LLM Providers
sfp currently supports the following LLM providers through OpenCode:
| Provider | Status | Recommended | Best For |
| --- | --- | --- | --- |
| Anthropic (Claude) | ✅ Fully Supported | ⭐ Yes | Best overall performance, Flxbl framework understanding |
| OpenAI | ✅ Fully Supported | Yes | Wide model selection, good performance |
| Amazon Bedrock | ✅ Fully Supported | Yes | Enterprise environments with AWS infrastructure |
| GitHub Copilot | ✅ Fully Supported | Yes | Teams with existing Copilot subscriptions, no extra cost |
Provider Configuration
Anthropic (Claude) - Recommended
Anthropic's Claude models provide the best understanding of Salesforce and Flxbl framework patterns. The default model is `claude-sonnet-4-5-20250929`, which offers an optimal balance of performance and cost.
Setup
Step 1: Environment Variable
Step 2: Configuration File
Create or edit `config/ai-assist.yaml`:
Getting an Anthropic API Key
1. Visit console.anthropic.com
2. Sign up or log in to your account
3. Navigate to the API Keys section
4. Create a new API key for sfp usage
5. Copy the key (starts with `sk-ant-`)
OpenAI
OpenAI provides access to GPT models with good code analysis capabilities.
Setup
Step 1: Environment Variable
Step 2: Configuration File
Getting an OpenAI API Key
1. Visit platform.openai.com
2. Sign up or log in
3. Go to the API Keys section
4. Create a new secret key
5. Copy the key (starts with `sk-`)
Amazon Bedrock
Amazon Bedrock is ideal for enterprise environments already using AWS infrastructure. It provides access to Claude models through AWS.
Setup
Step 1: AWS Profile
Step 3: Configuration File
Important: AWS Bedrock requires both AWS_BEARER_TOKEN_BEDROCK and AWS_REGION environment variables to be set. Authentication will fail if either is missing.
Bedrock Model Access: Ensure your AWS account has access to the Claude models in Bedrock. You may need to request access through the AWS Console under Bedrock > Model access.
Regional Considerations
Bedrock automatically handles model prefixes based on your AWS region:
- US regions: models may require the `us.` prefix
- EU regions: models may require the `eu.` prefix
- AP regions: models may require the `apac.` prefix
The OpenCode SDK handles this automatically based on your AWS_REGION.
GitHub Copilot
GitHub Copilot can be used if you have an active subscription with model access enabled.
Setup
Method 1: Generate Token Using Script (Recommended)
sfp includes a helper script to generate Copilot tokens via the GitHub device flow:
The script will:
1. Request a device code from GitHub
2. Display a URL and verification code
3. Open your browser automatically (on supported systems)
4. Poll for authorization completion
5. Output the OAuth token (prefixed with `ghu_`)
After the script completes, set the token:
Method 2: Environment Variable (CI/CD)
For CI/CD pipelines, set the COPILOT_TOKEN environment variable:
In GitHub Actions:
Important: Use COPILOT_TOKEN instead of GITHUB_TOKEN in CI/CD environments. The GITHUB_TOKEN is automatically set by GitHub Actions for repository operations and may conflict with Copilot authentication.
How Token Exchange Works
sfp automatically handles the OAuth token exchange process:
1. OAuth token (`ghu_` prefix): the token you obtain from device flow authentication
2. API token exchange: sfp automatically exchanges the OAuth token for a Copilot API token via GitHub's internal API
3. Transparent process: the exchange happens automatically when you use `--provider github-copilot`
Configuration File
Available Models
GitHub Copilot provides access to various models. The default is claude-sonnet-4.5:
| Model ID | Name | Notes |
| --- | --- | --- |
| `claude-sonnet-4.5` | Claude Sonnet 4.5 | Default, recommended |
| `gpt-4.1` | GPT-4.1 | Alternative option |
Configuration File Reference
The AI features are configured through config/ai-assist.yaml in your project root:
Testing Provider Configuration
The test command performs a complete health check:
- Authentication: verifies credentials are available
- Connectivity: confirms the provider endpoint is reachable
- Response: validates the model returns a valid response
This command performs a simple inference test to verify:
- Authentication is configured correctly
- The provider is accessible
- Model inference is working
- Response time and performance

Environment Variables Reference

| Variable | Provider | Purpose |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | Anthropic | API key (starts with `sk-ant-`) |
| `OPENAI_API_KEY` | OpenAI | API key (starts with `sk-`) |
| `AWS_BEARER_TOKEN_BEDROCK` | Amazon Bedrock | Bearer token; required together with `AWS_REGION` |
| `AWS_REGION` | Amazon Bedrock | AWS region, used to select regional model prefixes |
| `COPILOT_TOKEN` | GitHub Copilot | OAuth token from the device flow (starts with `ghu_`) |
Usage Priority
When multiple authentication methods are available, sfp uses the following priority:
1. Environment Variables - highest priority, recommended for CI/CD
2. Configuration File - from `config/ai-assist.yaml`
Troubleshooting
Provider Not Available
If a provider is reported as unavailable, run the test command described above to confirm that credentials are set and the provider endpoint is reachable.
AWS Bedrock Specific Issues
Both Environment Variables Required
Authentication Failed
- Verify both `AWS_BEARER_TOKEN_BEDROCK` and `AWS_REGION` are set (see the quick check below)
- Check that your bearer token is valid and not expired
- Ensure your AWS account has access to Claude models in Bedrock
API Rate Limits
If you encounter rate limits:
- Anthropic: check your usage at console.anthropic.com
- OpenAI: monitor at platform.openai.com/usage
- Bedrock: check AWS CloudWatch metrics
Model Not Found
Ensure you're using the correct model identifier for your provider:
- Anthropic: `claude-sonnet-4-5-20250929` (default)
- Amazon Bedrock: model IDs may need a regional prefix (`us.`, `eu.`, or `apac.`), applied automatically based on `AWS_REGION`
- GitHub Copilot: `claude-sonnet-4.5` (default) or `gpt-4.1`