AI Assisted Architecture Analysis

| Edition | Availability |
| --- | --- |
| sfp-pro | From October 25 |
| sfp (community) | Not Available |

The AI-powered review functionality provides intelligent architecture and code quality analysis during pull request reviews. This feature automatically analyzes changed files using advanced language models to provide contextual insights about architectural patterns, Flxbl framework compliance, and potential improvements.

Overview

The architecture analysis performs real-time analysis of pull request changes to:

  • Analyze architectural patterns and design consistency

  • Identify alignment with Flxbl framework best practices

  • Suggest improvements based on changed files context

  • Provide severity-based insights (info, warning, concern)

  • Generate actionable recommendations

How It Works

The AI-assisted architecture analyzer integrates into the project:analyze command and:

  1. Detects PR Context: Automatically identifies when running in a pull request environment

  2. Analyzes Changed Files: Focuses analysis on modified files only (up to 10 files for token optimization)

  3. Applies AI Analysis: Uses configured AI provider to analyze architectural patterns

  4. Reports Findings: Generates structured insights without failing the build (informational only)

  5. Creates GitHub Checks: Posts results as GitHub check annotations when running in CI

Prerequisites

This feature is exclusive to sfp-pro and not available in the community edition.

For complete setup instructions, see Configuring LLM Providers.

Configuration

The architecture analyzer is configured through a YAML configuration file at config/ai-architecture.yaml:

Minimal Configuration

For quick setup, create a minimal configuration:
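A minimal configuration can be as small as a single flag. The sketch below uses only the enabled key referenced elsewhere on this page; any additional keys would follow your sfp version's schema:

```yaml
# config/ai-architecture.yaml — minimal setup
# Only "enabled" is set; the AI provider and model are auto-detected.
enabled: true
```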

The linter will auto-detect available AI providers and use sensible defaults.

AI Provider Setup

For detailed provider configuration, see Configuring LLM Providers.

Quick Reference

| Provider | Default Model | Setup |
| --- | --- | --- |
| Anthropic (Recommended) | claude-sonnet-4-5-20250929 | export ANTHROPIC_API_KEY="sk-ant-xxx" |
| OpenAI | gpt-4o | export OPENAI_API_KEY="sk-xxx" |
| GitHub Copilot | gpt-4o | export COPILOT_TOKEN="ghu_xxx" |
| Amazon Bedrock | anthropic.claude-sonnet-4-5-20250929-v1:0 | export AWS_BEARER_TOKEN_BEDROCK + AWS_REGION |

The linter auto-detects providers in this priority order:

  1. Environment variables (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.)

  2. Configuration in ai-architecture.yaml

Usage in Pull Requests

Automatic PR Detection

When running in GitHub Actions or with PR environment variables:
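As a sketch (assuming no extra flags are required, since PR detection is automatic), the invocation inside a pull request workflow is just the analyze command itself:

```shell
# Inside GitHub Actions on a pull_request event, the PR context is
# picked up from the environment automatically — no flags required.
sfp project:analyze
```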

Manual Changed Files Specification

For local testing or custom CI environments:
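As a sketch, you might pass the changed files explicitly. The flag name and file paths below are hypothetical placeholders — check sfp project:analyze --help for the actual option:

```shell
# --changed-files is a hypothetical flag name, shown for illustration only
sfp project:analyze --changed-files "src/classes/AccountService.cls,src/triggers/AccountTrigger.trigger"
```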

Understanding Results

The AI linter provides structured insights without failing builds:

Insight Types

  • Pattern: Architectural patterns observed or missing

  • Concern: Potential issues requiring attention

  • Suggestion: Improvement recommendations

  • Alignment: Framework compliance observations

Severity Levels

  • Info: Informational observations

  • Warning: Areas needing attention

  • Concern: Significant architectural considerations

Sample Output
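The exact report layout depends on the sfp release; illustratively, each finding combines the insight type and severity fields described above:

```
[warning] Concern — src/classes/OrderService.cls
  Business logic and DML are mixed in the same method.
  Recommendation: extract DML into a dedicated service layer.

[info] Alignment — src/packages/core/sfdx-project.json
  Package boundaries follow the Flxbl modular structure.
```

The file paths and messages above are invented for illustration; only the type and severity vocabulary comes from this page.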

Integration with CI/CD

AI linter results are informational only and never fail the build. This ensures PR checks remain stable even if AI providers are unavailable.

GitHub Actions Integration
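A workflow step might look like the following sketch. The step layout and secret names are assumptions; only the ANTHROPIC_API_KEY variable is taken from the provider table above:

```yaml
# .github/workflows/pr-analysis.yaml — illustrative sketch, not a verified template
name: PR Architecture Analysis
on: pull_request

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0              # full history so changed files can be diffed
      - name: Run AI architecture analysis
        run: sfp project:analyze
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # assumed, for GitHub check annotations
```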

Handling Rate Limits

The linter gracefully handles API limitations:

  • Rate Limits: Skips analysis with an informational message

  • Timeouts: 60-second timeout protection

  • Token Limits: Analyzes up to 10 files, content limited to 5KB per file

  • Failures: Never blocks PR merge (informational only)

Best Practices

1. Configure Focus Areas

Tailor analysis to your team's priorities:
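As a sketch, focus areas might be listed in the YAML config. The focusAreas key name and the values below are illustrative assumptions, not a documented schema:

```yaml
# config/ai-architecture.yaml — focus areas (key name and values are assumptions)
enabled: true
focusAreas:
  - separation of concerns
  - trigger framework usage
  - security and sharing rules
```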

2. Add Context Files

Provide architectural documentation for better analysis:
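A sketch of pointing the analyzer at existing architecture docs; the contextFiles key and the paths are assumed names for illustration:

```yaml
# config/ai-architecture.yaml — context files (key name and paths are assumptions)
enabled: true
contextFiles:
  - docs/architecture.md
  - docs/adr/0001-package-boundaries.md
```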

3. Use with Other Linters

Combine with other analysis tools for comprehensive coverage:

4. Token Optimization

For large PRs, the linter automatically:

  • Limits to 10 most relevant files

  • Truncates file content to 5KB

  • Focuses on text-based source files

Troubleshooting

Analysis Skipped

Common reasons and solutions:

  1. Not Enabled: Set enabled: true in config/ai-architecture.yaml

  2. No Provider: Configure API keys via environment variables (see Configuring LLM Providers)

  3. Rate Limited: Wait for the rate limit to reset or use a different provider

  4. No Changed Files: Ensure PR context is properly detected

Debugging

Enable debug logging for detailed information:
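As a sketch, sf/sfdx-style CLIs commonly accept a log-level flag; the exact flag below is an assumption — verify with sfp project:analyze --help:

```shell
# --loglevel is assumed from sf/sfdx CLI conventions
sfp project:analyze --loglevel debug
```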

This shows:

  • Provider detection process

  • Changed files identified

  • API calls and responses

  • Error details if analysis fails

Limitations

  1. Binary Files: Skips non-text files

  2. Build Impact: Never fails builds (informational only)

  3. Language Support: Best for Apex, JavaScript, TypeScript, XML
