# AI-Assisted Architecture Analysis

|              | sfp-pro    | sfp (community) |
| ------------ | ---------- | --------------- |
| Availability | ✅          | ❌               |
| From         | October 25 | Not Available   |

The AI-powered review functionality provides intelligent architecture and code quality analysis during pull request reviews. This feature automatically analyzes changed files using advanced language models to provide contextual insights about architectural patterns, Flxbl framework compliance, and potential improvements.

### Overview

The architecture analysis performs real-time analysis of pull request changes to:

* Analyze architectural patterns and design consistency
* Identify alignment with Flxbl framework best practices
* Suggest improvements based on changed files context
* Provide severity-based insights (info, warning, concern)
* Generate actionable recommendations

### How It Works

The AI-assisted architecture analyzer integrates into the `project:analyze` command and:

1. **Detects PR Context**: Automatically identifies when running in a pull request environment
2. **Analyzes Changed Files**: Focuses analysis on modified files only (up to 10 files for token optimization)
3. **Applies AI Analysis**: Uses configured AI provider to analyze architectural patterns
4. **Reports Findings**: Generates structured insights without failing the build (informational only)
5. **Creates GitHub Checks**: Posts results as GitHub check annotations when running in CI

### Prerequisites

{% hint style="info" %}
This feature is exclusive to sfp-pro and not available in the community edition.
{% endhint %}

For complete setup instructions, see [Configuring LLM Providers](https://docs.flxbl.io/flxbl/sfp/getting-started/configuring-llm-providers).

### Configuration

The architecture analyzer is configured through a YAML configuration file. The file is checked in this order:

1. `config/ai-assist.yaml` — preferred name for new projects
2. `config/ai-architecture.yaml` — legacy name, still fully supported

```yaml
# config/ai-assist.yaml
# (also accepted as config/ai-architecture.yaml for legacy projects)

enabled: true

# AI provider to use. Auto-detected from environment variables if omitted.
# Valid values: anthropic, openai, google, amazon-bedrock, github-copilot
provider: anthropic

# Optional: specific model to use. Defaults to the provider's recommended model.
model: claude-sonnet-4-5

# Timeout in milliseconds. A minimum of 600000 (10 minutes) is recommended.
timeout: 600000

# Salesforce/FLXBL architectural patterns the AI should recognize and evaluate.
patterns:
  - "Service Layer Pattern"
  - "Repository Pattern"
  - "Trigger Handler Pattern"
  - "Selector Pattern"
  - "Domain Layer Pattern"
  - "Unit of Work Pattern"

# Guiding principles the AI should use when assessing the changes.
principles:
  - "Separation of Concerns"
  - "Single Responsibility Principle"
  - "Bulkification of DML and SOQL"
  - "Security (CRUD/FLS enforcement)"
  - "Governor Limit Awareness"

# Aspects of the code the AI should focus its analysis on.
focusAreas:
  - "Error Handling and Logging"
  - "Governor Limits"
  - "Test Coverage and Quality"
  - "Security and Sharing Model"
  - "Trigger Best Practices"

# Additional repository files to include as context for the AI.
# Paths are relative to the repository root.
contextFiles:
  - "docs/architecture.md"
  - "docs/patterns.md"
  - "README.md"

# Change significance thresholds (see below).
changeSignificance:
  excludedMetadataTypes:
    - "CustomLabel"
    - "Translation"
    - "StaticResource"
  fileTypeThresholds:
    apex:
      lines: 50
      files: 3
    flows:
      lines: 100
      files: 2
    lwc:
      lines: 80
      files: 3
    default:
      lines: 200
      files: 5
  ignoredFilePatterns:
    - "**/*.md"
    - "**/test/**"
    - "**/__tests__/**"
```

#### Minimal Configuration

For quick setup, create a minimal configuration:

```yaml
enabled: true
```

The linter will auto-detect available AI providers and use sensible defaults.

### Configuration Field Reference

| Field                                      | Type       | Default          | Description                                                                |
| ------------------------------------------ | ---------- | ---------------- | -------------------------------------------------------------------------- |
| `enabled`                                  | `boolean`  | `true`           | Whether the architecture linter is active (still gated by `analyze.yaml`). |
| `provider`                                 | `string`   | auto-detect      | AI provider to use.                                                        |
| `model`                                    | `string`   | provider default | Specific model to use.                                                     |
| `timeout`                                  | `number`   | `600000`         | Milliseconds before the AI call is abandoned.                              |
| `patterns`                                 | `string[]` | `[]`             | Architectural pattern names for the AI to recognize.                       |
| `principles`                               | `string[]` | `[]`             | Guiding principles for the AI's evaluation.                                |
| `focusAreas`                               | `string[]` | `[]`             | Specific areas of concern for the AI to focus on.                          |
| `contextFiles`                             | `string[]` | `[]`             | Repository files to include as context in the AI prompt.                   |
| `changeSignificance`                       | `object`   | —                | Thresholds for skipping trivial PRs.                                       |
| `changeSignificance.excludedMetadataTypes` | `string[]` | `[]`             | Metadata types never considered significant.                               |
| `changeSignificance.fileTypeThresholds`    | `object`   | —                | Per-type lines/files thresholds.                                           |
| `changeSignificance.ignoredFilePatterns`   | `string[]` | `[]`             | Glob patterns for files excluded from significance calculation.            |

### Change Significance Filtering

When `changeSignificanceEnabled` is set to `true` in `config/analyze.yaml` (or per branch rule), PRs below the configured thresholds are skipped — no AI call is made. This saves API costs for trivial changes.

A PR is considered "significant" if it meets **either** the lines or the files threshold for any file type. For example, with `apex: {lines: 50, files: 3}`, a PR touching two Apex files with 60 changed lines is significant (the lines threshold is met), while one touching two Apex files with 30 changed lines is not. Metadata types listed in `excludedMetadataTypes` are never considered significant, and files matching `ignoredFilePatterns` are excluded from the calculation.
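The filter itself is switched on outside the thresholds file. A minimal sketch of the toggle, assuming `changeSignificanceEnabled` is a top-level key in `config/analyze.yaml` as described above:

```yaml
# config/analyze.yaml (sketch; only the flag relevant here)
changeSignificanceEnabled: true
```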

```yaml
# config/ai-assist.yaml
changeSignificance:
  excludedMetadataTypes:
    - "CustomLabel"
    - "Translation"
    - "StaticResource"
  fileTypeThresholds:
    apex:
      lines: 50      # Changed lines of Apex
      files: 3       # Number of Apex files changed
    flows:
      lines: 100
      files: 2
    lwc:
      lines: 80
      files: 3
    default:
      lines: 200     # Fallback for any other file type
      files: 5
  ignoredFilePatterns:
    - "**/*.md"
    - "**/test/**"
    - "**/__tests__/**"
```

### AI Provider Setup

For detailed provider configuration, see [Configuring LLM Providers](https://docs.flxbl.io/flxbl/sfp/getting-started/configuring-llm-providers).

#### Quick Reference

| Provider                | Environment Variable                               | Setup                                            |
| ----------------------- | -------------------------------------------------- | ------------------------------------------------ |
| Anthropic (Recommended) | `ANTHROPIC_API_KEY`                                | `export ANTHROPIC_API_KEY="sk-ant-xxx"`          |
| OpenAI                  | `OPENAI_API_KEY`                                   | `export OPENAI_API_KEY="sk-xxx"`                 |
| Google                  | `GOOGLE_API_KEY` or `GOOGLE_GENERATIVE_AI_API_KEY` | `export GOOGLE_API_KEY="xxx"`                    |
| Amazon Bedrock          | AWS credentials in environment                     | `export AWS_BEARER_TOKEN_BEDROCK` + `AWS_REGION` |
| GitHub Copilot          | GitHub token (auto-detected in CI)                 | `export COPILOT_TOKEN="ghu_xxx"`                 |

The linter auto-detects providers in this priority:

1. Environment variables (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, etc.)
2. Configuration in `ai-assist.yaml` (or `ai-architecture.yaml`)
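When keys for more than one provider are present in the environment, the priority order above decides which one is used. To make the choice deterministic regardless of the environment, pin it in the config file, assuming an explicitly set `provider` takes precedence over auto-detection as the field description in the Configuration section implies:

```yaml
# config/ai-assist.yaml
enabled: true
provider: openai   # explicit pin, so other provider keys in the environment are not consulted
```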

### Usage in Pull Requests

#### Automatic PR Detection

When running in GitHub Actions or with PR environment variables:

```bash
# Automatically detects PR context and analyzes only changed files
sfp project:analyze

# Explicitly exclude AI linter if needed
sfp project:analyze --exclude-linters architecture
```

#### Manual Changed Files Specification

For local testing or custom CI environments:

```bash
# Manually specify changed files
sfp project:analyze --changed-files "src/classes/MyClass.cls,src/lwc/myComponent/myComponent.js"
```

### Understanding Results

The AI linter provides structured insights without failing builds:

#### Insight Types

* **Pattern**: Architectural patterns observed or missing
* **Concern**: Potential issues requiring attention
* **Suggestion**: Improvement recommendations
* **Alignment**: Framework compliance observations

#### Severity Levels

* **Info**: Informational observations
* **Warning**: Areas needing attention
* **Concern**: Significant architectural considerations
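Illustrative only: the snippet below is a hypothetical rendering of a single insight combining the type and severity dimensions above. It is not the tool's actual output schema.

```yaml
# Hypothetical insight shape (for illustration, not the real schema)
insight:
  type: concern          # pattern | concern | suggestion | alignment
  severity: warning      # info | warning | concern
  file: src/classes/AccountController.cls
  summary: "Direct SOQL in a controller; move data access to a service class"
```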

#### Sample Output

```markdown
📐 Architecture Analysis Results
════════════════════════════════

✅ Analysis Complete (AI-powered by anthropic/claude-sonnet-4-5-20250929)

## Summary
Analyzed 5 changed files focusing on architectural patterns and Flxbl compliance.

## Key Insights

### ⚠️ Service Layer Pattern (Warning)
File: src/classes/AccountController.cls
Description: Direct SOQL queries in the controller violate the service layer pattern.
Consider moving data access logic to a dedicated service class.

### ℹ️ Dependency Management (Info)
File: src/classes/OrderService.cls
Description: Good use of dependency injection pattern for testability.
This aligns well with Flxbl framework principles.

### ⚠️ Error Handling (Concern)
File: src/classes/PaymentProcessor.cls:45
Description: Missing comprehensive error handling for external callouts.
Implement try-catch blocks with proper logging and user feedback.

## Recommendations
1. Extract data access logic to service layer classes
2. Implement centralized error handling strategy
3. Consider adding unit tests for new service methods
4. Document architectural decisions in ARCHITECTURE.md
```

### Integration with CI/CD

{% hint style="info" %}
AI linter results are informational only and never fail the build. This ensures PR checks remain stable even if AI providers are unavailable.
{% endhint %}

#### GitHub Actions Integration

```yaml
- name: Run Project Analysis with AI Linter
  run: |
    sfp project:analyze --output-format github
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    # GitHub context automatically detected
```
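A fuller workflow sketch, assuming `sfp` is installed in an earlier step; the `checks: write` permission is a standard GitHub Actions requirement for posting check annotations:

```yaml
name: PR Analysis
on: pull_request

permissions:
  contents: read
  checks: write   # required for the linter's GitHub check annotations

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... install sfp here ...
      - name: Run Project Analysis with AI Linter
        run: sfp project:analyze --output-format github
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```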

#### Handling Rate Limits

The linter gracefully handles API limitations:

* **Rate Limits**: Skips analysis with informational message
* **Timeouts**: Configurable timeout protection (default 10 minutes)
* **Token Limits**: Analyzes up to 10 files, content limited to 5KB per file
* **Failures**: Never blocks PR merge (informational only)

### Best Practices

#### 1. Configure Focus Areas

Tailor analysis to your team's priorities:

```yaml
focusAreas:
  - security        # For compliance-critical projects
  - performance     # For high-volume applications
  - maintainability # For long-term projects
```

#### 2. Add Context Files

Provide architectural documentation for better analysis:

```yaml
contextFiles:
  - ARCHITECTURE.md
  - docs/coding-standards.md
  - docs/patterns.md
```

#### 3. Use with Other Linters

Combine with other analysis tools for comprehensive coverage:

```bash
# Run all linters including AI analysis
sfp project:analyze --fail-on duplicates,compliance

# AI linter provides insights, others enforce rules
```

#### 4. Token Optimization

For large PRs, the linter automatically:

* Limits to 10 most relevant files
* Truncates file content to 5KB
* Focuses on text-based source files

### Troubleshooting

#### Analysis Skipped

Common reasons and solutions:

1. **Not Enabled**: Set `enabled: true` in `config/ai-assist.yaml` (or `config/ai-architecture.yaml`)
2. **No Provider**: Configure API keys via environment variables (see [Configuring LLM Providers](https://docs.flxbl.io/flxbl/sfp/getting-started/configuring-llm-providers))
3. **Rate Limited**: Wait for rate limit reset or use different provider
4. **No Changed Files**: Ensure PR context is properly detected

#### Debugging

Enable debug logging for detailed information:

```bash
sfp project:analyze --loglevel debug
```

This shows:

* Provider detection process
* Changed files identified
* API calls and responses
* Error details if analysis fails

### Limitations

1. **Binary Files**: Skips non-text files
2. **Build Impact**: Never fails builds (informational only)
3. **Language Support**: Best for Apex, JavaScript, TypeScript, XML
