Running Apex Tests
The apextests trigger command allows you to independently execute Apex tests in your Salesforce org. While the validate command automatically runs tests per package during validation, this command gives you direct control over test execution with support for multiple test levels, code coverage validation, and output formats.
Primary Testing Patterns
sfp follows a package-centric testing approach where tests are organized and executed at the package or domain level, rather than running all org tests together. This aligns with how the validate command works and provides better isolation and faster feedback.
# Test a specific package (primary pattern)
sfp apextests trigger -o my-org -l RunAllTestsInPackage -n my-package
# Test all packages in a domain (recommended for domain validation)
sfp apextests trigger -o my-org -l RunAllTestsInDomain -r config/release-config.yaml
# Test multiple packages together
sfp apextests trigger -o my-org -l RunAllTestsInPackage -n package-a -n package-b
# Quick test during development
sfp apextests trigger -o my-org -l RunSpecifiedTests --specifiedtests MyTest
Test Levels
RunAllTestsInPackage (Recommended)
Runs all tests within specified package(s). This is the primary testing pattern in sfp and matches how the validate command executes tests. Supports code coverage validation at both package and individual class levels.
# Single package
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n sales-core
# Multiple packages
sfp apextests trigger -o dev-org -l RunAllTestsInPackage \
-n sales-core \
-n sales-ui \
-n sales-integration
This pattern matches how the validate command executes tests: each package is tested independently with its own test classes. This provides:
Better test isolation and faster feedback
Package-level code coverage validation
Clear attribution of test failures to specific packages
Parallel test execution per package (when enabled)
RunAllTestsInDomain (Recommended for Domain Validation)
Runs tests for all packages defined in a domain from your release config. This is the recommended pattern for validating entire domains and matches how you would validate a domain for release.
sfp apextests trigger -o dev-org -l RunAllTestsInDomain \
-r config/release-config-sales.yaml
This executes tests for each package in the domain sequentially, providing comprehensive domain validation. Use this when:
Validating changes across a domain before release
Testing related packages together as a unit
Performing end-to-end domain validation
RunSpecifiedTests
Runs specific test classes or methods. Useful for rapid iteration during active development.
# Run specific test classes
sfp apextests trigger -o dev-org -l RunSpecifiedTests \
--specifiedtests AccountTest,ContactTest
# Run specific test methods
sfp apextests trigger -o dev-org -l RunSpecifiedTests \
--specifiedtests AccountTest.testCreate,ContactTest.testUpdate
Use during development for quick feedback cycles when working on specific features.
RunApexTestSuite
Runs all tests in a test suite defined in your org.
sfp apextests trigger -o dev-org -l RunApexTestSuite \
--apextestsuite QuickTests
Useful for running pre-defined test groups or smoke test suites.
RunLocalTests
Runs all tests in your org except those from managed packages. This is the default test level in Salesforce but not the recommended pattern in sfp.
sfp apextests trigger -o dev-org -l RunLocalTests
Note: While this is the Salesforce default, sfp recommends package-level or domain-level testing for better isolation and faster feedback. Use this only when you specifically need to run all org tests together, such as for compliance requirements or full org validation.
RunAllTestsInOrg
Runs all tests in your org, including managed packages. Rarely used due to long execution time.
sfp apextests trigger -o dev-org -l RunAllTestsInOrg
Use only for complete org validation scenarios.
Code Coverage Validation
Individual Class Coverage
Validates that each Apex class in the package meets the minimum coverage threshold. Every class must meet or exceed the specified percentage.
sfp apextests trigger -o dev-org -l RunAllTestsInPackage \
-n my-package \
-c \
-p 80
Coverage threshold:
Default: 75%
Adjustable with the -p flag
Applied per class, not as an average
Output includes:
List of all classes with their coverage percentages
Classes that meet the threshold
Classes that fail to meet the threshold
Overall package coverage percentage
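The per-class check described above can be sketched in a few lines of Python. This is an illustrative sketch, not sfp's implementation; the list-of-dicts shape mirrors the `coverage.classes` entries in the dashboard schema shown later on this page.

```python
def classes_below_threshold(classes, threshold=75.0):
    """Return the names of classes whose individual coverage falls
    under the threshold. Every class is checked on its own; a high
    average cannot compensate for one poorly covered class."""
    return [c["name"] for c in classes if c["coverage"] < threshold]


classes = [
    {"name": "AccountService", "coverage": 95.5},
    {"name": "OldProcessor", "coverage": 65.0},
]

# With -p 80, OldProcessor fails even though the average is above 80.
print(classes_below_threshold(classes, 80))  # → ['OldProcessor']
```
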
Package Coverage
Validates that the overall package coverage meets the minimum threshold. The average coverage across all classes must meet or exceed the specified percentage.
sfp apextests trigger -o dev-org -l RunAllTestsInPackage \
-n my-package \
--validatepackagecoverage \
-p 75
Coverage calculation:
Aggregates coverage across all classes in package
Calculated as: (total covered lines / total lines) * 100
Only classes with Apex code count toward coverage
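The aggregation described above can be sketched as follows — a hypothetical helper, not sfp internals, using per-class line counts in the shape of the dashboard schema's `coverage.classes` entries:

```python
def package_coverage(classes):
    """Aggregate coverage across all classes in a package:
    (total covered lines / total lines) * 100."""
    total = sum(c["totalLines"] for c in classes)
    covered = sum(c["coveredLines"] for c in classes)
    return round(covered / total * 100, 2) if total else 0.0


classes = [
    {"name": "AccountService", "totalLines": 200, "coveredLines": 191},
    {"name": "OldProcessor", "totalLines": 100, "coveredLines": 65},
]

# (191 + 65) / (200 + 100) * 100 = 85.33 — the package passes a 75%
# bar even though OldProcessor alone would fail a per-class check.
print(package_coverage(classes))  # → 85.33
```
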
Coverage vs No Coverage
Running tests without coverage flags still executes tests but doesn't fetch or validate coverage data:
# Run tests without coverage data
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package
# Run tests with coverage fetching and validation
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package -c
Note: Fetching coverage data adds time to test execution, so only use it when needed.
Output Formats
Note: The dashboard output format is a new feature introduced in the November 2025 release of sfp-pro.
Raw Format (Default)
Standard Salesforce API output with JUnit XML and JSON results. This is the default format.
sfp apextests trigger -o dev-org -l RunLocalTests
Generates:
.testresults/test-result-<testRunId>.json - Raw Salesforce test results
.testresults/test-result-<testRunId>-junit.xml - JUnit XML format
.testresults/test-result-<testRunId>-coverage.json - Coverage data (if coverage enabled)
.testresults/testresults.md - Markdown summary
Dashboard Format
Available in: sfp-pro November 2025 release and later
Structured JSON format optimized for dashboards, metrics systems, and reporting tools. Unlike the raw Salesforce API output, the dashboard format provides enriched, pre-processed data that's ready for consumption by external systems.
# Generate dashboard format
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package \
--outputformat dashboard \
--environment dev
Generates all raw format files plus:
.testresults/<testRunId>/dashboard.json - Structured test results
.testresults/<testRunId>/testresults.md - Enhanced markdown summary
.testresults/latest.json - Symlink to latest dashboard result
Dashboard JSON Schema
The dashboard.json file contains a comprehensive test execution report:
{
"environment": "dev",
"timestamp": "2025-11-24T10:30:00.000Z",
"duration": 125000,
"testExecutionTime": 120000,
"commandTime": 115000,
"repository": "https://github.com/myorg/myrepo",
"commitSha": "abc123def",
"branch": "main",
"summary": {
"totalTests": 150,
"passed": 145,
"failed": 5,
"skipped": 0,
"passingRate": 96.67,
"overallCoverage": 82.5,
"coveredLines": 8250,
"totalLines": 10000,
"outcome": "Failed"
},
"coverage": {
"overallCoverage": 82.5,
"totalLines": 10000,
"coveredLines": 8250,
"classes": [
{
"name": "AccountService",
"id": "01p...",
"coverage": 95.5,
"totalLines": 200,
"coveredLines": 191,
"status": "pass"
}
],
"uncoveredClasses": ["LegacyHelper"],
"belowThreshold": [
{
"name": "OldProcessor",
"coverage": 65.0,
"threshold": 75
}
]
},
"testCases": [
{
"id": "07M...",
"name": "AccountService.testCreateAccount",
"className": "AccountService",
"methodName": "testCreateAccount",
"time": 250,
"status": "passed"
}
],
"topFailingTests": [
{
"name": "ContactTest.testValidation",
"className": "ContactTest",
"methodName": "testValidation",
"failureMessage": "System.AssertException: Expected 5, but got 3"
}
],
"metadata": {
"testRunId": "707...",
"orgId": "00D...",
"username": "[email protected]",
"package": "sales-core",
"testLevel": "RunAllTestsInPackage"
}
}
How Dashboard Output is Used
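As a quick sketch of consuming this schema, the headline metrics can be pulled from the report with standard JSON tooling. The snippet below parses a trimmed payload inline; in practice you would read .testresults/latest.json instead.

```python
import json

# A trimmed dashboard.json payload matching the schema above.
raw = """{
  "summary": {"totalTests": 150, "passed": 145, "failed": 5,
              "passingRate": 96.67, "overallCoverage": 82.5,
              "outcome": "Failed"}
}"""

report = json.loads(raw)
summary = report["summary"]
print(f"{summary['passed']}/{summary['totalTests']} passed "
      f"({summary['passingRate']}%), coverage {summary['overallCoverage']}%")
# → 145/150 passed (96.67%), coverage 82.5%
```
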
For Metrics and Observability:
# Run tests and extract metrics
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package \
--outputformat dashboard --json > results.json
# Extract key metrics
PASSING_RATE=$(jq -r '.result.summary.passingRate' results.json)
COVERAGE=$(jq -r '.result.summary.overallCoverage' results.json)
# Push to your metrics backend
curl -X POST https://metrics.example.com/api/tests \
-d @.testresults/latest.json
For Test History Tracking:
# The latest.json symlink always points to most recent result
jq -r '.summary.passingRate' .testresults/latest.json
# Track coverage trends
jq -r '.coverage.belowThreshold[] | "\(.name): \(.coverage)%"' \
.testresults/latest.json
Dashboard vs Raw Format

| Aspect | Raw Format | Dashboard Format |
| --- | --- | --- |
| Output | Salesforce API response | Processed, enriched data |
| Structure | Flat, verbose | Hierarchical, organized |
| File Location | .testresults/ root | .testresults/<testRunId>/ |
| Coverage | Separate file | Integrated in JSON |
| Latest Symlink | No | Yes (latest.json) |
| Metadata | Limited | Environment, repo, commit |
| Use Case | Salesforce tooling | External systems, dashboards |
| Exit on Failure | Yes (exit code 1) | No (exit code 0) |
Non-Blocking Dashboard Mode
In dashboard mode, test failures don't cause the command to exit with error code 1, allowing you to collect test results even when tests fail:
# Command succeeds even if tests fail
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package \
--outputformat dashboard
echo $? # Always 0 in dashboard mode
This is useful for:
Collecting metrics regardless of test outcome
Generating reports without blocking pipelines
Archiving test history across passing and failing runs
Trend analysis and test reliability tracking
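If your pipeline still needs to gate on results while running in dashboard mode, you can apply your own pass/fail policy to the report. This is a hypothetical gating helper, not part of sfp; it reads the summary block of dashboard.json and returns the exit code you would pass to sys.exit():

```python
def gate(report, min_passing_rate=100.0):
    """Decide a pipeline exit code from a dashboard report: 0 if the
    passing rate meets the bar, 1 otherwise. Lets you tolerate a small
    failure budget instead of failing on any red test."""
    rate = report["summary"]["passingRate"]
    return 0 if rate >= min_passing_rate else 1


report = {"summary": {"passingRate": 96.67, "outcome": "Failed"}}
print(gate(report))        # → 1 (any failure blocks by default)
print(gate(report, 95.0))  # → 0 (96.67% clears a 95% bar)
```
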
Both Format
Available in: sfp-pro November 2025 release and later
Generates both raw and dashboard formats in a single execution.
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package \
--outputformat both \
--environment ci
When to use:
Maintaining compatibility while adopting dashboard format
Comprehensive test result archiving
Output Directory Structure
After running tests, sfp creates a .testresults directory:
.testresults/
├── test-result-<testRunId>-junit.xml # JUnit XML format
├── test-result-<testRunId>.json # Raw Salesforce test results
├── test-result-<testRunId>-coverage.json # Code coverage data
├── testresults.md # Markdown summary
├── <testRunId>/ # Dashboard format directory
│ ├── dashboard.json # Structured test results
│ └── testresults.md # Enhanced markdown summary
└── latest.json # Symlink to latest dashboard result
Troubleshooting
Tests Timeout
Control how long the command waits for tests to complete:
# Wait up to 120 minutes
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package -w 120
# Wait indefinitely (no timeout) - useful for very large test suites
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package -w 0
# Omitting the flag also waits indefinitely
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package
Wait time behavior:
Omit the -w flag: Wait indefinitely (no timeout)
-w 0: Wait indefinitely (no timeout)
-w <minutes>: Wait up to the specified minutes before timing out
For most scenarios, omitting the wait time or using 0 is recommended to avoid premature timeouts on large test suites.
Parallel vs Serial Execution
Some test classes interfere with each other when run in parallel. Configure serial execution in your package descriptor:
{
"path": "src/my-package",
"package": "my-package",
"testSynchronous": true
}
Or use the synchronous flag (if supported):
sfp apextests trigger -o dev-org -l RunAllTestsInPackage \
-n my-package -s
Coverage Validation Failures
See which classes failed coverage requirements:
sfp apextests trigger -o dev-org -l RunAllTestsInPackage \
-n my-package -c --loglevel debug
The debug output shows:
Each class and its coverage percentage
Which classes passed/failed threshold
Overall package coverage
Tests Not Found
If no tests are executed:
Check that test classes exist in the package:
# Verify package contents
sfp build -d mydevhub -n my-package --loglevel debug
Ensure test classes follow naming conventions:
Class name ends with Test
Methods are annotated with @isTest
Verify test classes are in the correct package directory
Mixed Results with Retries
sfp automatically retries failed tests in serial mode. This is normal behavior to handle flaky tests that fail in parallel execution:
First run: Tests execute in parallel
If failures occur: Failed tests retry in serial mode
Final results: Combines both runs, removes duplicates
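The merge step in the last bullet can be pictured as follows — a simplified sketch, not sfp's actual implementation. The serial retry's result supersedes the earlier parallel result for the same test, and each test appears exactly once in the final report:

```python
def merge_results(parallel_run, serial_retry):
    """Combine two runs keyed by test name: a retried test's serial
    result replaces its parallel result; unretried tests pass through."""
    merged = {t["name"]: t for t in parallel_run}
    merged.update({t["name"]: t for t in serial_retry})
    return sorted(merged.values(), key=lambda t: t["name"])


parallel = [
    {"name": "AccountTest.testCreate", "status": "passed"},
    {"name": "ContactTest.testUpdate", "status": "failed"},  # flaky in parallel
]
serial = [{"name": "ContactTest.testUpdate", "status": "passed"}]

for t in merge_results(parallel, serial):
    print(t["name"], t["status"])
```
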
Additional Options
Specifying API Version
Override the API version for the test run:
sfp apextests trigger -o dev-org -l RunAllTestsInPackage -n my-package --apiversion 60.0
Git Metadata
Include git information in test results:
sfp apextests trigger -o dev-org -l RunLocalTests \
--commitsha abc123def \
--repourl https://github.com/myorg/myrepo
This metadata appears in:
Dashboard JSON output
Markdown summaries
Test reports
Custom Environment Name
Specify environment name for dashboard format:
sfp apextests trigger -o dev-org -l RunLocalTests \
--outputformat dashboard \
--environment "dev-feature-branch"
Defaults to the target org alias if not specified.