sfp pro server represents a significant evolution of sfp, transitioning from distributed command-line tooling to an enterprise-grade server architecture. The system provides a comprehensive REST API that enables running automation beyond the context of CI/CD systems while maintaining strict security boundaries and operational isolation.
The architecture of sfp pro server is built upon three fundamental principles that guide every aspect of its design:
Complete Instance Isolation: Each organization operates in its own dedicated compute instance with its own resources, ensuring complete separation of operations, data, and processing. This isolation isn't merely logical—it's physical separation at the infrastructure level.
Ephemeral Processing Model: All operations execute in single-use worker processes that maintain no state between executions. Each worker initializes with a clean environment, receives just-in-time credentials, and terminates after task completion, ensuring complete isolation between operations.
Real-time State Management: The system maintains centralized state management through Supabase, enabling real-time visibility into operations while ensuring state consistency and security. This replaces traditional Git-based state tracking with a more robust, real-time approach.
The sfp pro server implements a layered service architecture that exposes capabilities through well-defined API endpoints:
The authentication system provides OAuth-based authentication with support for both interactive users and application tokens. Key capabilities include:
OAuth callback handling for social authentication
Application token management for CI/CD systems
Role-based access control
Session management
The system implements a sophisticated task queue with priority levels:
Critical tasks for time-sensitive operations
Normal tasks for standard development operations
Batch tasks for resource-intensive processes
Tasks can be submitted with specific scheduling requirements:
Immediate execution
Scheduled execution at a future time
Recurring execution with configurable intervals
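As an illustration, the three scheduling modes above could map onto a task submission body like the following sketch. The field names and task types here are hypothetical, for illustration only, and are not the server's actual API schema:

```python
from datetime import datetime, timedelta, timezone

def build_task_request(task_type, payload, priority="normal",
                       run_at=None, interval_minutes=None):
    """Assemble a task submission body covering the three scheduling modes."""
    if priority not in ("critical", "normal", "batch"):
        raise ValueError(f"unknown priority: {priority}")
    request = {"type": task_type, "priority": priority, "payload": payload}
    if run_at is not None:
        request["scheduledAt"] = run_at.isoformat()    # scheduled execution
    if interval_minutes is not None:
        request["intervalMinutes"] = interval_minutes  # recurring execution
    return request

# Immediate execution: no scheduling fields at all
immediate = build_task_request("package:install", {"package": "core"})
# Scheduled execution at a future time
later = build_task_request("org:validate", {}, priority="batch",
                           run_at=datetime.now(timezone.utc) + timedelta(hours=1))
```

Omitting both scheduling fields requests immediate execution; the server would interpret `scheduledAt` and `intervalMinutes` for deferred and recurring runs.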
The system provides a sophisticated document store with collection support:
Hierarchical document organization
Version tracking for all documents
Optimistic concurrency control
Cross-collection querying capabilities
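The version tracking and optimistic concurrency control described above can be sketched with a small in-memory model. This is illustrative only; the real store is backed by Supabase:

```python
class VersionConflict(Exception):
    """Raised when a document was modified concurrently."""

class DocumentStore:
    """In-memory sketch of version-tracked documents with optimistic locking."""

    def __init__(self):
        self._docs = {}  # key -> (version, value)

    def get(self, key):
        """Return (version, value); version 0 means the document is absent."""
        return self._docs.get(key, (0, None))

    def put(self, key, value, expected_version):
        """Write only if the caller saw the latest version; otherwise conflict."""
        current_version, _ = self._docs.get(key, (0, None))
        if current_version != expected_version:
            raise VersionConflict(
                f"expected v{expected_version}, found v{current_version}")
        self._docs[key] = (current_version + 1, value)
        return current_version + 1
```

A writer that raced and lost sees a `VersionConflict`, re-reads the document, and retries with the new version, rather than silently overwriting the concurrent change.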
A simple key-value store provides fast access to operational data:
Atomic operations
Versioned updates
Flexible value storage
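A minimal sketch of what atomic, versioned updates mean for the key-value store, modeled as compare-and-set. The real store's API may differ; this only illustrates the semantics:

```python
import threading

class KeyValueStore:
    """Sketch of a key-value store with versioned, atomic updates."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}  # key -> (version, value)

    def set(self, key, value):
        """Unconditional write; returns the new version number."""
        with self._lock:
            version, _ = self._data.get(key, (0, None))
            self._data[key] = (version + 1, value)
            return version + 1

    def compare_and_set(self, key, value, expected_version):
        """Atomic update: succeeds only if the stored version still matches."""
        with self._lock:
            version, _ = self._data.get(key, (0, None))
            if version != expected_version:
                return False
            self._data[key] = (version + 1, value)
            return True
```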
The system implements comprehensive Salesforce credential management:
Production org registration and validation
Sandbox authentication handling
Credential security through ephemeral access
Connection testing capabilities
A flexible webhook system enables integration with external systems:
Multiple provider support
Configurable retry policies
Filtered event delivery
Real-time delivery tracking
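Filtered event delivery can be sketched as a pattern match over event types. The glob-style filter convention here is an assumption made for illustration, not the server's documented filter syntax:

```python
import fnmatch

def matches_filters(event_type, filters):
    """Decide whether an event should be delivered to a webhook subscription.

    `filters` is a list of glob-style patterns (an illustrative convention);
    an empty filter list delivers every event.
    """
    if not filters:
        return True
    return any(fnmatch.fnmatch(event_type, pattern) for pattern in filters)
```

A subscription configured with `["deployment.*"]` would receive deployment events but skip, say, task-queue events, sparing the endpoint irrelevant traffic.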
The API is organized into logical service groups, each handling specific aspects of the system:
Each API group implements specific security controls and maintains its own service boundaries while operating within the overall system architecture.
The complete system architecture implements multiple layers of functionality and security:
Each layer serves a specific purpose in the architecture:
The Edge Layer handles all external communication, implementing TLS termination, request routing, and basic security controls. This layer ensures that all communication is encrypted and properly authenticated before reaching internal services.
The Application Layer contains the core services that manage authentication, task orchestration, and real-time updates. These services operate in isolated networks with controlled access to the processing and storage layers.
The Processing Layer manages task execution through worker processes that operate in complete isolation. This layer implements sophisticated queue management and worker lifecycle controls to ensure secure and efficient task processing.
The Storage Layer maintains system state and data through multiple specialized systems. Supabase provides the primary database and real-time capabilities, while MinIO handles file storage and a dedicated secret store manages sensitive credentials.
The selection of Supabase as our database platform represents a strategic architectural decision that extends beyond basic data storage capabilities. Supabase provides several crucial features that are fundamental to our architecture:
Real-time State Management: Supabase's real-time engine enables sophisticated state synchronization across system components. When a deployment status changes or a task completes, all relevant components receive immediate updates. This capability is crucial for maintaining accurate operational state across distributed system components.
Row-Level Security: While we implement complete instance isolation, Supabase's row-level security provides an additional layer of protection. This defense-in-depth approach ensures that even in the hypothetical case of a security boundary breach, the database enforces strict access controls.
Authentication Integration: Supabase's authentication capabilities integrate seamlessly with our OAuth architecture while maintaining complete instance isolation. This includes sophisticated token management, session handling, and security policy enforcement.
sfp pro server provides two deployment models: FLXBL-managed and self-managed instances. In both cases, each organization receives a dedicated compute instance (AWS EC2, Azure VM, or Hetzner Server) with a minimum of 8GB RAM, running a complete Docker Compose-based deployment of the platform.
The system deploys as a collection of Docker containers that implement the complete API surface and processing capabilities:
The authentication architecture implements different patterns for FLXBL-managed and self-managed deployments while maintaining consistent API interfaces:
FLXBL-Managed Authentication:
In FLXBL-managed deployments, the global OAuth callback service handles authentication with social providers (GitHub, GitLab, etc.). This simplifies the setup process while maintaining complete instance isolation. The authentication flow involves:
Social provider authentication through FLXBL's callback service
Token validation and role assignment within the instance
Session management through the instance's authentication service
Self-Managed Authentication:
In self-managed deployments, organizations configure their own OAuth applications and handle callbacks directly. This provides complete control over the authentication process but requires additional setup and maintenance. The authentication service provides APIs for:
OAuth callback handling
Application token management
Role-based access control
Session management
The system implements comprehensive API security through multiple layers:
Edge Security:
TLS termination for all connections
JWT validation for authenticated endpoints
Rate limiting and request validation
API key validation for application access
Authorization Controls:
Role-based access to API endpoints
Resource-level permissions
Operation-specific authorization
Audit logging of all operations
Resource Security:
Tenant-specific resource isolation
Just-in-time credential access
Ephemeral resource allocation
Secure secret management
The system implements a bridge network architecture using Docker Compose, where services are connected through a shared application network. This design provides service isolation while enabling controlled communication between components:
In self-hosted deployments, organizations have flexibility in how they deploy the external services:
Supabase Deployment Options:
Self-hosted Supabase instance on the same network
Self-hosted Supabase instance on a separate network
Managed Supabase instance
The deployment choice depends on the organization's security requirements and infrastructure preferences.
Network Configuration:
A bridge network named 'app-network' connects all Docker services
Caddy handles external traffic and TLS termination
Core services communicate through the internal network
Redis provides in-memory caching and queue management within the network
Worker services operate in the same network with controlled access patterns
Service Communication: The API service acts as the central coordinator, with worker services processing tasks based on their priority level:
Critical workers handle time-sensitive operations
Normal workers process standard development tasks
Batch workers manage long-running operations
All workers access Redis for queue management and Supabase for data persistence.
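The routing described above can be sketched as a simple mapping from task type to priority queue. The task-type names below are illustrative, drawn from the examples elsewhere in this document, not an actual server registry:

```python
QUEUES = {"critical": [], "normal": [], "batch": []}

# Hypothetical mapping of task types to queues, mirroring the examples above.
TASK_QUEUE_MAP = {
    "code:lint": "critical",
    "metadata:validate": "critical",
    "package:install": "normal",
    "deployment:validate": "normal",
    "test:full-suite": "batch",
    "org:validate-full": "batch",
}

def enqueue(task_type, payload):
    """Route a task onto its priority queue, defaulting to the normal queue."""
    queue = TASK_QUEUE_MAP.get(task_type, "normal")
    QUEUES[queue].append({"type": task_type, "payload": payload})
    return queue
```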
The sfp pro server implements a controlled lifecycle management process through the SFP CLI, enabling safe and predictable server maintenance operations. The system orchestrates updates while maintaining data integrity and minimizing service interruption.
The server update process is managed through a sequence of SFP CLI commands that coordinate the update across all components:
The update process begins with stopping the current server instance using the sfp server stop command. This command ensures a graceful shutdown of all services, allowing in-progress operations to complete and maintaining data consistency.
The sfp server update command then manages the update process. During this phase, the system:
Downloads updated Docker images
Verifies image integrity
Updates configuration if necessary
Prepares for service initialization
Finally, the sfp server start command initiates the updated services. The system implements startup order control through Docker Compose dependencies, ensuring services initialize in the correct sequence.
The system maintains version consistency through Docker image tags and configuration versioning:
Each update maintains consistency between:
Docker image versions
Configuration file versions
Database schema versions
Worker service versions
The worker system implements a sophisticated lifecycle management approach that ensures security and reliability:
Worker Initialization Phase: Each worker begins with a clean container initialization. The system creates a new container from a base image, ensuring no residual state or resources from previous executions. This phase establishes the security boundaries and resource limits for the worker.
Credential Management Phase: After initialization, the worker receives just-in-time credentials required for its assigned task. These credentials are loaded and, depending on the task context, may be persisted in the temporary file system for the life of the task.
Execution Phase: During task execution, the worker operates within strict resource and network boundaries. It maintains communication with the task service for progress reporting and status updates through Supabase. The worker can access only the specific resources and services required for its assigned task.
Cleanup Phase: Upon task completion or termination, the worker is shut down and the Docker container is deleted along with its file system.
The state management system provides real-time operational visibility while maintaining strict security boundaries:
Operational State Management: The system maintains long-term operational state in Supabase, including deployment histories, configuration data, and audit records. This state is managed through atomic transactions and implements optimistic concurrency control to handle concurrent operations.
Resource State Management: Current resource state, such as environment allocations and active tasks, is maintained in Redis. This provides high-performance access to current state while ensuring consistency across system components. The Redis instance is configured with persistence to prevent state loss during system maintenance.
Task State Management: Active task state is managed through a combination of Redis and Supabase. Redis maintains current task status and progress information, enabling real-time updates through WebSocket connections. Completed task records are persisted to Supabase for historical tracking and audit purposes.
The deployment of sfp pro server requires specific resources to ensure reliable operation:
Instance Requirements: Each organization's dedicated instance requires a modern cloud virtual machine (AWS EC2, Azure VM, or Hetzner Server) with minimum specifications of:
8GB RAM for standard deployments
Modern multi-core CPU (4+ cores recommended)
SSD storage for container and data volumes
Static IP address with direct internet connectivity
Network Requirements: The instance must have appropriate network access:
Outbound access to Salesforce API endpoints
Standard HTTP/HTTPS ports for external access
Internal network isolation capabilities
Access to required authentication providers
Storage Requirements: Storage provisioning must account for:
Database storage for operational data
Redis persistence storage
File storage for temporary processing
Backup storage allocation
sfp pro server had the following requirements that shaped the decision to use Supabase as its foundational system:
Handle authentication and authorization seamlessly for both users and applications
Provide real-time state management to replace our previous Git-based approach
Work equally well in both cloud and self-hosted environments
Scale independently for each organization's needs
Ensure complete data isolation between organizations
Authentication in sfp pro server is built entirely on Supabase Auth, which provides several key advantages:
First, it offers built-in support for multiple authentication methods while maintaining a consistent security model. When users log in through OAuth providers (in FLXBL-managed instances) or through an organization's own authentication system (in self-hosted instances), Supabase Auth handles all the complexity of token management and session control.
Second, it provides a JWT-based authentication system that seamlessly integrates with both interactive users and automated systems. This means whether a request comes from a developer using the CLI, a CI/CD pipeline, or the Codev desktop application, the authentication flow remains consistent and secure.
Each organization in sfp pro server receives its own dedicated Supabase instance. This architectural decision provides several benefits:
This isolation ensures that:
Each organization's data remains completely separate
Performance and scaling can be managed independently
Security boundaries are enforced at the infrastructure level
Organizations maintain control over their data governance
One of the most significant improvements Supabase brings to sfp pro server is in state management. Previously, we stored state information in Git repositories, which led to several challenges:
State updates required Git operations
Real-time visibility was limited
Concurrent updates were difficult to manage
Performance was constrained by Git operations
Supabase's real-time capabilities transformed this approach:
This real-time capability enables:
Immediate visibility into operation status
Live updates without polling
Efficient resource state tracking
Consistent state management across all components
The ability to self-host Supabase instances was crucial for organizations that need to maintain their systems within their own infrastructure. Supabase's open-source nature and comprehensive deployment tooling make this possible while maintaining feature parity with cloud deployments.
When an organization chooses to self-host sfp pro server, they get:
Complete control over their Supabase instance
The ability to integrate with internal systems
Custom backup and retention policies
Direct access to their data and logs
The network architecture of sfp pro server addresses a fundamental challenge in Salesforce DevOps: How do we securely connect various clients and systems to Salesforce while maintaining strict security boundaries and ensuring efficient operations? This challenge becomes even more complex when we consider that the system must handle multiple types of connections simultaneously - from CLI tools, CI/CD systems, webhooks, and long-running deployment operations.
At the entry point of every sfp pro server instance sits Caddy, serving as both a reverse proxy and security gateway. Think of Caddy as the system's front door - it's where all external connections first arrive and are properly routed to their destinations.
Caddy performs several crucial functions that form our first line of defense:
First, it handles all TLS termination, managing certificates automatically whether in FLXBL-managed or self-hosted environments. This ensures all communications are encrypted using modern TLS protocols and cipher suites.
Second, it provides intelligent request routing. When a request arrives, Caddy determines whether it should go to the API service, the WebSocket server for real-time updates, or be handled as a webhook. This routing happens based on the request type and path, ensuring each request reaches the appropriate service.
Third, it implements our first layer of security controls, including rate limiting, basic DDoS protection, and HTTP header standardization. These protections help ensure system stability and security before requests even reach our application layer.
The integration system in sfp pro server needs to handle several distinct types of communication patterns. Let's examine how these work together:
The most critical integration point in our system is with Salesforce organizations. When a task needs to interact with Salesforce, whether for a deployment, metadata retrieval, or org validation, the system follows a carefully orchestrated process:
This process ensures complete security of Salesforce credentials through several key mechanisms:
First, credentials are never stored in the application layer. Instead, they're securely stored in our secret management system and only accessed when needed for specific operations.
Second, when a worker needs to connect to Salesforce, it receives just-in-time credentials that exist only in memory. These credentials are immediately cleared when the operation completes, and the worker process terminates.
Third, each operation gets its own isolated worker process, ensuring that credential access is completely separated between different operations and organizations.
The webhook system handles incoming events from various sources, including Salesforce platform events, GitHub/GitLab notifications, and CI/CD system callbacks. Here's how this process works:
When a webhook arrives:
Caddy performs initial validation and TLS termination
The webhook handler verifies the source's signature
The system maps the event to appropriate tasks
Tasks are created with appropriate priorities
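Step 2, verifying the source's signature, typically follows the HMAC pattern used by GitHub-style webhooks. A sketch, noting that the `sha256=<hex>` header convention is GitHub's and is assumed here only for illustration:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Verify a GitHub-style 'sha256=<hex>' HMAC signature over the raw body.

    Uses a constant-time comparison so attackers cannot probe the
    signature byte by byte via timing differences.
    """
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Verification must run on the raw request bytes before any JSON parsing, since re-serializing the body can change it and invalidate the signature.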
The system provides real-time visibility into operations through a WebSocket-based update system. This enables clients to monitor long-running operations, receive immediate status updates, and track resource utilization in real-time.
The WebSocket connections are managed through the same secure edge layer, ensuring consistent security policies and access controls:
The authentication system in sfp pro server addresses several complex challenges in Salesforce DevOps security. At its core, the system must secure not only user access but also manage machine-to-machine authentication for CI/CD systems, handle Salesforce credentials securely, and maintain complete isolation between different organizations.
The system implements authentication through multiple coordinated layers, each handling specific aspects of security:
Let's examine how authentication works for different types of users and systems:
Interactive User Authentication (FLXBL-Managed Instances):
This flow leverages FLXBL's registered OAuth applications, simplifying the setup for organizations. When a user authenticates:
The CLI initiates the OAuth process
FLXBL's global authentication service handles the OAuth callback
The user's identity is verified and passed to their instance
The instance creates and manages the user's session
CI/CD System Authentication:
CI/CD systems use application tokens that provide limited, scoped access:
Tokens are bound to specific instance and tenant
Each token has defined permission boundaries
Access is logged and auditable
Tokens can be revoked at any time
Self-Hosted Instance Authentication: In self-hosted environments, organizations manage their own OAuth applications:
One of the most critical aspects of the system is how it handles various credentials, particularly Salesforce organization credentials:
The system implements several key security principles:
Just-in-Time Secret Access: Credentials are only loaded when needed:
Workers fetch secrets at task start
Secrets remain only in memory
Credentials are cleared after task completion
No disk storage of sensitive data
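The just-in-time principle above can be sketched as a scoped credential lifetime. This is illustrative: `fetch_secret` stands in for the actual secret-manager client, and the dictionary models in-memory-only custody of the credential:

```python
from contextlib import contextmanager

@contextmanager
def just_in_time_credentials(fetch_secret, task_id):
    """Load a credential for the duration of one task, then drop it.

    The credential exists only inside the `with` block; the `finally`
    clause clears it even if the task raises, so nothing persists
    after task completion.
    """
    credentials = {"token": fetch_secret(task_id)}
    try:
        yield credentials
    finally:
        credentials.clear()  # cleared after task completion
```

A worker would wrap its entire task execution in this scope, so credential cleanup is tied to task termination rather than left to a separate step that could be skipped.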
Secure Secret Storage: All credentials are stored securely:
Encrypted at rest using tenant-specific keys
Accessible only through secure secret managers
Support for multiple secret management solutions
Regular secret rotation
Access Control: The system implements fine-grained access control:
Role-based access to credentials
Audit logging of all credential access
Restricted access to production credentials
Automated credential rotation support
Each instance maintains its own role and permission system:
The permission system ensures:
Clear separation of duties
Principle of least privilege
Granular access control
Audit trail of all actions
The authentication system addresses several key security requirements:
Tenant Isolation: Complete separation between organizations:
Independent authentication states
Separate credential storage
Isolated session management
No cross-tenant data access
Secure Communication: All communication is encrypted:
TLS for all API calls
Secure WebSocket connections
Encrypted credential transmission
Protected OAuth flows
Audit and Compliance: Comprehensive audit trails:
Authentication attempts logged
Credential access recorded
Session activity tracked
Security events monitored
Failure Handling: Secure failure modes:
Failed authentication logging
Credential access monitoring
Session timeout enforcement
Automated threat detection
The task processing system in sfp pro server is designed to handle operations ranging from quick metadata validations to long-running deployments. Understanding how this system works is crucial because it forms the core of how work gets done in the platform.
When a user or CI/CD system initiates an operation, it begins a journey through several stages of processing. Let's examine this journey in detail:
Every task in sfp pro server is assigned to one of three processing queues based on its characteristics:
Critical Queue: These tasks demand immediate attention and quick processing. They typically involve operations that developers are actively waiting on. For example:
Code linting operations
Quick metadata validations
Configuration checks
Critical tasks receive dedicated worker capacity and are processed ahead of other queues. This ensures developers get immediate feedback for operations that block their work.
Normal Queue: The standard processing queue handles most day-to-day development operations. These tasks include:
Package installations
Deployment validations
Environment preparations
Source tracking operations
Normal queue tasks are processed with fair scheduling, ensuring all teams get reasonable response times while managing system resources efficiently.
Batch Queue: This queue is designed for resource-intensive, long-running operations that don't require immediate completion. Examples include:
Full test suite executions
Bulk data operations
Nightly builds
Complete org validations
Batch tasks can be preempted by higher priority work and are often scheduled during off-peak hours to maximize resource utilization.
Each worker in sfp pro server follows a strict lifecycle that ensures security and reliability:
Initialization Phase
Worker process spawns in a clean environment
No inherited state or resources
Fresh memory space allocation
Temporary workspace creation
Secret Loading: The worker securely loads required credentials:
Salesforce org credentials
GitHub tokens
Other necessary secrets
All secrets are loaded just-in-time and held only in memory.
Environment Setup
Prepares necessary tools and dependencies
Configures connection parameters
Sets up logging channels
Initializes progress reporting
Task Execution: During execution, the worker:
Processes the assigned task
Reports progress through WebSocket channels
Manages resource utilization
Handles any necessary retries
Completion and Cleanup: After task completion:
All secrets are cleared from memory
Temporary files are securely removed
Resources are released
Worker process terminates completely
This ephemeral worker model provides several key benefits:
Complete isolation between tasks
No credential persistence
Clean state for each operation
Predictable resource cleanup
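The phases above can be sketched as a single guarded function. The callables stand in for the real services, so this is a model of the lifecycle, not the server's implementation:

```python
def run_worker(task, load_secrets, execute, report):
    """Sketch of the ephemeral worker lifecycle.

    `load_secrets` pulls just-in-time credentials, `execute` performs the
    task, and `report` pushes status updates (in the real system, over
    WebSocket channels). Cleanup runs whether execution succeeds or fails.
    """
    workspace = {"task_id": task["id"], "secrets": None}
    try:
        workspace["secrets"] = load_secrets(task)      # secret loading
        report(task["id"], "running")                  # progress reporting
        result = execute(task, workspace["secrets"])   # task execution
        report(task["id"], "completed")
        return result
    except Exception:
        report(task["id"], "failed")
        raise
    finally:
        workspace.clear()                              # completion and cleanup
```

Because the cleanup sits in a `finally` clause, a crashing task still releases its secrets and workspace, mirroring the guarantee that no state survives the worker.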
The task processing system includes sophisticated resource management to ensure system stability and fair resource allocation:
Worker Pool Management: Each queue type has its own worker pool:
Critical pool maintains minimum available workers
Normal pool scales based on demand
Batch pool uses excess capacity
Resource Quotas: The system enforces quotas at multiple levels:
Per-tenant resource limits
Queue-specific allocations
Individual task resource boundaries
Dynamic Scaling: Worker pools scale based on:
Current workload
Queue depths
Resource availability
Priority requirements
The task processing system implements robust error handling to maintain reliability:
Task Failure Handling: When a task fails, the system:
Captures detailed error information
Provides error context through WebSocket updates
Maintains failure state for analysis
Cleans up any partial changes
Worker Failure Recovery: If a worker fails unexpectedly:
The task is marked as failed
Resources are forcefully cleaned up
Client is notified of the failure
System maintains audit trail
Queue Recovery: During system restarts or failures:
Queue state is preserved
Incomplete tasks are requeued
Progress information is maintained
Clients can reconnect to existing tasks
The error handling mechanism ensures that system stability is maintained even during unexpected failures, while providing clear feedback to users about any issues that arise.
The authentication system in sfp pro server implements a security model that handles interactive users and application tokens differently. Let's understand how this system works in detail, particularly focusing on the strict token handling approach that prioritizes security over convenience.
The system supports two primary authentication paths, each with its own security considerations and handling patterns:
When a user authenticates through the UI or CLI, the system follows a chain of validations:
Token Validation:
Verifies JWT signature using the configured secret
Checks token expiration with a 5-minute buffer
Validates the token issuer and audience claims
Membership Verification:
Retrieves the user's personal account
Fetches associated memberships
Verifies role assignments
Role Authorization:
Implements a hierarchical role system ('member' → 'owner')
Validates required roles against user's assigned role
Enforces role-based access control on endpoints
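The expiration, issuer/audience, and role checks above can be sketched as follows. Signature verification is assumed to be handled already by a JWT library, and interpreting the 5-minute buffer as rejecting tokens that expire within that window is an assumption made for this sketch:

```python
import time

ROLE_RANK = {"member": 1, "owner": 2}  # hierarchical roles: member → owner
EXPIRY_BUFFER_SECONDS = 5 * 60         # the 5-minute buffer described above

def validate_claims(claims, expected_issuer, expected_audience, now=None):
    """Check expiration (with buffer), issuer, and audience on decoded claims."""
    now = time.time() if now is None else now
    if claims.get("exp", 0) <= now + EXPIRY_BUFFER_SECONDS:
        raise PermissionError("token expired or expiring within the buffer")
    if claims.get("iss") != expected_issuer:
        raise PermissionError("unexpected token issuer")
    if claims.get("aud") != expected_audience:
        raise PermissionError("unexpected token audience")

def has_required_role(user_role, required_role):
    """Hierarchical check: an owner satisfies any member-level requirement."""
    return ROLE_RANK.get(user_role, 0) >= ROLE_RANK.get(required_role, 99)
```

Unknown roles rank below every known requirement, so a malformed or missing role claim fails closed rather than open.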
The system takes a deliberately strict approach to application token management:
Key characteristics of this approach:
Strict Token Validation:
No automatic token renewal
Explicit rejection of expired tokens
Clear error messages indicating token status
Security-First Design:
Tokens must be manually rotated
No grace period for expired tokens
Clear audit trail of token usage
Clear Separation of Concerns:
Application tokens are distinct from user tokens
Different validation paths for each token type
Specific permissions for application tokens
The system implements a sophisticated role-based access control system:
This role system ensures:
Clear permission boundaries
Hierarchical access control
Separate application permissions
Granular access management
The authentication system follows several key implementation patterns:
Early Validation: The AuthGuard performs token validation before any request processing begins. This ensures that:
Invalid requests are rejected immediately
No resources are wasted on unauthorized requests
Security checks are consistent across all endpoints
Layered Verification: Authentication happens in distinct layers:
Retry Management: The system implements sophisticated retry handling for database operations:
Configurable retry attempts for transient failures
Exponential backoff with randomization
Clear distinction between retryable and non-retryable errors
Comprehensive error logging for debugging
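These retry rules can be sketched as exponential backoff with randomization (full jitter) plus an explicit retryable/non-retryable split. The error classification below is illustrative, not the server's actual taxonomy:

```python
import random

# Illustrative classification: network-ish failures retry, logic errors do not.
RETRYABLE_ERRORS = (ConnectionError, TimeoutError)

def backoff_delay(attempt, base=0.5, cap=30.0, rng=random.random):
    """Full-jitter backoff: a random delay in [0, min(cap, base * 2^attempt))."""
    return rng() * min(cap, base * (2 ** attempt))

def with_retries(operation, max_attempts=4, sleep=lambda seconds: None):
    """Retry transient failures; re-raise non-retryable or exhausted errors."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except RETRYABLE_ERRORS:
            if attempt == max_attempts - 1:
                raise  # attempts exhausted
            sleep(backoff_delay(attempt))
```

The randomization spreads concurrent retries apart, avoiding the thundering-herd effect where many clients hammer a recovering database at the same instant.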
Error Handling: The system provides clear, secure error responses:
Generic errors for unauthenticated requests
Specific errors for authenticated users
No information leakage in error messages
Comprehensive error logging for administrators
The system supports both global and local authentication modes:
This dual-mode support enables:
Flexibility in deployment options
Consistent security model across modes
Clear separation of concerns
Support for both cloud and self-hosted scenarios
Understanding this authentication architecture has important implications for system usage:
For CI/CD Integration:
Plan for token rotation strategies
Implement proper error handling for token expiration
Consider using multiple tokens for different environments
Monitor token usage and expiration
For Application Development:
Implement proper token management
Handle authentication failures gracefully
Consider role requirements when designing integrations
Plan for token rotation in your application lifecycle
For System Administration:
Regular token audit and cleanup
Clear token provisioning processes
Monitoring of authentication patterns
Alert setup for suspicious activities
This strict approach to token management, while requiring more operational overhead, provides several security benefits:
Clear token lifecycle
No ambiguous token states
Predictable security boundaries
Easier security auditing
Reduced attack surface
sfp pro server provides an integration architecture that enables external systems to interact with Salesforce DevOps operations. Building integrations with the system requires understanding several key concepts and patterns.
The system provides several primary integration mechanisms:
The REST API is the primary method for programmatic interaction with sfp pro server. It's designed around several core concepts:
Resource Organization: The API is organized into logical groupings:
Task Management (/api/tasks)
Salesforce Authentication (/api/auth/salesforce)
Document Management (/api/doc-store)
Key-Value Operations (/api/key-value)
Webhook Management (/api/webhooks)
Authentication Patterns: All API requests require authentication through:
OAuth-based tokens for interactive users
Application tokens for automated systems
The task system is central to integration scenarios:
When integrating with the task system:
Task Creation: Submit tasks with appropriate priorities:
Critical for time-sensitive operations
Normal for standard operations
Batch for background processing
Status Monitoring: Track task progress through either:
WebSocket connections for real-time updates
REST API polling for simpler integrations
Result Handling: Process task results based on operation type:
Direct results for quick operations
Staged results for long-running tasks
Error handling for failed operations
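The simpler REST-polling option can be sketched as a bounded loop over a status endpoint. `fetch_status` stands in for the actual GET call, and the state names are illustrative:

```python
TERMINAL_STATES = {"completed", "failed", "cancelled"}

def poll_until_done(fetch_status, task_id, max_polls=60, interval=5.0,
                    sleep=lambda seconds: None):
    """Poll a task until it reaches a terminal state.

    Real code would issue an authenticated GET against the task endpoint,
    honour rate limits, and back off on transient HTTP errors.
    """
    for _ in range(max_polls):
        status = fetch_status(task_id)
        if status in TERMINAL_STATES:
            return status
        sleep(interval)
    raise TimeoutError(f"task {task_id} still running after {max_polls} polls")
```

For anything long-running, the WebSocket channel is preferable since it delivers updates immediately instead of on the polling interval, but a bounded poll like this keeps simple CI scripts dependency-free.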
The webhook system enables event-driven integration patterns:
The webhook system provides:
Event Configuration:
Define webhook endpoints
Configure event filters
Set retry policies
Manage security headers
Delivery Management:
Asynchronous event delivery
Automatic retries
Delivery status tracking
Error handling
The document store provides a flexible system for managing complex state:
Collection Organization:
Hierarchical document organization
Cross-collection queries
Version tracking
Optimistic concurrency
Query Capabilities:
When building integrations with sfp pro server, consider these patterns:
Authentication Flow:
Operation Patterns: Choose the appropriate integration pattern:
Synchronous for quick operations
Asynchronous for long-running tasks
Event-driven for state changes
WebSocket for real-time updates
Error Handling: Implement robust error handling:
Token expiration handling
Task failure recovery
Network resilience
Rate limiting compliance
Let's examine some common integration patterns:
CI/CD Integration:
Environment Management:
State Synchronization:
When building integrations, observe these security practices:
Token Management:
Rotate application tokens regularly
Use scoped tokens for specific operations
Implement secure token storage
Monitor token usage
Webhook Security:
Validate webhook signatures
Use HTTPS endpoints
Implement request timeouts
Filter sensitive data
Error Handling:
Handle authentication failures gracefully
Implement retry mechanisms
Log security events
Monitor for unusual patterns