This post is the first in a two-part series on migrating workloads from Azure Commercial Cloud to Azure Government Cloud (AGC), with FedRAMP High compliance as a core requirement.
Our client, a full-stack edge computing platform provider, deployed their solutions on Azure Kubernetes Service (AKS) with 40+ microservices for enterprise customers. The platform served this customer base well, but a new government contract changed the stakes. Workloads now had to meet FedRAMP High compliance, which enforces strict controls for handling government data.
The client's infrastructure deployment pipelines relied on GitHub Actions, which is not FedRAMP-compliant. Without a compliant automation framework, provisioning and managing more than 40 microservices would be a challenge.
We worked with the client to design a compliant automation framework that replaced GitHub Actions while keeping delivery fast and reliable. In the sections that follow, I’ll share the solutions we delivered and the lessons organizations can draw from this journey.
Automation Choices for AGC
To be fully compliant, every environment in the Azure Government Cloud has to meet strict controls for security, networking, and resource consistency. That left us with three possible paths forward:
- Deploy manually: An option that would technically keep us compliant, but at the cost of speed, efficiency, and reliability.
- Adopt a FedRAMP-compliant CI/CD platform: Tools like Azure DevOps Services (Gov), GitLab Ultimate (self-managed with FIPS), Jenkins on hardened images, or Harness Gov-ready edition were viable options. But they would mean extra costs, integration hurdles, and a steep learning curve, especially since the client’s pipelines were already tightly bound to GitHub Actions in the commercial cloud.
- Build a custom automation framework: The framework would leverage Azure-native tooling, Infrastructure as Code, and orchestration logic to stay within compliance boundaries.
After weighing the trade-offs, we recommended a custom framework tailored to the client’s environment. This approach would preserve operational efficiency and avoid the overhead and disruption of adopting a completely new CI/CD platform.
Mapping the Existing Infrastructure
The client’s infrastructure had grown organically in Azure Commercial Cloud. It was a complex ecosystem with Kubernetes clusters, virtual networks, databases, storage accounts, Key Vaults, container registries, monitoring systems, and security tooling. Documentation was sparse. Many dependencies were tribal knowledge, and critical services had evolved over the years without a clear architectural blueprint.
To gain clarity, we held inventory sessions to identify components and their dependencies and engaged service owners to map responsibilities and uncover hidden integrations. By the end of this exercise, we had mapped hundreds of components into a structured inventory. This blueprint gave us the clarity needed to design Infra Deployer, our automation framework that would power the migration.
How Infra Deployer Works
At its core, Infra Deployer combines Python orchestration with Azure Bicep templates to deliver secure, auditable, and repeatable infrastructure provisioning. It executes deployments in a containerized environment, giving engineers full control of runtime execution and ensuring that every step aligns with FedRAMP High standards.
Architecture Overview
Infra Deployer is structured into three logical layers:
- Execution Interface
  - Commands run inside a Docker container.
  - Workflow YAMLs and config files are mounted at runtime.
  - A standardized runtime guarantees isolation and compliance.
- Core Engine
  - Written in Python, this layer handles orchestration logic.
  - Key components include the Config Loader, Workflow Loader, Workflow Executor, and Azure CLI integration.
  - Acts as the control plane for resource creation.
- Infrastructure Layer
  - Azure Bicep templates with environment-specific parameters.
  - Provides declarative, version-controlled resource definitions for everything from networks to AKS clusters.
This layered design gives Infra Deployer separation of concerns, flexibility to adapt workflows, and built-in compliance alignment.

Implementation Walkthrough
The next challenge was applying Infra Deployer to the client’s live workloads. We tackled this in five stages:
1. Configuration Management with Config Loader
Deploying to both Azure Commercial and Government Cloud presents a configuration challenge: each environment uses different region names, subscription IDs, resource groups, and endpoints. Hardcoding these values or scattering them across scripts quickly becomes unmanageable.
To address this, we created a single source of truth: a simple config.yaml file. This file defined everything that differed between environments, such as:
- Cloud Type
- Default subscription and resource group
- Location
- Directory paths for Bicep templates and parameter files
Together, these keys gave us a centralized control plane that allowed the same automation engine to work seamlessly across Azure clouds.
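To make this concrete, here is a minimal sketch of what such a file might contain and how it could be loaded. The key names and values are illustrative, not the client's actual schema:

```python
import yaml

# A hypothetical config.yaml; key names and values are illustrative.
EXAMPLE_CONFIG = """
cloud: AzureUSGovernment        # AzureCloud for Commercial
subscription: <subscription-id>
resource_group: rg-platform
location: usgovvirginia
bicep_dir: ./bicep/templates
params_dir: ./bicep/params
"""

config = yaml.safe_load(EXAMPLE_CONFIG)
assert config["cloud"] in ("AzureCloud", "AzureUSGovernment")
```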

We also needed a way to read, validate, and act on it during deployments. That's where the Config Loader came in. It loads the config.yaml, makes sure all the required values are present, and prepares the environment for what comes next.

Under the hood, it provides a handful of critical methods:
- load_config() to read and validate the YAML file
- set_cloud() to select the correct Azure endpoint (Commercial or Government)
- az_login() to handle authentication
- run_az_command() to safely execute Azure CLI commands using the config values
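As an illustrative sketch of how these methods might fit together (the method names mirror the list above; everything else, including the required keys and the login mode, is an assumption):

```python
import subprocess

import yaml

REQUIRED_KEYS = {"cloud", "subscription", "resource_group", "location", "bicep_dir"}


class ConfigLoader:
    """Loads and validates config.yaml, then prepares the Azure CLI context."""

    def __init__(self, path: str = "config.yaml"):
        self.path = path
        self.config: dict = {}

    def load_config(self) -> dict:
        with open(self.path) as f:
            self.config = yaml.safe_load(f)
        missing = REQUIRED_KEYS - self.config.keys()
        if missing:
            raise ValueError(f"config.yaml is missing required keys: {sorted(missing)}")
        return self.config

    def set_cloud(self) -> None:
        # "AzureCloud" = Commercial, "AzureUSGovernment" = Government.
        self.run_az_command(["cloud", "set", "--name", self.config["cloud"]])

    def az_login(self) -> None:
        # Shown here with a managed identity; other login modes work the same way.
        self.run_az_command(["login", "--identity"])
        self.run_az_command(["account", "set", "--subscription", self.config["subscription"]])

    def run_az_command(self, args: list[str]) -> str:
        # check=True surfaces failures immediately; output is captured for audit logs.
        result = subprocess.run(["az", *args], check=True, capture_output=True, text=True)
        return result.stdout
```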
By separating what to deploy (templates) from where to deploy (config), we ensured consistent, repeatable deployments across Azure environments.
2. Dynamic Parameter Generation with Bicep Parser
Managing parameters across multiple environments is a common challenge in Infrastructure as Code (IaC). We addressed it with the Bicep Parser component.
The Parser:
- Parses Bicep templates using regex to extract parameter names, types, and defaults.
- Cross-references parameters with the environment-specific YAML configuration file.
- Generates parameter files dynamically with defaults, environment values, or compound variables such as SSH keys.
- Supports federated identity mappings and dynamic role assignments.
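A simplified sketch of the regex-based extraction might look like the following. The pattern handles the common `param name type = default` form; real templates with decorators or multi-line defaults would need more handling:

```python
import json
import re

# Matches Bicep lines such as: param clusterName string = 'aks-dev'
PARAM_RE = re.compile(
    r"^param\s+(?P<name>\w+)\s+(?P<type>\w+)(?:\s*=\s*(?P<default>.+))?",
    re.MULTILINE,
)


def parse_bicep_params(template_text: str) -> dict:
    """Extract parameter names, types, and defaults from a Bicep template."""
    return {
        m.group("name"): {"type": m.group("type"), "default": m.group("default")}
        for m in PARAM_RE.finditer(template_text)
    }


def build_param_file(template_text: str, env_config: dict) -> str:
    """Emit an ARM-style parameters document, preferring environment values over defaults."""
    entries = {}
    for name, meta in parse_bicep_params(template_text).items():
        value = env_config.get(name, meta["default"])
        if value is not None:
            entries[name] = {"value": value}
    return json.dumps({"parameters": entries}, indent=2)
```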

3. Turning Workflows into Executable Steps
The next challenge was orchestrating the deployment. We handled this with Workflow Loader, a component of our Infra Deployer that reads workflow definitions from a YAML file and converts them into executable deployment steps.
The workflow file defines which resources to deploy, in what order, and with which dependencies. For example, a simple steps.yaml might provision a network, deploy an AKS cluster, and then configure monitoring.

For more complex scenarios, the workflow supports advanced controls like retries, error handling, and output management. Instead of just "deploy an AKS cluster," a workflow specifies how it should be deployed, which files to use, and what to do if a step fails. Attributes like dependsOn ensure proper sequencing, while error handling enables self-healing with configurable retries.
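Putting those pieces together, a minimal steps.yaml for the scenario above might look like this (the step and field names are assumptions based on the attributes described in this post):

```python
import yaml

# A hypothetical steps.yaml; step and field names are illustrative.
EXAMPLE_WORKFLOW = """
steps:
  - name: network
    template: network.bicep
    parameters: network.params.json
  - name: aks-cluster
    template: aks.bicep
    parameters: aks.params.json
    dependsOn: [network]
    retries: 2
  - name: monitoring
    template: monitoring.bicep
    parameters: monitoring.params.json
    dependsOn: [aks-cluster]
"""

workflow = yaml.safe_load(EXAMPLE_WORKFLOW)
for step in workflow["steps"]:
    print(step["name"], "depends on", step.get("dependsOn", []))
```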
Workflow Loader provides three core capabilities:
- load_workflow() – Reads and validates the workflow YAML file.
- parse_steps() – Breaks down the workflow into individual, executable steps.
- workflow_path() – Ensures all file references (templates, parameters, scripts) are correctly located before execution.
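Here is a minimal sketch of how those three methods might fit together (the validation rules are assumptions):

```python
from pathlib import Path

import yaml


class WorkflowLoader:
    """Reads a workflow YAML file and turns it into ordered, executable steps."""

    def __init__(self, workflow_file: str, base_dir: str = "."):
        self.workflow_file = workflow_file
        self.base_dir = Path(base_dir)

    def load_workflow(self) -> dict:
        with open(self.workflow_file) as f:
            workflow = yaml.safe_load(f)
        if not isinstance(workflow, dict) or "steps" not in workflow:
            raise ValueError("workflow file must define a top-level 'steps' list")
        return workflow

    def parse_steps(self) -> list[dict]:
        steps = self.load_workflow()["steps"]
        for step in steps:
            # Resolve template and parameter references before execution begins.
            for key in ("template", "parameters"):
                if key in step:
                    step[key] = str(self.workflow_path(step[key]))
        return steps

    def workflow_path(self, relative: str) -> Path:
        path = self.base_dir / relative
        if not path.exists():
            raise FileNotFoundError(f"referenced file not found: {path}")
        return path
```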
Together with the Config Loader, the Workflow Loader made Infra Deployer environment-aware and process-aware. The Config Loader manages which settings to apply, while the Workflow Loader manages step execution. This separation of concerns allowed us to change environments via a simple tweak to config.yaml or adjust the workflow logic without modifying the infrastructure code itself.
4. Workflow Orchestration
The Workflow Executor in Infra Deployer drives deployment workflows, translating defined steps into actions across Azure environments.
Its responsibilities are threefold:
- Execution control – The run() method manages the main execution loop, ensuring steps run in the correct order.
- Step handling – The execute_step() method converts YAML-defined actions into actual Azure resource deployments.
- Dynamic adjustments – Methods such as update_param_if_null() and update_with_custom_params() apply environment-specific or user-supplied values, letting deployments adapt automatically without manual intervention.
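A simplified sketch of that execution loop, assuming steps arrive pre-ordered to satisfy dependsOn and each step maps to a Bicep deployment through the Azure CLI (the parameter-adjustment methods are omitted for brevity):

```python
import subprocess
import time


class WorkflowExecutor:
    """Runs parsed workflow steps in order, with retry-based error handling."""

    def __init__(self, steps: list[dict], config: dict):
        self.steps = steps          # assumed pre-ordered to satisfy dependsOn
        self.config = config
        self.outputs: dict[str, str] = {}

    def run(self) -> None:
        completed: set[str] = set()
        for step in self.steps:
            missing = set(step.get("dependsOn", [])) - completed
            if missing:
                raise RuntimeError(f"step {step['name']} ran before dependencies: {missing}")
            self.execute_step(step)
            completed.add(step["name"])

    def execute_step(self, step: dict) -> None:
        retries = step.get("retries", 0)
        for attempt in range(retries + 1):
            try:
                # Each step maps to a Bicep deployment executed through the Azure CLI.
                result = subprocess.run(
                    ["az", "deployment", "group", "create",
                     "--resource-group", self.config["resource_group"],
                     "--template-file", step["template"],
                     "--parameters", step["parameters"]],
                    check=True, capture_output=True, text=True,
                )
                self.outputs[step["name"]] = result.stdout
                return
            except subprocess.CalledProcessError:
                if attempt == retries:
                    raise
                time.sleep(10 * (attempt + 1))  # simple backoff before the retry
```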
Together, these capabilities form a flexible execution layer that ensures deployments are validated, correctly configured, and fully auditable.
5. From Configuration to Provisioning
To standardize execution, we built an Azure CLI wrapper, a component that encapsulates Azure CLI commands. It formed the execution layer of Infra Deployer, ensuring that all operations are secure, reusable, and aligned with FedRAMP compliance across environments.
This approach was especially valuable for database access, where creating PostgreSQL roles and credentials can be error-prone.
By combining a custom Python script with Azure CLI, we automated database provisioning:
- Programmatically created PostgreSQL roles
- Randomly generated passwords and stored them securely in Azure Key Vault
- Consistently applied access policies across environments
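A sketch of what that provisioning flow might look like (the host, database, and secret names are illustrative, and a real implementation would validate the role name and use parameterized SQL over a private endpoint):

```python
import secrets
import subprocess


def provision_pg_role(host: str, database: str, role: str, vault: str) -> None:
    """Create a PostgreSQL role with a random password and store it in Key Vault."""
    password = secrets.token_urlsafe(32)

    # Create the role; in production, validate `role` and avoid string-built SQL.
    create_sql = f"CREATE ROLE {role} WITH LOGIN PASSWORD '{password}';"
    subprocess.run(["psql", f"host={host} dbname={database}", "-c", create_sql], check=True)

    # Store the generated credential in Key Vault instead of code or config files.
    subprocess.run(
        ["az", "keyvault", "secret", "set",
         "--vault-name", vault,
         "--name", f"{role}-password",
         "--value", password],
        check=True,
    )
```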
Beyond database management, the Azure CLI wrapper provides a standardized set of reusable operations, abstracting common CLI actions into reliable methods that eliminate repetitive scripting:
- run_command() – Executes Azure CLI commands securely.
- list_acr() – Lists container registries within a resource group.
- register_provider() – Registers Azure resource providers when needed.
- show_provider_registration() – Verifies provider registration status for compliance.
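These methods could be thin wrappers over standard `az` commands, for example (a sketch with error handling and logging trimmed):

```python
import json
import subprocess


class AzureCli:
    """Thin wrapper that standardizes Azure CLI calls across Infra Deployer."""

    def run_command(self, args: list[str]) -> dict | list:
        result = subprocess.run(["az", *args, "--output", "json"],
                                check=True, capture_output=True, text=True)
        return json.loads(result.stdout) if result.stdout.strip() else {}

    def list_acr(self, resource_group: str) -> list:
        return self.run_command(["acr", "list", "--resource-group", resource_group])

    def register_provider(self, namespace: str) -> None:
        self.run_command(["provider", "register", "--namespace", namespace])

    def show_provider_registration(self, namespace: str) -> str:
        provider = self.run_command(["provider", "show", "--namespace", namespace])
        return provider.get("registrationState", "Unknown")
```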
With these capabilities, every deployment, whether networks, AKS clusters, databases, or container registries, became part of a repeatable, auditable, and compliant workflow.
Outcomes and Impact
With Infra Deployer, we could set up a fully FedRAMP High-compliant Azure Government Cloud environment in about two hours. Over 30 resources were deployed automatically, every action was logged for audits, and development, staging, and production environments were identical from day one.
| Metric | Result |
| --- | --- |
| Deployment Time | ~2 hours (end-to-end) |
| Resources Deployed | 30+ fully automated |
| Configuration Drift | 0% (identical across environments) |
| Audit Coverage | 100% automated logging |
Compliance Achievements
Compliance was embedded from the start. Secrets were managed securely in Azure Key Vault, RBAC was enforced across deployments, and audit-ready logging required no additional effort. This approach eliminated configuration drift and reduced errors by more than 95 percent.
The impact was immediate. Teams onboarded faster because environments were operational in a few hours. Compliance requirements were met without slowing delivery, and the framework provided a repeatable model for scaling securely.
On the technical side, early Bicep template validation caught issues before deployment. Automating all possible steps minimized errors and ensured consistency across environments, while rigorous secret management reinforced both stability and compliance.
Conclusion
This project demonstrates that organizations do not have to choose between agility and compliance. With a well-designed automation framework, it is possible to achieve both.
That said, infrastructure was just the starting point. The next challenge was scaling applications: deploying and managing 40+ interconnected microservices with secure secrets management, compliance alignment, and controlled rollouts.
In Part 2 of this series, we will explore how KubePilot streamlined Kubernetes deployments at scale, creating a consistent and compliant orchestration layer on top of the infrastructure.