Once the agent is running, this page covers everything you need to configure in the EKS Manager app — from the initial steps required before a cluster can function, to the ongoing operations you perform over time. All operations are available via the GUI and API.
These steps must be completed in order before a cluster is usable. The zone must exist before the core stack can be installed, and the core stack must be installed before any other per-cluster configuration.
A DNS zone must be created in the spoke account before any cluster can be provisioned there. The Route 53 hosted zone lives in the same account as the cluster. You choose how the wildcard certificate is issued:
| Mode | Certificate | Notes |
|---|---|---|
| self-signed | Wildcard for *.dev.yourdomain.internal, stored in Secrets Manager | Good for testing and development; not recommended for production |
| byo | Wildcard for *.dev.yourdomain.com, uploaded to Secrets Manager | Recommended path for production environments |

| Parameter | Description |
|---|---|
| domain | Root domain for this environment e.g. dev.yourdomain.com |
| account-id | Spoke account where the Route 53 hosted zone will be created — must be in the EKS OU |
| cert-mode | self-signed or byo |
| cert (byo only) | Wildcard certificate from your internal CA in PEM format |
| cert-key (byo only) | Private key for the certificate in PEM format |
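For byo mode, it can be worth sanity-checking the certificate and key pair before uploading them. Below is a minimal sketch using the Python cryptography package; the file names and domain are placeholders, not anything EKS Manager requires.

```python
from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Placeholder file names: use the PEM files issued by your internal CA.
cert = x509.load_pem_x509_certificate(open("wildcard.pem", "rb").read())
key = serialization.load_pem_private_key(open("wildcard-key.pem", "rb").read(), password=None)

# The certificate must cover the wildcard for the zone's root domain.
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
assert "*.dev.yourdomain.com" in san.get_values_for_type(x509.DNSName)

# The private key must correspond to the certificate's public key.
assert key.public_key().public_numbers() == cert.public_key().public_numbers()
```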
The agent auto-detects accounts in the EKS OU and registers them in the GUI. Installing the core stack is the action that makes a cluster managed. Once triggered, the agent provisions the cluster and configures everything it needs to operate independently.
The agent pushes the zone's dns_zone_cert secret into each namespace on the cluster. On any subsequent cert renewal, all copies across all namespaces are updated automatically.

| Parameter | Description |
|---|---|
| cluster-name | Name for the EKS cluster |
| account-id | Spoke account to provision the cluster into — must have a zone already created |
| region | AWS region — must match the zone and hub account region |
| eks-admin-arn | IAM or SSO permission set role ARN granted cluster-admin on this cluster |
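After triggering the install, the easiest way to watch progress outside the GUI is to query the cluster directly in the spoke account. A short sketch with boto3, assuming spoke-account credentials; the region and cluster name are placeholders and must match the parameters above.

```python
import boto3

# Placeholder region and cluster name; both must match what was submitted above.
eks = boto3.client("eks", region_name="eu-west-1")

cluster = eks.describe_cluster(name="dev-cluster-1")["cluster"]
print(cluster["status"])        # CREATING while the agent provisions, ACTIVE when done
print(cluster.get("endpoint"))  # API server endpoint, present once the cluster is up
```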
App users are the people who can log into and use the EKS Manager application. Authentication is handled by AWS Cognito — email, password, and 2FA. This is separate from cluster-level access, which is managed via the EKS admin ARN and SSO.
| Parameter | Description |
|---|---|
| email | User's email address — used as the Cognito username |
| role | App role — admin or viewer |
The user receives an email invitation and sets their own password. 2FA is enforced on first login.
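If an invite goes astray, the underlying Cognito user can be inspected directly with boto3. This is a sketch under the assumption that you know the user pool the app uses; the pool ID, region, and email are placeholders.

```python
import boto3

cognito = boto3.client("cognito-idp", region_name="eu-west-1")

user = cognito.admin_get_user(
    UserPoolId="eu-west-1_EXAMPLE",    # placeholder user pool ID
    Username="alice@yourdomain.com",   # the email address is the Cognito username
)
print(user["UserStatus"])  # FORCE_CHANGE_PASSWORD until the user sets their own password
```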
These operations are performed after the core stack is installed. They can be done in any order and revisited over time as your requirements change.
Headlamp is a Kubernetes UI installed per cluster. Authentication is SAML-based via Microsoft Entra (Azure AD). Because each cluster has its own ACS URL, you need to create a separate Entra Enterprise Application per cluster.
| Parameter | Description |
|---|---|
| cluster-name | Target cluster to install Headlamp on |
| tenant-id | Microsoft Entra tenant ID |
| saml-metadata-url | SAML metadata URL from the Entra Enterprise Application for this cluster |
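It can save a failed install to confirm the metadata URL resolves before submitting it. A quick sketch; the URL shape below is the standard Entra federation metadata format, and the tenant and app IDs are placeholders.

```python
import requests
import xml.etree.ElementTree as ET

# Placeholder IDs; copy the App Federation Metadata Url from the Enterprise Application.
metadata_url = (
    "https://login.microsoftonline.com/<tenant-id>"
    "/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"
)

resp = requests.get(metadata_url, timeout=10)
resp.raise_for_status()

root = ET.fromstring(resp.content)
print(root.tag, root.attrib.get("entityID"))  # SAML EntityDescriptor and issuer
```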
EKS Manager installs and manages ArgoCD on the cluster. Local ArgoCD accounts are used for both human operators and CI/CD service accounts; these are not Entra/SSO users. Two global roles are available, assignable via the API.
| Role | Purpose | Permissions |
|---|---|---|
| global-admin | Human operators, CI/CD service accounts | Full ArgoCD access — create, update, sync, and delete apps across all clusters |
| global-readonly | Observers, auditors, dashboards | Read-only access across all apps and clusters — no write or sync permissions |
| Parameter | Description |
|---|---|
| cluster-name | Target cluster to install ArgoCD on |
| admin-password | Initial ArgoCD admin password — stored in Secrets Manager under /EksManager/argocd/<cluster> |
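Once installed, the initial admin password can be read back from the hub account's Secrets Manager under the path shown above. A sketch with boto3; the cluster name is a placeholder, and whether the stored value is a plain string or JSON may vary.

```python
import boto3

# Assumes hub-account credentials and the hub region.
sm = boto3.client("secretsmanager", region_name="eu-west-1")

secret = sm.get_secret_value(SecretId="/EksManager/argocd/dev-cluster-1")
admin_password = secret["SecretString"]  # may be a plain string or a JSON document
```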
Add local accounts after ArgoCD is installed. The CI/CD service account used by the k8s-deploy workflow requires global-admin as it manages the full app lifecycle including deploys and deletions.
The k8s-deploy workflow authenticates to ArgoCD using the global-admin credentials; these must be set as encrypted secrets in your GitHub repository or organisation.

| Parameter | Description |
|---|---|
| cluster-name | Target cluster |
| username | ArgoCD local account username e.g. ci-deploy |
| role | global-admin or global-readonly |
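Any client of the ArgoCD API, including the CI/CD service account, authenticates a local account by exchanging its username and password for a session token. A minimal sketch against ArgoCD's standard REST API; the host and credentials are placeholders.

```python
import requests

ARGOCD_URL = "https://argocd.dev.yourdomain.com"  # placeholder host

# Exchange local-account credentials for a short-lived session token.
resp = requests.post(
    f"{ARGOCD_URL}/api/v1/session",
    json={"username": "ci-deploy", "password": "<from your secret store>"},
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["token"]

# The token is sent as a bearer token on subsequent calls, e.g. listing applications.
apps = requests.get(
    f"{ARGOCD_URL}/api/v1/applications",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(len(apps.json().get("items") or []))
```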
GitOps Manager™ open-source workflows
Both GitHub Apps below are required to use the GitOps Manager CI/CD toolchain. Together they provide a complete, secure, on-demand GitOps pipeline — container builds run privately inside your own EKS cluster, images are pushed to your hub ECR, and deployments are synced to Kubernetes only when explicitly triggered.
The k8s-deploy workflow renders manifest templates with per-deployment values ({{ cluster_name }}, {{ namespace }}, {{ dns_zone }}), applies Kustomize, commits rendered manifests to the continuous-deployment repo, and triggers an ArgoCD sync. Deployments are on-demand — ArgoCD never auto-syncs; it only syncs when the workflow explicitly triggers it.

Because the continuous-deployment repo holds a clusters/<cluster>/namespaces/<app>/ tree that builds up over time as a source of truth, you can clone an entire cluster's workloads to a new cluster by re-running the deploy workflow against a different cluster target. This makes full cluster restore or environment cloning straightforward — no manual manifest reconstruction required.

This GitHub App gives ArgoCD read access to your private continuous-deployment repository. The repo holds a clusters/<cluster>/namespaces/<app>/ tree that the k8s-deploy workflow writes rendered Kustomize manifests into. ArgoCD watches its cluster's subtree and syncs only when explicitly triggered by the deploy workflow.
| Parameter | Description |
|---|---|
| github-org | GitHub organisation name |
| cd-repo | Name of the private continuous-deployment repository |
| app-id | GitHub App ID from the app settings page |
| private-key | GitHub App private key in PEM format — stored in Secrets Manager under /EksManager/github/cd |
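To confirm the App ID and private key pair up before submitting them, you can mint a GitHub App JWT and call the /app endpoint, which is GitHub's standard app authentication flow. A sketch with PyJWT; the App ID and key path are placeholders.

```python
import time
import jwt        # PyJWT (needs the cryptography package for RS256)
import requests

APP_ID = "123456"                                    # placeholder App ID
private_key = open("cd-app.private-key.pem").read()  # placeholder key path

now = int(time.time())
app_jwt = jwt.encode(
    {"iat": now - 60, "exp": now + 300, "iss": APP_ID},
    private_key,
    algorithm="RS256",
)

# A valid JWT returns the app's metadata.
resp = requests.get(
    "https://api.github.com/app",
    headers={"Authorization": f"Bearer {app_jwt}", "Accept": "application/vnd.github+json"},
    timeout=10,
)
print(resp.status_code, resp.json().get("slug"))
```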
Installs GitHub Actions Runner Controller (ARC) into the cluster using the multicloud-build-action open source toolchain. Runners build container images privately inside your EKS cluster using EKS pod identity to push directly to the hub ECR — no credentials stored, no external build infrastructure. A separate GitHub App is used for ARC runner registration, distinct from the CD app.
| Parameter | Description |
|---|---|
| cluster-name | Target cluster to install ARC runners on |
| namespace | Kubernetes namespace to deploy the runner controller into |
| github-org | GitHub organisation name |
| app-id | GitHub App ID for the ARC runner app |
| private-key | GitHub App private key in PEM format — stored in Secrets Manager under /EksManager/github/runner/<cluster> |
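Once the controller is up, you can check that runners register with your organisation through the GitHub REST API. A sketch with placeholders for the organisation and token; note that depending on the ARC mode, ephemeral runners may only appear while a job is running.

```python
import requests

# Placeholder org and token; the token needs permission to read org self-hosted runners.
resp = requests.get(
    "https://api.github.com/orgs/your-org/actions/runners",
    headers={"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"},
    timeout=10,
)
resp.raise_for_status()
for runner in resp.json()["runners"]:
    print(runner["name"], runner["status"])
```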
Secrets are created via the GUI or API and relayed directly to your hub account's AWS Secrets Manager under /EksManager/*. Once stored, secrets never leave your cloud. Deployment to clusters is on-demand — the agent pushes the secret as a Kubernetes secret only when triggered by you.
| Parameter | Description |
|---|---|
| secret-name | Name for the secret — stored under /EksManager/<secret-name> in Secrets Manager |
| value | Secret value — write-only, not readable back through EKS Manager after creation |
| clusters | List of cluster names to deploy the secret to |
| namespaces | List of namespaces within each cluster to create the Kubernetes secret in |
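After triggering a deployment, you can confirm the Kubernetes secret landed in the target namespace. A sketch using the official kubernetes Python client, assuming your kubeconfig already points at the cluster; the secret and namespace names are placeholders.

```python
from kubernetes import client, config

# Assumes a kubeconfig context for the target cluster (e.g. via the eks-admin role).
config.load_kube_config()
v1 = client.CoreV1Api()

secret = v1.read_namespaced_secret(name="my-app-db-password", namespace="my-app")
print(sorted(secret.data.keys()))  # key names only; values are base64-encoded
```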
Over time you may need to renew certs, add new domains, or associate additional clusters with an existing zone. When a cert is updated, the agent automatically scans all namespaces across all clusters associated with that zone and pushes the updated dns_zone_cert secret to every one.
| Parameter | Description |
|---|---|
| zone-id | Existing zone to update |
| cert-mode | self-signed or byo |
| cert (byo only) | New wildcard certificate in PEM format |
| cert-key (byo only) | New private key in PEM format |
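Before uploading a renewed certificate, it's worth checking that the new PEM really does extend the expiry. A short sketch with the Python cryptography package; the file name is a placeholder.

```python
from cryptography import x509

new_cert = x509.load_pem_x509_certificate(open("wildcard-renewed.pem", "rb").read())
print(new_cert.not_valid_after)           # must be later than the expiring certificate
print(new_cert.subject.rfc4514_string())  # sanity-check the subject
```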
Need to revisit the bootstrap process or understand what the agent does under the hood? The onboarding page covers Phase 1 and the technical detail of cluster provisioning.