Cluster Setup

Configure & manage your clusters

Once the agent is running, this page covers everything you need to configure in the EKS Manager app: the initial steps required before a cluster can function, and the ongoing operations you perform over time. All operations are available via the GUI and the API.

Required to function
Initial cluster setup

These steps must be completed in order before a cluster is usable. The zone must exist before the core stack can be installed, and the core stack must be installed before any other per-cluster configuration.

STEP 1
Create zone
Per spoke account · prerequisite

A DNS zone must be created in the spoke account before any cluster can be provisioned there. The Route 53 hosted zone lives in the same account as the cluster. You choose how the wildcard certificate is issued.

Certificate modes
Self-signed testing
EKS Manager generates a one-off self-signed wildcard cert for an internal domain, e.g. *.dev.yourdomain.internal, and stores it in Secrets Manager. Good for testing and development; not recommended for production.
Corporate CA / BYO cert enterprise
You provide a wildcard cert issued by your internal CA, e.g. *.dev.yourdomain.com, which is uploaded to Secrets Manager. This is the recommended path for production environments.
Required parameters
Parameter             Description
domain                Root domain for this environment, e.g. dev.yourdomain.com
account-id            Spoke account where the Route 53 hosted zone will be created; must be in the EKS OU
cert-mode             self-signed or byo
cert (byo only)       Wildcard certificate from your internal CA, in PEM format
cert-key (byo only)   Private key for the certificate, in PEM format
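
If you script zone creation, a request in BYO mode could look like the sketch below. This is a minimal illustration, not the documented API surface: the base URL, endpoint path, and token handling are placeholders, while the field names follow the parameter table above.

# Hypothetical sketch: create a zone in BYO-cert mode via the EKS Manager API.
# The base URL, endpoint path, and token handling are assumptions.
import os
import requests

API = "https://eks-manager.example.com/api"    # placeholder base URL
TOKEN = os.environ["EKS_MANAGER_TOKEN"]        # placeholder auth token

with open("wildcard-cert.pem") as cert_file, open("wildcard-key.pem") as key_file:
    payload = {
        "domain": "dev.yourdomain.com",
        "account-id": "111122223333",          # spoke account in the EKS OU
        "cert-mode": "byo",
        "cert": cert_file.read(),
        "cert-key": key_file.read(),
    }

resp = requests.post(f"{API}/zones", json=payload,
                     headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
resp.raise_for_status()
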
STEP 2
Install core stack
Per cluster · triggers management

The agent auto-detects accounts in the EKS OU and registers them in the GUI. Installing the core stack is the action that makes a cluster managed. Once triggered, the agent provisions the cluster and configures everything it needs to operate independently.

What installing the core stack does
1
Provisions the EKS cluster
Creates the EKS cluster in the spoke account using the zone and account provided.
2
Creates a per-cluster KMS key
A dedicated KMS key is created in the spoke account and used to encrypt Kubernetes secrets stored in etcd at rest from day one. The key is owned by the spoke account and never leaves it.
3
Creates the node role
A node role is created with the required managed policies for ECR image pulling, EKS worker registration, SSM access, CNI networking, autoscaling, and EBS CSI driver support.
4
Creates pod identity associations
Creates an ARC pod identity for pushing images to the hub ECR registry, and a Route 53 pod identity for managing DNS records in the cluster's zone.
5
Deploys zone cert to all namespaces
The agent scans all namespaces and pushes the dns_zone_cert secret into each one. On any subsequent cert renewal, all copies across all namespaces are updated automatically.
Required parameters
Parameter       Description
cluster-name    Name for the EKS cluster
account-id      Spoke account to provision the cluster into; must have a zone already created
region          AWS region; must match the zone and hub account region
eks-admin-arn   IAM or SSO permission set role ARN granted cluster-admin on this cluster
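
Once the core stack has finished, the results can be verified from the spoke account with read-only AWS calls. A minimal boto3 sketch, assuming spoke-account credentials; the cluster name and region are examples:

# Read-only verification sketch (boto3): confirm secrets encryption uses the
# per-cluster KMS key and that the pod identity associations exist.
import boto3

eks = boto3.client("eks", region_name="eu-west-1")         # example region

cluster = eks.describe_cluster(name="dev-cluster-1")["cluster"]
for cfg in cluster.get("encryptionConfig", []):
    print("Encrypted resources:", cfg["resources"])         # expect ['secrets']
    print("KMS key ARN:", cfg["provider"]["keyArn"])        # key owned by the spoke account

assocs = eks.list_pod_identity_associations(clusterName="dev-cluster-1")["associations"]
for assoc in assocs:
    print(assoc["namespace"], assoc["serviceAccount"], assoc["associationArn"])
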
STEP 3
Manage app users
App-level · anytime

App users are the people who can log into and use the EKS Manager application. Authentication is handled by AWS Cognito (email, password, and 2FA). This is separate from cluster-level access, which is managed via the EKS admin ARN and SSO.

Required parameters
Parameter   Description
email       User's email address; used as the Cognito username
role        App role: admin or viewer

The user receives an email invitation and sets their own password. 2FA is enforced on first login.
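
App users can also be added programmatically. A hypothetical sketch, reusing the same assumed base URL and token as the zone example earlier; the field names follow the parameter table above:

# Hypothetical sketch: add app users via the EKS Manager API (endpoint path is an assumption).
import os
import requests

API = "https://eks-manager.example.com/api"    # placeholder base URL
TOKEN = os.environ["EKS_MANAGER_TOKEN"]

for email, role in [("ops@yourdomain.com", "admin"), ("auditor@yourdomain.com", "viewer")]:
    requests.post(f"{API}/users", json={"email": email, "role": role},
                  headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30).raise_for_status()
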

Ongoing operations
Per-cluster configuration

These operations are performed after the core stack is installed. They can be done in any order and revisited over time as your requirements change.

OPTIONAL
Setup Headlamp
Per cluster

Headlamp is a Kubernetes UI installed per cluster. Authentication is SAML-based via Microsoft Entra (Azure AD). Because each cluster has its own ACS URL, you need to create a separate Entra Enterprise Application per cluster.

⚠️ Manual prereq — Entra Enterprise App
Before running this operation, create an Enterprise Application in your Microsoft Entra tenant for this cluster. You must be an App Owner to add the reply URL. Note the SAML metadata URL from the app; it is required below.
Required parameters
Parameter           Description
cluster-name        Target cluster to install Headlamp on
tenant-id           Microsoft Entra tenant ID
saml-metadata-url   SAML metadata URL from the Entra Enterprise Application for this cluster
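
Before submitting, it is worth confirming that the SAML metadata URL copied from the Enterprise Application resolves and returns federation metadata. A small sketch; the URL shown is the usual Entra pattern with placeholders:

# Sanity check: the Entra SAML metadata URL should be reachable and return
# federation metadata (an EntityDescriptor document). Placeholders in angle brackets.
import requests

metadata_url = ("https://login.microsoftonline.com/<tenant-id>/"
                "federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>")

resp = requests.get(metadata_url, timeout=30)
resp.raise_for_status()
assert "EntityDescriptor" in resp.text, "unexpected response; check the app's metadata URL"
print("Metadata OK:", len(resp.text), "bytes")
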
REQUIRED
Setup ArgoCD
Per cluster

EKS Manager installs and manages ArgoCD on the cluster. Local ArgoCD accounts are used for both human operators and CI/CD service accounts; these are local accounts, not Entra/SSO users. Two global roles are available; both are defined via the API.

Available roles
Role              Purpose                                    Permissions
global-admin      Human operators, CI/CD service accounts    Full ArgoCD access: create, update, sync, and delete apps across all clusters
global-readonly   Observers, auditors, dashboards            Read-only access across all apps and clusters; no write or sync permissions
Install ArgoCD — required parameters
Parameter        Description
cluster-name     Target cluster to install ArgoCD on
admin-password   Initial ArgoCD admin password; stored in Secrets Manager under /EksManager/argocd/<cluster>
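
Because the admin password is written to Secrets Manager in your hub account, it can be retrieved there with standard AWS tooling if you ever need it outside the app. A minimal boto3 sketch, assuming hub-account credentials; the cluster name and region are examples:

# Retrieve the initial ArgoCD admin password from the hub account's Secrets Manager.
# The secret path follows the convention above.
import boto3

sm = boto3.client("secretsmanager", region_name="eu-west-1")
secret = sm.get_secret_value(SecretId="/EksManager/argocd/dev-cluster-1")
admin_password = secret["SecretString"]
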
Add ArgoCD users — required parameters

Add local accounts after ArgoCD is installed. The CI/CD service account used by the k8s-deploy workflow requires global-admin, as it manages the full app lifecycle, including deploys and deletions.

GitHub repository environment variables required
The k8s-deploy workflow authenticates to ArgoCD using the global-admin credentials. These must be set as encrypted secrets in your GitHub repository or organisation:
ARGO_CD_ADMIN_USER # the global-admin username you created above
ARGO_CD_ADMIN_PASSWORD # the global-admin password
Parameter      Description
cluster-name   Target cluster
username       ArgoCD local account username, e.g. ci-deploy
role           global-admin or global-readonly
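
Once the CI account exists, you can confirm its credentials authenticate against ArgoCD before wiring them into CI. A sketch against ArgoCD's session endpoint; the server URL is an example, and the credentials are read from the same names used for the GitHub secrets above:

# Verify the global-admin local account can log in to ArgoCD.
# The ArgoCD server URL is an example for this cluster's zone.
import os
import requests

argocd = "https://argocd.dev-cluster-1.dev.yourdomain.com"   # example server URL

resp = requests.post(f"{argocd}/api/v1/session", json={
    "username": os.environ["ARGO_CD_ADMIN_USER"],
    "password": os.environ["ARGO_CD_ADMIN_PASSWORD"],
}, timeout=30)
resp.raise_for_status()
token = resp.json()["token"]      # bearer token for subsequent ArgoCD API calls
print("ArgoCD login OK")
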
Open source CI/CD toolchain

GitOps Manager™ open-source workflows

Both GitHub Apps below are required to use the GitOps Manager CI/CD toolchain. Together they provide a complete, secure, on-demand GitOps pipeline — container builds run privately inside your own EKS cluster, images are pushed to your hub ECR, and deployments are synced to Kubernetes only when explicitly triggered.

gitopsmanager/multicloud-build-action open source
Multi-cloud GitHub Action that builds and pushes Docker images with Docker Bake to AWS ECR using EKS pod identity and BuildKit caching. Builds run privately inside your own EKS cluster via GitHub ARC — no credentials exposed, no external build infrastructure required.
gitopsmanager/k8s-deploy open source
Reusable GitHub Actions workflow for GitOps-driven Kubernetes deployments. Resolves cluster-specific template variables ({{ cluster_name }}, {{ namespace }}, {{ dns_zone }}), applies Kustomize, commits rendered manifests to the continuous-deployment repo, and triggers an ArgoCD sync. Deployments are on-demand — ArgoCD never auto-syncs, it only syncs when the workflow explicitly triggers it.
Deploy flow
Code push → ARC runner builds container privately in EKS
          → pushes image to hub ECR (image tag updated in ECR)
          → deploy action triggered (by user or automatically by build workflow)
          → template variables resolved per cluster
          → Kustomize patches image tag + ingress from ECR
          → rendered manifests committed to clusters/<cluster>/namespaces/<app>/
          → ArgoCD sync triggered against that path
          → K8s deployment updated
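
The template-resolution step in the flow above amounts to a per-cluster substitution of the {{ cluster_name }}, {{ namespace }}, and {{ dns_zone }} placeholders before Kustomize runs. The sketch below illustrates the idea only; it is not the actual k8s-deploy implementation:

# Illustration of per-cluster template resolution (not the real k8s-deploy code).
def resolve_template(manifest: str, values: dict) -> str:
    for key, value in values.items():
        manifest = manifest.replace("{{ " + key + " }}", value)
    return manifest

manifest = """\
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: {{ namespace }}
spec:
  rules:
    - host: myapp.{{ dns_zone }}
"""

print(resolve_template(manifest, {
    "cluster_name": "dev-cluster-1",
    "namespace": "myapp",
    "dns_zone": "dev.yourdomain.com",
}))
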
Restore cluster
Because the continuous-deployment repo holds a clusters/<cluster>/namespaces/<app>/ tree that builds up over time as a source of truth, you can clone an entire cluster's workloads to a new cluster by re-running the deploy workflow against a different cluster target. This makes full cluster restore or environment cloning straightforward — no manual manifest reconstruction required.
REQUIRED
Setup GitHub App — Continuous Deployment repo
Per GitHub org

This GitHub App gives ArgoCD read access to your private continuous-deployment repository. The repo holds a clusters/<cluster>/namespaces/<app>/ tree that the k8s-deploy workflow writes rendered Kustomize manifests into. ArgoCD watches its cluster's subtree and syncs only when explicitly triggered by the deploy workflow.

⚠️ Manual prereq — GitHub App
Create a GitHub App in your GitHub organisation and install it on your private continuous-deployment repository. Note the App ID and generate a private key — both are required below.
Required parameters
Parameter     Description
github-org    GitHub organisation name
cd-repo       Name of the private continuous-deployment repository
app-id        GitHub App ID from the app settings page
private-key   GitHub App private key in PEM format; stored in Secrets Manager under /EksManager/github/cd
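
Before uploading them, you can check that the App ID and private key belong together by minting a short-lived app JWT and calling the GitHub API. A sketch using PyJWT (one option among RS256-capable JWT libraries); the App ID and key file name are examples:

# Sanity check: sign a GitHub App JWT with the private key and confirm the
# GitHub API accepts it for this App ID. Requires the PyJWT and cryptography packages.
import time
import jwt        # PyJWT
import requests

APP_ID = "123456"                                  # example App ID
with open("cd-app-private-key.pem") as f:
    private_key = f.read()

now = int(time.time())
app_jwt = jwt.encode({"iat": now - 60, "exp": now + 540, "iss": APP_ID},
                     private_key, algorithm="RS256")

resp = requests.get("https://api.github.com/app",
                    headers={"Authorization": f"Bearer {app_jwt}",
                             "Accept": "application/vnd.github+json"},
                    timeout=30)
resp.raise_for_status()
print("App OK:", resp.json()["slug"])
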
REQUIRED
Setup GitHub App — ARC Runner
Per cluster

Installs GitHub Actions Runner Controller (ARC) into the cluster using the multicloud-build-action open source toolchain. Runners build container images privately inside your EKS cluster using EKS pod identity to push directly to the hub ECR — no credentials stored, no external build infrastructure. A separate GitHub App is used for ARC runner registration, distinct from the CD app.

⚠️ Manual prereq — GitHub App
Create a separate GitHub App in your GitHub organisation for ARC runner registration and install it on your organisation. Note the App ID and generate a private key — both are required below.
Required parameters
Parameter      Description
cluster-name   Target cluster to install ARC runners on
namespace      Kubernetes namespace to deploy the runner controller into
github-org     GitHub organisation name
app-id         GitHub App ID for the ARC runner app
private-key    GitHub App private key in PEM format; stored in Secrets Manager under /EksManager/github/runner/<cluster>
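
After the install completes, a quick read-only check that the runner controller pods are up can be done with the official Kubernetes Python client, assuming a kubeconfig context for the target cluster; the namespace is an example:

# Read-only check: list pods in the ARC runner namespace.
from kubernetes import client, config

config.load_kube_config()                     # or load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("arc-runners").items:   # example namespace
    print(pod.metadata.name, pod.status.phase)
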
ONGOING
Secrets deployment
Per secret · on-demand

Secrets are created via the GUI or API and relayed directly to your hub account's AWS Secrets Manager under /EksManager/*. Once stored, secrets never leave your cloud. Deployment to clusters is on-demand — the agent pushes the secret as a Kubernetes secret only when triggered by you.

🔒 Security model
WRITE-ONLY   Secret values are never readable back through EKS Manager — once written they cannot be retrieved via the GUI or API
VERSIONED   A new value can be pushed on top of an existing secret — the old value is never exposed
NOT PERSISTED   The EKS Manager server acts as a transient relay only — the secret travels over TLS via SignalR to the agent and is written directly to Secrets Manager in your hub account. Nothing is stored server-side.
END-TO-END TLS   Both the HTTPS leg (GUI → server) and the SignalR leg (server → agent) are TLS encrypted — the secret is never in plaintext on the wire
Secret relay flow
User enters secret in GUI
  → POSTed to server over HTTPS (TLS)
  → server relays to agent via SignalR (TLS) — not persisted
  → agent writes to Secrets Manager in hub account
  → secret never leaves customer cloud from this point
  → on deploy trigger: agent reads from Secrets Manager
  → pushes as Kubernetes secret to target clusters/namespaces
Required parameters
Parameter     Description
secret-name   Name for the secret; stored under /EksManager/<secret-name> in Secrets Manager
value         Secret value; write-only, not readable back through EKS Manager after creation
clusters      List of cluster names to deploy the secret to
namespaces    List of namespaces within each cluster to create the Kubernetes secret in
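
Because the stored secret lives in your hub account, you can confirm it exists and inspect its version history with standard AWS tooling, without ever reading the value back through EKS Manager. A boto3 sketch, assuming hub-account credentials and an example secret name:

# Confirm a relayed secret exists and is versioned, without reading its value.
import boto3

sm = boto3.client("secretsmanager", region_name="eu-west-1")
meta = sm.describe_secret(SecretId="/EksManager/my-app-db-password")   # example name

print("Created:", meta["CreatedDate"])
for version_id, stages in meta["VersionIdsToStages"].items():
    print(version_id, stages)          # AWSCURRENT marks the latest stored value
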
ONGOING
Manage zones & certs
Per spoke account · on-demand

Over time you may need to renew certs, add new domains, or associate additional clusters with an existing zone. When a cert is updated, the agent automatically scans all namespaces across all clusters associated with that zone and pushes the updated dns_zone_cert secret to every one.

Renew or replace a cert
Parameter             Description
zone-id               Existing zone to update
cert-mode             self-signed or byo
cert (byo only)       New wildcard certificate in PEM format
cert-key (byo only)   New private key in PEM format
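
Before renewing, it can be useful to check how long the current wildcard cert has left. A small sketch using the cryptography package to parse the PEM; the file name is an example:

# Check the expiry of the current wildcard certificate (requires cryptography >= 42).
from datetime import datetime, timezone
from cryptography import x509

with open("wildcard-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)
print("Subject:", cert.subject.rfc4514_string())
print("Expires:", cert.not_valid_after_utc, f"({remaining.days} days left)")
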
Next steps
Back to onboarding

Need to revisit the bootstrap process or understand what the agent does under the hood? The onboarding page covers Phase 1 and the technical detail of cluster provisioning.

← Onboarding Contact Us →