Deploy ArgoCD on AWS

Introduction

This tutorial covers how to set up ArgoCD in a multi-cluster AWS environment. Since it took me quite some time to figure out how to get this working, I’d like to share a working approach with you.

If you copy and paste every command in this tutorial, you should end up with a working setup. I’ll try to explain the mechanics a little bit along the way, but to truly understand everything I recommend reading the AWS and ArgoCD documentation.

The setup we’ll be creating is the following:

  • A management cluster that will host ArgoCD.

  • An AWS IAM role “role/ArgoCD” that ArgoCD will assume.

  • A testing cluster that we’ll deploy a guestbook application into.

  • An AWS IAM role “role/Deployer” that has permissions to deploy applications in your testing cluster.

[Diagram: ArgoCD in the management cluster assumes role/ArgoCD, which in turn assumes role/Deployer to deploy into the testing cluster.]

EKS Clusters hosted by AWS do cost money, so please be careful to clean up any resources you end up creating. At the end of the tutorial I’ll provide a couple of commands that will clean everything up.

I’ve tested all the commands in this tutorial on macOS; they will probably also work on Linux. If you use this on Windows, you will have to rework some of the templating commands.

Prerequisites
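
To follow along you’ll need an AWS account in which you’re allowed to create EKS clusters and IAM roles, and the following command-line tools installed: eksctl, the aws CLI, kubectl, wget and the argocd CLI. A quick way to confirm they’re all present (any reasonably recent version should do):

eksctl version

aws --version

kubectl version --client

argocd version --client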

Steps to take

Before we start I want to give you a brief overview of the steps we will follow and the estimated time they will take to complete.

Create management cluster

  • Provision the management cluster (15 minutes)

  • Create AWS IAM Role (5 minutes)

Setting up ArgoCD

  • Install ArgoCD (5 minutes)

  • Patch ArgoCD (5 minutes)

Creating testing cluster

  • Provision the testing cluster (15 minutes)

  • Create AWS IAM Role (5 minutes)

Deploy Guestbook application

  • Register testing cluster (2 minutes)

  • Register guestbook application (5 minutes)

Cleanup resources (15 minutes)

Create management cluster

Provision the management cluster

We’ll create two clusters in a single AWS account. The configuration will also work with cross-account and multi-cluster setups, by setting the proper trust relationships between the IAM roles.

eksctl create cluster --name management --with-oidc

The above command creates the management cluster and everything it needs to function (VPC, security groups, an EC2 nodegroup, etc). This can take quite some time to complete (10 to 20 minutes, so go get yourself a cup of coffee), but don’t worry: the rest of this tutorial won’t take as long.
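
Once it finishes, you can double-check that the cluster exists (a quick sanity check, not strictly required):

eksctl get cluster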

We’ve created the management cluster with an OIDC provider. The AWS IAM Authenticator packaged with ArgoCD will use this provider to acquire a token, with which it can assume the AWS IAM role “role/ArgoCD” that we’ll create next.
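
If you want to verify that eksctl registered the OIDC provider in IAM, list the providers in your account; you should see an entry whose ARN ends with the cluster’s OIDC id:

aws iam list-open-id-connect-providers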

Creating AWS IAM role

Now let’s create an AWS IAM role that ArgoCD can use. AWS has detailed documentation on this subject at https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html. We’ll use the part described under “To create your IAM role with the AWS CLI” and adjust it to our needs.

First set our AWS account ID and the OIDC_PROVIDER of the management cluster as environment variables:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

OIDC_PROVIDER=$(aws eks describe-cluster --name management --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
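
A quick sanity check that both variables were filled (the values will differ per account and cluster):

echo "${AWS_ACCOUNT_ID} ${OIDC_PROVIDER}"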

Create a trust.json file with the variables you set. In it we also reference the argocd namespace and allow all service accounts in that namespace to assume the AWS IAM role we’ll create in the next step (system:serviceaccount:argocd:*).

read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:argocd:*"
        }
      }
    }
  ]
}
EOF

echo "${TRUST_RELATIONSHIP}" > trust.json && cat trust.json

Make sure trust.json includes the proper AWS_ACCOUNT_ID and OIDC_PROVIDER. We will use trust.json while creating the AWS IAM Role:

aws iam create-role --role-name ArgoCD --assume-role-policy-document file://trust.json --description "IAM Role to be used by ArgoCD to gain AWS access"

Finally we’ll create an inline policy that gives the IAM role the ability to assume other roles. We need this so ArgoCD can assume the Deployer role we’ll create later.

(We could also use the ArgoCD role directly for that purpose, but I find having a separate role for deploying resources into the testing cluster more flexible and more secure.)

read -r -d '' POLICY <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "*"
        }
    ]
}
EOF

echo "${POLICY}" > policy.json && cat policy.json

aws iam put-role-policy --role-name ArgoCD --policy-name AssumeRole --policy-document file://policy.json

You can use the AWS web console to check the role we just created. The main thing to note is that we created this role so that the Kubernetes service accounts that ArgoCD uses in the management cluster have an AWS IAM role to assume through the OIDC provider.
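
If you prefer the CLI over the web console, the same check can be done with two standard aws iam commands:

aws iam get-role --role-name ArgoCD

aws iam get-role-policy --role-name ArgoCD --policy-name AssumeRole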

Setting up ArgoCD

Install ArgoCD

Create a namespace for ArgoCD in the management cluster, then download and apply the v1.8.3 install manifest:

kubectl create namespace argocd

wget https://raw.githubusercontent.com/argoproj/argo-cd/v1.8.3/manifests/install.yaml

kubectl -n argocd apply -f install.yaml

After the install is completed you should see several pods running:

kubectl -n argocd get pods
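
If you’d rather wait for the rollout from the command line than poll get pods, the standard kubectl rollout commands work here (the application controller is a statefulset in this version of the manifests):

kubectl -n argocd rollout status deployment argocd-server

kubectl -n argocd rollout status statefulset argocd-application-controller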

Note that the name of the argocd-server pod is important: it is also the initial password of the admin user that ArgoCD creates automatically, so be sure to copy and save it. An easy command to get the name:

ARGOCD_PASSWORD=$(kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2)
echo $ARGOCD_PASSWORD

For the purpose of this tutorial we won’t set up any ingress controllers; you are free to do that yourself. To reach the admin interface we’ll use a local port-forward instead.

Open up an extra terminal and run the command:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Now you can navigate your browser to https://localhost:8080 and log in with username admin and the password you copied earlier. (Your browser will warn about the certificate; the server uses a self-signed one by default.)

Patch ArgoCD

In order to instruct ArgoCD to use the role we defined earlier, we need to annotate the Kubernetes service accounts that ArgoCD uses with the ARN of the role. The kubectl patch command provides an easy way to adjust a Kubernetes resource:

kubectl -n argocd patch serviceaccount argocd-application-controller --type=json \
    -p="[{\"op\": \"add\", \"path\": \"/metadata/annotations/eks.amazonaws.com~1role-arn\", \"value\": \"arn:aws:iam::${AWS_ACCOUNT_ID}:role/ArgoCD\"}]"

kubectl -n argocd patch serviceaccount argocd-server --type=json \
    -p="[{\"op\": \"add\", \"path\": \"/metadata/annotations/eks.amazonaws.com~1role-arn\", \"value\": \"arn:aws:iam::${AWS_ACCOUNT_ID}:role/ArgoCD\"}]"

It is important that the annotations show the correct ARN of the ArgoCD role, otherwise ArgoCD won’t know which AWS IAM role to assume. Check that the service accounts were changed correctly with:

kubectl -n argocd describe serviceaccount argocd-server

kubectl -n argocd describe serviceaccount argocd-application-controller

Patch the argocd-server deployment and the application-controller statefulset to set securityContext/fsGroup to 999, so the user in the Docker image can actually use the IAM Authenticator. You need this because the IAM Authenticator will try to mount a secret on /var/run/secrets/eks.amazonaws.com/serviceaccount/token. If the correct fsGroup (999 corresponds to the argocd user) isn’t set, this will fail.

kubectl -n argocd patch deployment argocd-server --type=json  \
    -p='[{"op": "add", "path": "/spec/template/spec/securityContext/fsGroup", "value": 999}]'

kubectl -n argocd patch statefulset argocd-application-controller --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/securityContext/fsGroup", "value": 999}]'

After patching the deployment and the statefulset, you should see the application-controller and argocd-server pods restart.
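
You can verify that the patch landed with a jsonpath query; the output should include an fsGroup of 999:

kubectl -n argocd get deployment argocd-server -o jsonpath='{.spec.template.spec.securityContext}'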

Take a look at the extra terminal you opened earlier. The port-forward will have broken because the pod restarted, so restore it:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Creating testing cluster

Provision the testing cluster

Now that we’ve got a working management cluster with ArgoCD installed, we need to set up the testing cluster that ArgoCD will deploy to. We’ll create another cluster with everything it needs, so execute this command and go get yourself another cup of coffee:

eksctl create cluster --name testing

The management cluster we created earlier was created with an OIDC provider. We won’t need that on the testing cluster. We do, however, need an AWS IAM role capable of deploying applications inside this cluster.
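
Note that eksctl also updates your kubeconfig and points kubectl at the cluster it just created, so your current context should now be the testing cluster:

kubectl config current-context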

Create AWS IAM Role

We want the ArgoCD role to be able to assume the Deployer role. That’s why we create a trust relationship that references the ArgoCD role. (In a multi-account setup you would change this trust relationship to reference the ArgoCD role in the account that holds the management cluster, and you would place the Deployer role in the same account as the testing cluster.)

read -r -d '' TRUST_TESTING <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/ArgoCD"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
EOF

echo "${TRUST_TESTING}" > trust-testing.json && cat trust-testing.json

Make sure the AWS_ACCOUNT_ID in trust-testing.json points to the account that holds the ArgoCD role. Then create the Deployer role and map it to a Kubernetes user in the testing cluster:

aws iam create-role --role-name Deployer --assume-role-policy-document file://trust-testing.json --description "IAM Role to be used by ArgoCD to deploy into the testing cluster"

eksctl create iamidentitymapping --cluster testing  --arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/Deployer --group system:masters --username deployer

You should now see something like:

[ℹ]  eksctl version 0.36.2
[ℹ]  using region eu-central-1
[ℹ]  adding identity "arn:aws:iam::123456789:role/Deployer" to auth ConfigMap
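
You can double-check the mapping that eksctl just wrote to the aws-auth ConfigMap with:

eksctl get iamidentitymapping --cluster testing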

Deploy Guestbook application

Register testing cluster

We need to register the testing cluster with ArgoCD to be able to create applications that will deploy to it. We’ll do that by registering a secret with all the specific cluster details and deploying that secret into the management cluster.

To be able to do this we need:

  • Server: the HTTPS endpoint of the cluster’s Kubernetes API.

  • caData: the corresponding public certificate of the Kubernetes API.

  • roleArn: the role ArgoCD will have to assume to be able to deploy into this cluster.

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

CLUSTER_ENDPOINT=$(aws eks describe-cluster --name testing --query "cluster.endpoint" --output text)

CLUSTER_CERT=$(aws eks describe-cluster --name testing --query "cluster.certificateAuthority.data" --output text)

read -r -d '' TESTING_CLUSTER <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: testing-cluster
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: testing
  server: ${CLUSTER_ENDPOINT}
  config: |
    {
      "awsAuthConfig": {
          "clusterName": "testing",
          "roleARN": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/Deployer"
      },
      "tlsClientConfig": {
        "caData": "${CLUSTER_CERT}"
      }
    }
EOF

echo "${TESTING_CLUSTER}" > testing-cluster.yml && cat testing-cluster.yml

Confirm that the file shows the proper cluster roleArn, certificate and endpoint.

Switch the kubectl context back to the management cluster:

kubectl config use-context $(kubectl config get-contexts -o=name | grep management)

Finally register the cluster with ArgoCD:

kubectl -n argocd apply -f testing-cluster.yml

You should be able to find it in the UI of ArgoCD now, with a status of ‘unknown’.

Register Guestbook application

We’ll use a publicly available guestbook application to try out our setup. We can add the application and its public repository with a single argocd command.

Be sure the ArgoCD admin interface is still available by opening up your extra terminal and running:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Next, log in to ArgoCD with the CLI (accept the certificate warning; the server uses a self-signed certificate) and create the guestbook application:

argocd login localhost:8080 --username admin --password ${ARGOCD_PASSWORD}

argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-namespace default --dest-name testing --directory-recurse

If you visit the UI you should see a guestbook application that is out of sync. Sync it to let ArgoCD create the kubernetes resources inside the testing cluster.
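
If you prefer the CLI to the UI, the sync can also be triggered and awaited from the terminal with the standard argocd app commands:

argocd app sync guestbook

argocd app wait guestbook --health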

This is the basic working setup; you can fine-tune it in many ways. For example, in the trust relationship of role/ArgoCD we allowed every service account inside the argocd namespace to assume the role. You could tighten that so only the specific argocd-server and application-controller service accounts are trusted. That, however, is an exercise left for the reader.

Clean up

eksctl delete cluster management

eksctl delete cluster testing

aws iam delete-role --role-name Deployer

aws iam delete-role-policy --role-name ArgoCD --policy-name AssumeRole

aws iam delete-role --role-name ArgoCD

Some resources may take a while to disappear, even after the delete commands return. This is due to the way AWS cleans up resources like EC2 instances and security groups. Check that the EKS clusters are deleted and that the EC2 nodes are in the terminated state.
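
A couple of quick checks to confirm everything is gone (the clusters should no longer be listed, and the role query should return an empty list):

aws eks list-clusters

aws iam list-roles --query "Roles[?RoleName=='ArgoCD' || RoleName=='Deployer'].RoleName"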

Congratulations if you made it all the way here. I hope you found this tutorial useful.

Timothy Kanters - Modulo 2