External DNS lets us define public DNS entries in our Kubernetes configuration, and has the cluster create those DNS entries via the Route53 (or whatever) API when they're applied.
Create an IAM role with permission to create DNS entries #
We already created this when setting up cert-manager.
Configure Route53 credentials #
Create a secret with Route53 credentials. This is slightly different in structure from the cert-manager secret, but (at least for my use case) contains the same credential. The secret we have to make contains the contents of an AWS credentials file. You can make it like this:
cat > aws.creds.txt <<EOF
[default]
aws_access_key_id = xxxxxxxx
aws_secret_access_key = yyyyyyyy
EOF
credsfile="$(base64 -w 0 aws.creds.txt)"
cat > manifests/crust/external-dns/secrets/aws-route53-credential.yaml <<EOF
kind: Secret
apiVersion: v1
type: Opaque
metadata:
  name: external-dns-aws-credential-secret
  namespace: external-dns
data:
  credentials: $credsfile
EOF
rm aws.creds.txt
sops --encrypt --in-place manifests/crust/external-dns/secrets/aws-route53-credential.yaml
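The encode step is easy to get subtly wrong (stray trailing newlines, wrapped base64 output). Here's a minimal sanity check, assuming GNU coreutils `base64`, that the value you put in `data.credentials` decodes back to exactly the file you wrote; the keys below are placeholders:

```shell
# Round-trip check: the base64 value in the Secret should decode back to the
# original credentials file byte-for-byte. Keys here are placeholders.
printf '[default]\naws_access_key_id = xxxxxxxx\naws_secret_access_key = yyyyyyyy\n' > aws.creds.txt
encoded="$(base64 -w 0 aws.creds.txt)"   # -w 0 disables line wrapping (GNU coreutils)
printf '%s' "$encoded" | base64 -d > aws.creds.decoded.txt
cmp aws.creds.txt aws.creds.decoded.txt && echo "round trip ok"
rm aws.creds.txt aws.creds.decoded.txt
```

If `cmp` reports a difference, check for an accidental trailing newline or a wrapped base64 string.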
An annoyed aside
The docs in the external-dns example parameters file were not clear to me. They say:
## AWS configuration to be set via arguments/env. variables
##
aws:
  ## AWS credentials
  ## @param aws.credentials.secretKey When using the AWS provider, set `aws_secret_access_key` in the AWS credentials (optional)
  ## @param aws.credentials.accessKey When using the AWS provider, set `aws_access_key_id` in the AWS credentials (optional)
  ## @param aws.credentials.mountPath When using the AWS provider, determine `mountPath` for `credentials` secret
  ##
  credentials:
    secretKey: ""
    accessKey: ""
    ## Before external-dns 0.5.9 home dir should be `/root/.aws`
    ##
    mountPath: "/.aws"
    ## @param aws.credentials.secretName Use an existing secret with key "credentials" defined.
    ## This ignores aws.credentials.secretKey, and aws.credentials.accessKey
    ##
    secretName: ""
You'd think you might create a secret object with a `credentials` key containing `secretKey` and `accessKey`, right? But if you do, that's actually an invalid secret (secrets can contain only key:value pairs, no nested objects), and you'll get an error about `unrecognized type: string`.
Upon realizing this, you might think to use `secretKey` and `accessKey` directly, but this is also wrong. You will see errors with a command like `kubectl logs external-dns-7d69b5b986-2cqgj -n external-dns -f --since 10m`, and they will contain lines like:

time="2023-01-28T05:18:42Z" level=error msg="records retrieval failed: failed to list hosted zones: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"
Instead, you need to set a `data` (not `stringData`!) secret, with a key called `credentials` whose value is a base64-encoded AWS credentials file, as we did above. Then you have to mount that inside the external-dns container.
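Concretely, the chart values that wire the secret up look something like the following. This is a sketch against the bitnami external-dns chart; the secret name matches the one we created above, but treat the exact keys as assumptions to verify against your chart version:

```yaml
# Sketch of bitnami external-dns chart value overrides (not a full values file).
provider: aws
aws:
  credentials:
    # Existing secret with a "credentials" key holding an AWS credentials file;
    # setting secretName makes the chart mount it for us.
    secretName: external-dns-aws-credential-secret
    # Where the credentials file gets mounted inside the container.
    mountPath: "/.aws"
```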
This is explained a bit better in the external-dns documentation for AWS, under "Static credentials" and "Manifest (for clusters without RBAC enabled)". The documentation and most uses of the AWS DNS provider for external-dns seem to assume IAM credentials inside an AWS-hosted EKS cluster; we have to use static credentials because our cluster is bare metal.
Save this as kubernasty/manifests/crust/external-dns/aws-route53-credential.example.yaml, and modify it to contain a real credential. Then encrypt it with sops:

sops --encrypt --in-place kubernasty/manifests/crust/external-dns/aws-route53-credential.yaml
Make sure not to commit any unencrypted credentials files.
WARNING: If you ever change the secret, including during initial troubleshooting, you may need to kill the external-dns pod to get it to pick up the change. You can do that by finding the replicaset name with `kubectl get replicaset -n external-dns`, and then deleting it with `kubectl delete replicaset <replica-set-name> -n external-dns`. The Deployment (managed by Flux) will automatically recreate a replicaset deleted this way, which will mount your secrets file before starting a new external-dns process, ensuring it picks up the latest credentials.
Configure and deploy external-dns #
- I followed https://geek-cookbook.funkypenguin.co.nz/kubernetes/external-dns/.
- It is also worth looking at the readme for the bitnami external-dns chart at https://github.com/bitnami/charts/tree/main/bitnami/external-dns.
- The external-dns documentation for AWS was also helpful, but note that the bitnami chart handles some things for you. For instance, specifying `secretName` to the chart the way we did handles mounting the credentials file as a volume in the container.
Create kubernasty/manifests/crust/external-dns/configmaps/configmap-overrides.yaml. I also added configmap.yaml.dist.txt containing the entirety of the external-dns parameters. This way, when I'm upgrading external-dns, I can diff the defaults I configured against the new version's defaults. I prefer this to inlining the entire default parameter list into the overrides file, since it makes it much easier to see what I'm trying to configure.
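The upgrade workflow that motivates keeping configmap.yaml.dist.txt around can be sketched like this; the file contents below are made up for illustration:

```shell
# Compare the chart defaults saved at install time against a new version's
# defaults; the interesting output is what changed between releases.
mkdir -p /tmp/extdns-diff && cd /tmp/extdns-diff
printf 'logLevel: info\ninterval: 1m\n' > configmap.yaml.dist.txt       # saved defaults
printf 'logLevel: info\ninterval: 5m\n' > configmap.yaml.dist.new.txt   # new chart defaults
diff configmap.yaml.dist.txt configmap.yaml.dist.new.txt || true        # diff exits 1 when files differ
```

Any hunks in the diff are defaults that changed upstream, which is exactly the list of things to reconsider in the overrides file.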
We have a cluster primary endpoint we create in kubernasty/manifests/crust/external-dns-endpoints/; this lets us confirm that everything really is working. It will also be the target for app-specific CNAMEs going forward.
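One way to define such an endpoint is external-dns's DNSEndpoint custom resource, which external-dns watches when the crd source is enabled in the chart. This is a hedged sketch under that assumption; the resource name, hostname, and IP below are placeholders, not the real cluster values:

```yaml
# Assumed example: a DNSEndpoint CR for the cluster's primary A record.
# Hostname and target IP are placeholders.
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: cluster-primary
  namespace: external-dns
spec:
  endpoints:
    - dnsName: cluster.example.com
      recordType: A
      recordTTL: 300
      targets:
        - 203.0.113.10
```

Once external-dns reconciles this, the record should appear in the Route53 hosted zone, and app-specific CNAMEs can point at `cluster.example.com`.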
Then commit everything to git, push, and wait for Flux to apply.