Using Terraform with AWS for learning Kubernetes the hard way.
In `terraform.tf`, set your Terraform Cloud organization and workspace name:

```hcl
cloud {
  organization = "<your organization name>"

  workspaces {
    name = "<your workspace name>"
  }
}
```
As an alternative to Terraform Cloud, refer to the Authentication and Configuration section of the Terraform AWS Provider documentation.
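For example, one common option from that documentation is to pass static credentials as environment variables, which the AWS provider picks up automatically (the values below are placeholders for credentials from your own AWS account):

```sh
# Placeholder credentials — replace with your own
export AWS_ACCESS_KEY_ID="<your access key id>"
export AWS_SECRET_ACCESS_KEY="<your secret access key>"
export AWS_REGION="<your region>"
```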
Requirements: ssh-keygen
Following the instructions in the AWS documentation "Create a key pair using a third-party tool and import the public key to Amazon EC2", we generate a public/private key pair using `ssh-keygen`:

```sh
# Length = 4096
# Format = PEM
# Type = RSA
# Filename = access-key
# No password
ssh-keygen -b 4096 -m PEM -t rsa -N "" -f access-key
```
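As an optional sanity check, you can print the fingerprint and bit length of the generated key before importing it:

```sh
# Should report a 4096-bit RSA key
ssh-keygen -lf access-key.pub
```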
and import it to EC2 using Terraform:

```hcl
# main.tf
...
resource "aws_key_pair" "access_key" {
  key_name   = "<your access key name>"
  public_key = "<content of generated file access-key.pub>"
}
```
Requirements: Terraform CLI

```sh
# Log in to your Terraform Cloud account
terraform login

# Then apply the infrastructure to your account
terraform apply
```
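If you prefer to review the changes before applying them, the standard Terraform workflow applies here as well:

```sh
# Initialize the working directory (first run only)
terraform init

# Review the execution plan without changing anything
terraform plan
```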
Requirements: Cloudflare's CFSSL
The following steps are condensed in the `bootstrap.sh` script for quick setup.
After creating the infrastructure in the previous step "Create the infrastructure", make sure to `cd` into `certificates/`, then run the following:
- CA certificate: Use the provided CSR config file to generate a CA certificate and private key (a verification example follows this list).

  ```sh
  cfssl gencert -initca ca-csr.json | cfssljson -bare ca
  ```
- Client certificates: Generate a client certificate and private key for each Kubernetes component.
  - The `admin` client certificate and private key:

    ```sh
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      admin-csr.json | cfssljson -bare admin
    ```
  - The Kubelet client certificates and private keys:

    ```sh
    # Examine the gen script then execute it
    ./gen-kublet-client-cert.sh

    # Result
    worker-0.pem
    worker-0-key.pem
    # same for all workers 0, 1, ...
    ```
  - The `kube-controller-manager` client certificate and private key:

    ```sh
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
    ```
  - The `kube-proxy` client certificate and private key:

    ```sh
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      kube-proxy-csr.json | cfssljson -bare kube-proxy
    ```
  - The `kube-scheduler` client certificate and private key:

    ```sh
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      kube-scheduler-csr.json | cfssljson -bare kube-scheduler
    ```
- Kubernetes API server certificate: The project's static IP (an AWS Elastic IP) is added to the list of SANs of the Kubernetes API server certificate so that the certificate is validated by remote clients (see the SAN check after this list).

  ```sh
  # Examine the gen script then execute it
  ./gen-kubernetes-api-server-cert.sh
  ```
- Service Account key pair: The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens, as described in the managing service accounts documentation.

  ```sh
  cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes \
    service-account-csr.json | cfssljson -bare service-account
  ```
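The generated certificates can be inspected with standard tooling. For example, while still in `certificates/` (the file name `kubernetes.pem` matches the API server certificate referenced later in the ETCD verification step):

```sh
# Print subject, issuer, and validity window of the CA certificate
cfssl certinfo -cert ca.pem

# Confirm the static public IP appears in the API server certificate's SANs
openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"
```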
Requirements: scp, Terraform CLI
We will use the access key access-key.pem created in the previous step "Generate access key pair" to copy the certificates to the host instances via SSH.
Make sure to `cd` into `certificates/`, then run the following:
- Certificates and private keys to the worker instances:

  ```sh
  # Examine the copy script then execute it
  # You can provide the path to your EC2 access key if it is different from access-key.pem at the project root directory
  ./copy-workers-certs.sh [path-to-access-key.pem]
  ```
- Certificates and private keys to the controller instances:

  ```sh
  # Examine the copy script then execute it
  # You can provide the path to your EC2 access key if it is different from access-key.pem at the project root directory
  ./copy-controllers-certs.sh [path-to-access-key.pem]
  ```
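To spot-check that the copy succeeded, you can list the files on one of the instances over SSH. This sketch assumes the access key sits at the project root and that the copy scripts place the files in the remote user's home directory; adjust if your scripts target another location:

```sh
# List the copied certificates on a worker instance (destination directory assumed)
ssh -i ../access-key.pem ubuntu@<worker public address> ls -l
```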
Requirements: kubectl, Terraform CLI
The following steps are condensed in the `bootstrap.sh` script for quick setup.
We will generate the Kubernetes configuration files (kubeconfigs) that enable the Kubernetes clients to locate and authenticate to the Kubernetes API servers (a sketch of what these gen scripts do follows the list).
Make sure to `cd` into `configs/`, then run the following:
- The Kubelets configuration files:

  ```sh
  # Examine the gen script then execute it
  # You can provide the path to the 'certificates' directory if it is different from certificates/ at the project root directory
  ./gen-kubelets-kubeconfig.sh [path-to-certificates-directory]
  ```
- The `kube-proxy` configuration file:

  ```sh
  # Examine the gen script then execute it
  # You can provide the path to the 'certificates' directory if it is different from certificates/ at the project root directory
  ./gen-kube-proxy-kubeconfig.sh [path-to-certificates-directory]
  ```
- The `kube-controller-manager` configuration file:

  ```sh
  # Examine the gen script then execute it
  # You can provide the path to the 'certificates' directory if it is different from certificates/ at the project root directory
  ./gen-kube-controller-manager-kubeconfig.sh [path-to-certificates-directory]
  ```
- The `kube-scheduler` configuration file:

  ```sh
  # Examine the gen script then execute it
  # You can provide the path to the 'certificates' directory if it is different from certificates/ at the project root directory
  ./gen-kube-scheduler-kubeconfig.sh [path-to-certificates-directory]
  ```
- The `admin` configuration file:

  ```sh
  # Examine the gen script then execute it
  # You can provide the path to the 'certificates' directory if it is different from certificates/ at the project root directory
  ./gen-admin-kubeconfig.sh [path-to-certificates-directory]
  ```
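Under the hood, a kubeconfig gen script typically boils down to a handful of `kubectl config` commands. A minimal sketch for the admin kubeconfig, assuming the cluster name `kubernetes-the-hard-way` and the certificates directory at the project root:

```sh
# Point the kubeconfig at the cluster CA and the API server's public endpoint
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=../certificates/ca.pem \
  --embed-certs=true \
  --server=https://<kubernetes public ip>:6443 \
  --kubeconfig=admin.kubeconfig

# Register the admin client certificate and key
kubectl config set-credentials admin \
  --client-certificate=../certificates/admin.pem \
  --client-key=../certificates/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

# Bind the cluster and the user together in a context, then select it
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
```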
Requirements: scp, Terraform CLI
We will use the access key access-key.pem created in the previous step "Generate access key pair" to copy the Kubernetes configuration files to the host instances via SSH.
Make sure to `cd` into `configs/`, then run the following:
- Kubernetes configuration files to the worker instances:

  ```sh
  # Examine the copy script then execute it
  # You can provide the path to your EC2 access key if it is different from access-key.pem at the project root directory
  ./copy-workers-kubeconfig.sh [path-to-access-key.pem]
  ```
- Kubernetes configuration files to the controller instances:

  ```sh
  # Examine the copy script then execute it
  # You can provide the path to your EC2 access key if it is different from access-key.pem at the project root directory
  ./copy-controllers-kubeconfig.sh [path-to-access-key.pem]
  ```
We will generate the encryption key and config used by Kubernetes to encrypt Secrets.
Make sure to `cd` into `encryption/`, then run the following:

```sh
# Generate a key
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Create a config file
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $ENCRYPTION_KEY
      - identity: {}
EOF
```
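Since the `aescbc` provider expects a 32-byte key, you can verify that the generated value decodes to exactly 32 bytes (on older macOS versions, `base64` may need `-D` instead of `-d`):

```sh
# Should print 32
echo -n "$ENCRYPTION_KEY" | base64 -d | wc -c
```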
Requirements: scp, Terraform CLI
We will use the access key access-key.pem created in the previous step "Generate access key pair" to copy the encryption configuration file to the host instances via SSH.
Make sure to `cd` into `encryption/`, then run the following:

```sh
# Examine the copy script then execute it
# You can provide the path to your EC2 access key if it is different from access-key.pem at the project root directory
./copy-controllers-encryption-config.sh [path-to-access-key.pem]
```
From the root project directory, we will set the controller instances to restart with a user-data script that contains all the instructions to download, configure, and start ETCD and the Kubernetes components on every controller instance.
The user data file controller-user-data.sh starts with a part that instructs cloud-init to run the script on every restart (by default, user data runs only on the initial launch); see the AWS documentation on running user data on every boot for more details.
ETCD requires a mapping of the hosts in the cluster, `<controller-i> -> https://<host-i-private-ip>:2380`. The steps below show how to generate this mapping and substitute it into the user data script so that it is available to ETCD (the variable `ETCD_INITIAL_CLUSTER` initially contains a placeholder value). The same goes for the Kubernetes API server: it requires a few parameters that are set by fetching the configuration from Terraform and replacing placeholders in the user data script.
To bootstrap the ETCD and Kubernetes cluster, run the following:
```sh
# Generate a comma-separated list of ETCD cluster mappings <controller-i> -> https://<host-i-private-ip>:2380
ETCD_INITIAL_CLUSTER=$(terraform output -json | jq -r '.kubernetes_controllers_private_ip_addresses.value | to_entries | map("\(.key)=https://\(.value):2380") | join(",")')
KUBERNETES_ETCD_SERVERS=$(terraform output -json | jq -r '.kubernetes_controllers_private_ip_addresses.value | to_entries | map("https://\(.value):2379") | join(",")')
```
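Echoing the two variables is a quick way to confirm they have the expected shape before substituting them (the addresses below are hypothetical):

```sh
# Expected shape: controller-0=https://10.0.1.10:2380,controller-1=https://10.0.1.11:2380,...
echo "$ETCD_INITIAL_CLUSTER"
# Expected shape: https://10.0.1.10:2379,https://10.0.1.11:2379,...
echo "$KUBERNETES_ETCD_SERVERS"
```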
```sh
# Replace in the user data script (GNU sed, Linux)
sed -i "s+ETCD_INITIAL_CLUSTER_PLACEHOLDER+$ETCD_INITIAL_CLUSTER+" controller-user-data.sh
sed -i "s+KUBERNETES_ETCD_SERVERS_PLACEHOLDER+$KUBERNETES_ETCD_SERVERS+" controller-user-data.sh

## macOS users (BSD sed) need to add "" after -i
sed -i "" "s+ETCD_INITIAL_CLUSTER_PLACEHOLDER+$ETCD_INITIAL_CLUSTER+" controller-user-data.sh
sed -i "" "s+KUBERNETES_ETCD_SERVERS_PLACEHOLDER+$KUBERNETES_ETCD_SERVERS+" controller-user-data.sh

# Get the Kubernetes public IP
KUBERNETES_PUBLIC_IP=$(terraform output -json | jq -r '.kubernetes_public_ip_address.value')

# Replace in the user data script
sed -i "s+KUBERNETES_PUBLIC_IP_PLACEHOLDER+$KUBERNETES_PUBLIC_IP+" controller-user-data.sh
## macOS users need to add "" after -i
sed -i "" "s+KUBERNETES_PUBLIC_IP_PLACEHOLDER+$KUBERNETES_PUBLIC_IP+" controller-user-data.sh

# Get the count of Kubernetes controllers
CONTROLLERS_COUNT=$(terraform output -json | jq -r '.kubernetes_controllers_count.value')

# Replace in the user data script
sed -i "s+CONTROLLERS_COUNT_PLACEHOLDER+$CONTROLLERS_COUNT+" controller-user-data.sh
## macOS users need to add "" after -i
sed -i "" "s+CONTROLLERS_COUNT_PLACEHOLDER+$CONTROLLERS_COUNT+" controller-user-data.sh
```
```sh
# Apply the infrastructure to ship the user data script to the controller instances (without replacing the instances)
# Make sure to verify the parameters have been correctly set in controller-user-data.sh
terraform apply -var "install_controller_user_data=true"

# SSH into one of the controller instances
ssh -i access-key.pem ubuntu@<controller public address>

# Verify ETCD is running
ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

# Verify the Kubernetes control plane processes are running
systemctl status kube-apiserver kube-controller-manager kube-scheduler
```
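Finally, back on your local machine, you can hit the API server's version endpoint through the public IP using the CA certificate generated earlier (the `$KUBERNETES_PUBLIC_IP` variable was set in the bootstrap steps above):

```sh
# Should return a JSON document with the Kubernetes version
curl --cacert certificates/ca.pem https://$KUBERNETES_PUBLIC_IP:6443/version
```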