
Wildcard Domain Certificate Using Route53 and Let's Encrypt


Introduction

I have already posted about how we can automate the installation of Jenkins & Nginx with Ansible. I have also done a post about how to enable HTTPS on a non-wildcard basis, i.e. only for the root domain and not for subdomains.

Today I’ll go through how to get and configure an HTTPS certificate from Let’s Encrypt for all subdomains, and I’ll automate it all using Ansible.

Feel free to connect with me on LinkedIn.

Recap

We have previously done:

As a matter of fact, it’s rather daunting nowadays to do Jenkins operations over plain HTTP. I need some kind of privacy, and for that I’ll get a certificate from Let’s Encrypt and use it to enable HTTPS on my domain.

What you’ll need

To be able to enable HTTPS on a domain, you need:

  • A domain
  • A VPS with sudo access. I use Amazon EC2.
  • Some knowledge of nginx and reverse proxy.
  • Some knowledge of ansible (optional).

If you are new to Ansible, please refer to the Recap section.

What we’ll cover

This post is chunked into 3 parts:

You can also find the index on the right of the page if you are reading this post on a desktop.


What are roles?

I didn’t cover roles very well in the previous post. Now that I have more knowledge of them than before, I’ll expand on that post.

Roles are nothing more than an organised directory structure. Every directory has a significance, and this structure helps us deal with provisioning at scale.

Directory structure at the time of Hello World

Previously my ansible directory looked similar to this:

$ tree
.
├── ansible.cfg
├── inventory
├── jenkins.yml
├── nginx.conf
├── nginx.yml
└── README.md

0 directories, 6 files

That’s totally flat.

My nginx.yml looks like this:

---
- name: install and start nginx
  hosts: web
  become: yes

  tasks:
    - name: install nginx
      command: amazon-linux-extras install nginx1 -y
    - name: copy config for jenkins routing
      copy:
        src: nginx.conf
        dest: '/etc/nginx/nginx.conf'
        mode: preserve
      notify: restart nginx
    - name: start nginx
      service:
        name: nginx
        state: started
    - name: get public IP
      uri:
        url: https://api.ipify.org?format=json
        method: GET
      changed_when: false
      register: public_ip
    - name: print public IP
      debug:
        msg: "{{ public_ip.json.ip }}"
  
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted

Directory structure after learning about Roles

Roles are all about directory structures. See my directory tree listing below:

├── ansible.cfg
├── inventory
├── playbooks
│   ├── jenkins.yml
│   └── nginx.yml
├── README.md
└── roles
    ├── jenkins
    │   └── tasks
    │       └── main.yml
    └── nginx
        ├── files
        │   └── nginx.conf
        ├── tasks
        │   └── main.yml
        └── handlers
            └── main.yml

8 directories, 9 files

If I take the example of nginx here, the whole thing is divided into 4 files. I present each file below along with its content.

playbooks/nginx.yml

---
- name: install and start nginx
  become: yes
  hosts: 
    - web
  roles:
    - nginx

A couple things:

  1. This looks similar to our pre-roles era nginx YAML file. Yes! We had the hosts section listed at the top, but now we also have a section called roles. And you know what the good thing about roles is? You can have multiple of them in a single playbook.

  2. Each role listed here is mapped to a directory inside the roles directory.

  3. You can have your roles directory named something else; you just have to override the roles_path config inside ansible.cfg, as shown in the example below.
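For instance, a minimal ansible.cfg pointing at a custom directory could look like this (provisioning_roles is just a hypothetical name; I’m sticking with the default roles/ myself):

[defaults]
# look for roles here instead of the default ./roles
roles_path = ./provisioning_roles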

roles/nginx/tasks/main.yml

This is the first file I’ll discuss. Ordering matters here, as I’m walking through the transition from the pre-roles era monolith YAML file into logical chunks.

First let’s see the contents:

- name: install nginx
  command: amazon-linux-extras install nginx1 -y
- name: copy config
  copy: "src=nginx.conf dest='/etc/nginx/nginx.conf' mode=preserve"
  notify: restart nginx
- name: start nginx
  service: "name=nginx state=started"
- name: get public IP
  uri:
    url: https://api.ipify.org?format=json
    method: GET
  changed_when: false
  register: public_ip
- name: print public IP
  debug:
    msg: "{{ public_ip.json.ip }}"

There is a significant amount of change here. The most noticeable is that I have switched from multi-line declarations to single-line declarations for some modules. This is to preserve space.

Some more changes below:

  1. Note the location of this file in the hierarchy: it sits inside a directory called tasks, in a file called main.yml.
  2. Everything from the tasks section of the monolith nginx.yml is listed here, and nothing more. Only the tasks block belongs here.
  3. main.(yml|yaml) is the default file the Ansible interpreter looks for when scanning a role’s sub-directories. This also means you can have more than one file inside any of a role’s sub-directories.
  4. tasks/main.yml can have conditionals. This helps when you want to write a cross-platform playbook: since apt and yum are two different package managers, you can check the platform and import a platform-specific tasks file from the directory, as shown in the sketch after this list.
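Here is a minimal sketch of that idea; debian.yml and redhat.yml are hypothetical task files sitting next to main.yml:

# roles/nginx/tasks/main.yml (hypothetical cross-platform variant)
- name: include Debian-specific tasks
  include_tasks: debian.yml
  when: ansible_facts['os_family'] == 'Debian'

- name: include RedHat-specific tasks
  include_tasks: redhat.yml
  when: ansible_facts['os_family'] == 'RedHat'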

roles/nginx/handlers/main.yml

Nothing fancy here. We have the handlers section from the monolith written here.

---
- name: restart nginx
  service:
    name: nginx
    state: restarted
    enabled: yes

roles/nginx/files/nginx.conf

We have nothing fancy here. This is the same file we created in the last post.

At the end, I would ask you to refer to the Role directory structure documentation, because files, tasks and handlers are not the only sub-directories allowed inside a role.
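For reference, a fully fleshed-out role can contain the following standard sub-directories (this is the layout described in the Ansible docs; you only create the ones you actually need):

roles/
    nginx/
        tasks/        # main list of tasks the role executes
        handlers/     # handlers, which may be notified by tasks
        files/        # static files deployed via the copy module
        templates/    # Jinja2 templates deployed via the template module
        vars/         # variables for the role
        defaults/     # default variables (lowest precedence)
        meta/         # role dependencies and metadata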

When everything is set up, this is how the whole thing is invoked:

ansible-playbook -i inventory playbooks/nginx.yml

In the same way, I am leaving it to you to create the jenkins role from the monolith playbook; a sketch of the playbook wrapper is shown below.
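As a starting point, the thin playbook wrapper would look almost identical to the nginx one. Here is a sketch of playbooks/jenkins.yml, assuming the tasks land in roles/jenkins/tasks/main.yml as shown in the tree above:

---
- name: install and start jenkins
  become: yes
  hosts:
    - web
  roles:
    - jenkins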

We’ll move to the next section now, which is about enabling HTTPS on nginx. But before you move on, run those playbooks on the host so that nginx and jenkins are installed.


Enabling HTTPS on domain(s), manually

Before I start this section, here is what I already have available:

  • A spare domain called santosh.pictures, which resides on AWS Route53.
  • An EC2 host running nginx, which is publicly facing the world on port 80, and Jenkins, which is reverse-proxied by nginx; Jenkins originally runs on port 8080 but is routed to /.
  • A public hosted zone in which I have a record for ci.santosh.pictures which points to the above nginx instance.

In fact, I can already access the non-HTTPS version of the site when I go to ci.santosh.pictures.
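If you want to confirm this from a terminal rather than a browser, a quick check looks like this (a sketch; the exact status code depends on your Jenkins security settings):

# expect a plain HTTP (not HTTPS) response served by nginx
$ curl -I http://ci.santosh.pictures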

Non-HTTPS version of ci.santosh.pictures

What I want here is to have this Jenkins available on https://ci.santosh.pictures.

Step 1: Install Certbot and Route53 authenticator

Trivia: Both ansible and certbot are written in Python.

What is certbot?

certbot is a program written by EFF to obtain certs from Let’s Encrypt and (optionally) auto-enable HTTPS on your server.

certbot talks to Let’s Encrypt, a certificate authority that issues X.509 certificates, which in turn are used in Internet protocols such as TLS/SSL, the basis of HTTPS, the secure protocol for browsing the web.

Install certbot

I had been using Amazon Linux 2 until now. After a lot of trial and error, I believe it’s more straightforward to install and configure certbot on Debian-based systems because of their relatively newer packages.

Currently I’m using Ubuntu 20.04 so I’ll do this:

sudo apt install certbot

This installs certbot for Python 3. On the other hand, Amazon Linux 2 installs it for Python 2, which kind of messes things up.

Installing certbot would be enough if we were not doing wildcard certs. But that’s not the case here.

ubuntu@ip-10-2-1-10:~$ certbot plugins

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
* standalone
Description: Spin up a temporary webserver
Interfaces: IAuthenticator, IPlugin
Entry point: standalone = certbot.plugins.standalone:Authenticator

* webroot
Description: Place files in webroot directory
Interfaces: IAuthenticator, IPlugin
Entry point: webroot = certbot.plugins.webroot:Authenticator
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Install route53 authenticator

Installing the route53 authenticator plugin is, yet again, simpler on Debian-based systems:

sudo apt install python3-certbot-dns-route53
ubuntu@ip-10-2-1-10:~$ certbot plugins

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
* dns-route53
Description: Obtain certificates using a DNS TXT record (if you are using AWS
Route53 for DNS).
Interfaces: IAuthenticator, IPlugin
Entry point: dns-route53 = certbot_dns_route53.dns_route53:Authenticator

* standalone
Description: Spin up a temporary webserver
Interfaces: IAuthenticator, IPlugin
...

Step 2: Configure Route53 authenticator

Congrats! When you installed the route53 authenticator, the AWS SDK for Python (known as boto) was also installed as a dependency. This is a precursor to the AWS IAM work we’re about to do.

As a best practice, we are going to create an IAM user and give it only the permissions required to perform this operation.

Create User

$ aws iam create-user --user-name certbot-route53
{
    "User": {
        "Path": "/",
        "UserName": "certbot-route53",
        "UserId": "AIDARIMALWMXWEXAMPLES",
        "Arn": "arn:aws:iam::XXXXXXXXXXXX:user/certbot-route53",
        "CreateDate": "2021-11-26T00:14:20+00:00"
    }
}

You need this user to have the following permissions:

  • route53:ListHostedZones
  • route53:GetChange
  • route53:ChangeResourceRecordSets

We’ll create a policy with these permissions and attach that policy to the user.

Create Policy

Create a file called policy.txt and have these lines of JSON written to it.

{
  "Version": "2012-10-17",
  "Id": "certbot-dns-route53 sample policy",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/YOURHOSTEDZONEID"
      ]
    }
  ]
}

Be sure to replace YOURHOSTEDZONEID with your actual hosted zone ID. You can find it on https://console.aws.amazon.com/route53/v2/hostedzones in the last column for your hosted zone, or via the CLI as shown below.
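If you prefer the CLI, something like this lists the zone ID for a given domain (a sketch; note that the returned value carries a /hostedzone/ prefix, which you drop when writing the ARN):

$ aws route53 list-hosted-zones-by-name --dns-name santosh.pictures \
    --query 'HostedZones[0].Id' --output text
/hostedzone/Z0123456789EXAMPLE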

Here I’m creating a policy with the name route53-santosh.pictures:

$ aws iam create-policy --policy-name route53-santosh.pictures --policy-document file://policy.txt
{
    "Policy": {
        "PolicyName": "route53-santosh.pictures",
        "PolicyId": "ANPARIMALWEXAMPLE4DPU",
        "Arn": "arn:aws:iam::XXXXXXXXXXXX:policy/route53-santosh.pictures",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2021-11-26T00:41:24+00:00",
        "UpdateDate": "2021-11-26T00:41:24+00:00"
    }
}

Please make a note of the ARN in the response JSON. We’ll need it in the next step.

Attach IAM Policy to IAM User

We have the policy and the user created. The next step is to attach the policy to the user so that the user has permission to do the Route53 stuff.

$ aws iam attach-user-policy --policy-arn arn:aws:iam::XXXXXXXXXXXX:policy/route53-santosh.pictures --user-name certbot-route53

This command does not respond with anything, but you can check that the policy is attached by invoking list-attached-user-policies and looking for the PolicyName in the output:

$ aws iam list-attached-user-policies --user-name certbot-route53
{
    "AttachedPolicies": [
        {
            "PolicyName": "route53-santosh.pictures",
            "PolicyArn": "arn:aws:iam::XXXXXXXXXXXX:policy/route53-santosh.pictures"
        }
    ]
}

With this done, we can proceed to the next step, which is about creating an access key and putting it in the appropriate place for the AWS SDK to function.

Create access key

We need an access key and a secret access key to programmatically talk to AWS. We can create an access key with the AWS CLI like so:

$ aws iam create-access-key --user-name certbot-route53
{
    "AccessKey": {
        "UserName": "certbot-route53",
        "AccessKeyId": "AKIARIMALWMEXAMPLE73",
        "Status": "Active",
        "SecretAccessKey": "VKD94MARJeztTFJlCWK0F/E6vTaEiPEXAMPLEKEY",
        "CreateDate": "2021-11-26T03:28:29+00:00"
    }
}

Note down AccessKeyId and SecretAccessKey.

Configure AWS SDK

Did I tell you that the AWS SDK for Python is installed as part of installing the Route53 authenticator plugin? For this SDK to work properly, we need to set up the AccessKeyId and SecretAccessKey retrieved in the previous section.

There are quite a few ways we can configure the keys, but I like setting up ~/.aws/config. Here is my config file:

[default]
aws_access_key_id=AKIARIMALWMEXAMPLE73
aws_secret_access_key=VKD94MARJeztTFJlCWK0F/E6vTaEiPEXAMPLEKEY

Note: In the next section we are going to run certbot as root, so put the config file in the $HOME of the root user.
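One way to do that, assuming you created ~/.aws/config as the ubuntu user first:

sudo mkdir -p /root/.aws
sudo cp ~/.aws/config /root/.aws/config
sudo chmod 600 /root/.aws/config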

Step 3: Get certificate for your domain & subdomains from Let’s Encrypt

That was a long marathon of configuring the Route53 authenticator plugin. Next we get the certificate. Please note that I’m only generating the certificate in this step and will configure nginx separately.

Get the certificate

Switch to the root user and make sure the AWS credentials exist by running ls ~/.aws.

ubuntu@ip-10-2-1-10:~$ sudo -i
root@ip-10-2-1-10:~# ls ~/.aws
config

If the output of ls ~/.aws is ls: cannot access '/root/.aws': No such file or directory, please check the end of the last section.

Now let’s proceed with certbot, as we are already in an interactive shell as root. Following is the one-liner I’m going to use.

# certbot certonly --dns-route53 --email '[email protected]' --domain 'santosh.pictures' --domain '*.santosh.pictures' --agree-tos --non-interactive 

Replace [email protected] with your actual email. This email is used to send notifications when the expiration date of your certs is close. The above command can also be shortened to the following:

# certbot certonly --dns-route53 -m '[email protected]' -d 'santosh.pictures' -d '*.santosh.pictures' --agree-tos -n

The output of above command looks something like this:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Credentials found in config file: ~/.aws/config
Plugins selected: Authenticator dns-route53, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for santosh.pictures
dns-01 challenge for santosh.pictures
Waiting for verification...
Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/santosh.pictures/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/santosh.pictures/privkey.pem
   Your cert will expire on 2022-02-24. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Congrats! You have generated TLS certificates to be used with a web server (/etc/letsencrypt/live/santosh.pictures/fullchain.pem). Along with the certs, we also have the private key (/etc/letsencrypt/live/santosh.pictures/privkey.pem).
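You can double-check what was issued at any time with certbot’s listing subcommand; the output below is a trimmed sketch of what it typically looks like:

# certbot certificates
Found the following certs:
  Certificate Name: santosh.pictures
    Domains: santosh.pictures *.santosh.pictures
    Expiry Date: 2022-02-24 (VALID: 89 days)
    Certificate Path: /etc/letsencrypt/live/santosh.pictures/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/santosh.pictures/privkey.pem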

You can find more configuration options for certbot on its documentation page: https://eff-certbot.readthedocs.io/en/stable/using.html
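One more thing worth doing before moving on: Let’s Encrypt certificates are valid for 90 days, and the Ubuntu certbot package ships a systemd timer/cron entry that runs certbot renew periodically. You can verify that an unattended renewal would succeed with a dry run (still as root, so the AWS credentials are found):

# certbot renew --dry-run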

Step 4: Configure nginx with HTTPS

We need to tweak our nginx.conf a little bit. Before that, here is the version of nginx.conf with no HTTPS configured. This is also available at https://github.com/santosh/ansible/blob/v0.1.0/roles/nginx/files/nginx.conf:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass http://localhost:8080/;
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
}

Now, instead of the whole content plus the HTTPS config, I present you the diff:

diff --git a/roles/nginx/files/nginx.conf b/roles/nginx/files/nginx.conf
index f1dd861..c63c37c 100644
--- a/roles/nginx/files/nginx.conf
+++ b/roles/nginx/files/nginx.conf
@@ -40,6 +40,17 @@ http {
         # Load configuration files for the default server block.
         include /etc/nginx/default.d/*.conf;

+        listen       443 ssl;
+
+        ssl_certificate       /etc/letsencrypt/live/santosh.pictures/fullchain.pem;
+        ssl_certificate_key   /etc/letsencrypt/live/santosh.pictures/privkey.pem;
+
+        # redirect non-https traffic to https
+        if ($scheme != "https") {
+            return 301 https://$host$request_uri;
+        }
+
         location / {
             proxy_pass http://localhost:8080/;
         }

Explanation:

  • The added listen 443 ssl; line says that along with listening on port 80, nginx should also listen on port 443 with SSL.
  • The ssl_certificate and ssl_certificate_key directives point to the certificate and private key we fetched in the last section.
  • The if block issues a 301 response to any client visiting the http version, redirecting it to the https version.
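To apply these changes, a common approach is to validate the config and then reload nginx (a minimal sketch; the service manager may differ on your system):

sudo nginx -t                  # check the configuration for syntax errors
sudo systemctl reload nginx    # pick up the new config without dropping connections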

With these changes in place, and after reloading the nginx service, I can see HTTPS enabled on my site.

HTTPS version of ci.santosh.pictures

Automating HTTP to HTTPS transition with Ansible

When I started writing this section, I faced a dilemma. I have an nginx role, then a jenkins role, and then I have a dream of enabling HTTPS. In which role does the automation of this HTTPS setup go? Or do I create some other role for it?

And after giving it a lot of thought, I came to the conclusion that it is not the right time to write about this automation; I’ll cover it when I learn more about Ansible. Topics like variables and ansible-vault are important for securing the repository I’m working with, as it is publicly exposed, and variables are important for the dynamic behaviour this repo needs.

And just as I saved Ansible roles for this post back in the previous post, I’m saving variables and ansible-vault for the next post, where I’ll expand on this topic.

Update: The next post is now out: https://santoshk.dev/posts/2022/automate-https-certificates-with-ansible-roles/

Conclusion

I have reached a milestone in learning Ansible. I started feeling the need for Ansible when I wanted to configure my own Jenkins server. When I configured my Jenkins server, I also realized that without TLS it’s unsafe to do Jenkins operations. This also attracted me to learning more about cyber security in general.

I can proceed with my Jenkins work now. Along with that I’ll keep exploring Ansible and cyber security.

There is a lot more to cover in nginx and Jenkins configuration. I’ll for sure cover it in other posts. Don’t forget to subscribe to the newsletter so that you are notified when a new post comes out.

WRITTEN BY
Santosh Kumar
Santosh is a Software Developer currently working with NuNet as a Full Stack Developer.