Automate Your Elixir Deployments - Part 1 - Ansible

Dorian Karter

This post will guide you through automating a "bare-metal" machine configuration and getting a server ready for building and deploying Elixir / Phoenix applications (with LiveView support!).

There comes a time in every application’s life when, as a developer, you want to share it with the world. There are some easy solutions out there for deploying Elixir, most notably Gigalixir or Heroku. However, when something goes wrong you may want to solve the issue yourself, not get on long support calls or email chains. Knowledge is power, and understanding all the moving pieces of deploying to “bare metal” gives you a powerful skill-set that will allow you to deliver more customized solutions, without the black-box limitations of Platform-as-a-Service software.

In a previous post, I demonstrated how we can automate the creation of infrastructure. This guide will build on that idea.

You can follow along using the companion repository on GitHub.

Target Audience - Who Is This Post For

This post is targeted towards Elixir Developers interested in deploying their application onto a Linux box on a cloud provider of their choice. Automating this process shortens iteration cycles so that the process is easily repeatable. This makes scaling, replicating, and fixing issues a breeze. In the process I hope to demystify "DevOps" and empower developers to be more comfortable with Linux, Nginx and automation tools like Ansible.

We are going to set up a machine with automatically renewable SSL certificates from Let's Encrypt and unattended security upgrades so that maintenance is kept to a bare minimum. In most cases, there shouldn't be a need to SSH into it once everything is up and running.

Prerequisites - Tools We Will Use

In this post we will set up a Debian based machine, in this case Ubuntu, to host our Elixir application, so a prerequisite will be creating one on a cloud platform such as Digital Ocean. I've used a $5/month machine with 1 CPU and 1GB RAM, and it is very capable and sufficient for running a Phoenix application. See my previous post if you want to quickly spin one up.

I recommend setting up your SSH Config file to point to the IP address or domain of the machine we will be deploying to:

Host example
  User root
  HostName example.com # or IP address
  IdentityFile ~/.ssh/your_ssh_key

Throughout this post I've used example and example.com as the project name and domain, respectively. Make sure to replace those in filenames and anywhere else the word example appears.

You will need root access to set up many of the pieces in this tutorial. For simplicity, I am assuming you already have access to the root user.

We will use Ansible, a Configuration Management tool, to set up the machine, install dependencies etc.

Then, to deliver a release of our application to the server, we will use eDeliver with Distillery. While it is possible to use Ansible to deliver the code, I believe in using the best tool for the job. Ansible is great for configuring a machine, but eDeliver and Distillery are better suited to Elixir / Erlang's specific build and delivery requirements.

With all of that out of the way, let's get started!

Setting Up Our Ansible Project

First we will need to create a directory for our Ansible project. In my project I placed it under ansible/.

You will need a configuration file for Ansible. We won't go too deep into how the Ansible configuration works here (you can read more about it in the Ansible documentation); instead we will use some sensible defaults.

Feel free to copy mine and save it into ansible/ansible.cfg:

[defaults]
nocows=1
inventory = inventories/production
log_path = /tmp/ansible.log
retry_files_enabled = True
retry_files_save_path = tmp
roles_path = galaxy_roles:roles
callback_whitelist = timer, profile_tasks
stdout_callback = skippy
gathering = smart

[ssh_connection]
ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m
pipelining = True
control_path = /tmp/ansible-ssh-%%h-%%p-%%r

You will need to create a few directories inside the ansible directory to keep things organized:

$ mkdir -p inventories/group_vars/{all/secret,application/secret} playbooks/templates tmp

This will create a basic directory structure:

❯ tree
.
├── inventories
│   └── group_vars
│       ├── all
│       │   └── secret
│       └── application
│           └── secret
├── playbooks
│   └── templates
└── tmp

NOTE: If you are using git to store your project, I suggest adding ansible/tmp to your .gitignore since Ansible will store some 'retry' files there:

$ echo 'tmp/' >> .gitignore

Next we need to create a main.yml file. This will be the entry point for our script; it will import all of the smaller playbooks and execute them in order.

ansible/main.yml:

---
- hosts: application

  remote_user: root

We have set hosts to application, which is our target group: a group of hosts that the tasks will run on. We will discuss how to set up target groups in the next section.

Inventory

Ansible needs to know about all the machines it will be targeting. It keeps track of these machines in an inventory. For this project, we have one environment, production, and one machine in that environment, example.

Let's create our production inventory in ansible/inventories/production:

[application]
example

We are using the host example which should match the SSH Config entry we added in a previous step, and putting it under the application group.
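
If you'd like a quick sanity check at this point, you can ask Ansible to ping the host with its built-in ping module. Run it from inside the ansible/ directory; it relies on the SSH config entry above and on Python being present on the server (which it is on a stock Ubuntu image):

$ ansible application -m ping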

Configuring Machine Login

In this step we will set up a deploy user that will build, deploy, and run our application. In addition, we will implement a few hardening steps to make your machine's SSH server more resistant to brute-force attacks.

Many worms, scanners, and botnets scan the entire Internet looking for SSH logins, so it's always a good idea to reduce the risk by disabling password authentication over SSH and using a proactive log analyzer such as Fail2Ban.

First we need to create some variables to make things easy to refactor and maintain. In your ansible/inventories/group_vars/all directory, create a new file ansible/inventories/group_vars/all/all.yml:

---
username: deploy
app_name: example
domain: example.com

These variables will be shared across all your Playbooks. Feel free to change them according to your needs; for example, there is no requirement for the user to be called deploy.

Next, create a new Playbook and add some tasks in ansible/playbooks/configure-login.yml:

---
- hosts: application

  remote_user: root

  tasks:
    - name: Create Deploy User
      user:
        name: '{{ username }}'
        createhome: yes
        state: present
        shell: /bin/bash
      register: deployuser

    - name: Disable password for deploy on creation
      # this locks the password for the deploy user, essentially preventing
      # password login for this account
      shell: /usr/bin/passwd -l '{{ username }}'
      # this line tells Ansible to only run this task if the deployuser we
      # defined above has changed
      when: deployuser.changed

    - name: Deploy SSH Key
      authorized_key:
        user: '{{ username }}'
        # you would need to change this line to point to your public key
        key: "{{ lookup('file', '~/.ssh/your_ssh_key.pub') }}"
        state: present

    - name: Disable Password Authentication
      lineinfile:
        # completely disables password authentication for ssh, so make sure your
        # root user is set up to connect with a key, not a password!
        dest: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication'
        line: "PasswordAuthentication no"
        state: present
        backup: yes
      notify: restart ssh

  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted

A really cool feature of Ansible is its ability to replace a line in a file (see lineinfile above under "Disable Password Authentication"). It lets you run a sed-like search with a regex and replace a line, but it only makes a change if the desired line is not already present. You can also pass it backup: yes to create a backup file, just in case the replacement does not go as planned.

Also notice the handlers section. You can think of those as functions that are reusable throughout your Playbook. In this case we created one for restarting the sshd service after making changes to its configuration file, and we call it in the notify action of the "Disable Password Authentication" task.

Notify actions will only be triggered once even if notified by multiple different tasks.

Finally, we'll import that playbook in our ansible/main.yml file:

---
- hosts: application

  remote_user: root

- name: Configure Machine Login
  import_playbook: playbooks/configure-login.yml

To test everything, run:

$ ansible-playbook main.yml

If everything worked you should be able to log into the machine using our newly created user like so:

$ ssh deploy@example -i ~/.ssh/your_ssh_key
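
You can also verify that password authentication is really disabled. The exact error message varies between OpenSSH versions, but the server should refuse the attempt instead of prompting for a password:

$ ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password deploy@example
# expect something like: Permission denied (publickey).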

Install Packages

Our next step is to install some packages from the operating system package manager, in this case apt on Ubuntu. Some of those are optional, so feel free to drop them, and depending on your use case you may want to add more. Ansible comes with built-in support for apt so installing packages is a breeze.

We will also utilize Ansible Roles, more specifically Galaxy Roles. You can think of Roles as packages / dependencies; they are groupings of Ansible vars, tasks and handlers. Ansible Galaxy is Ansible's package repository where you can find many different roles for automating common complex tasks.

First, let's create our new Playbook in ansible/playbooks/install-packages.yml:

---
- hosts: application
  vars:
    - packages:
      # Scans system access logs and bans IPs that show malicious signs
      - fail2ban
      # For building with eDeliver
      - git
      # For compiling assets using webpack
      - nodejs
      - npm
      # For reverse proxy into our application
      - nginx

  remote_user: root

  tasks:
    - name: Update APT package cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Install required packages
      apt:
        state: present
        pkg: "{{ packages }}"

    - name: Check if Erlang is Installed
      command: dpkg-query -W esl-erlang
      register: erlang_check_deb
      failed_when: erlang_check_deb.rc > 1
      changed_when: erlang_check_deb.rc == 1

    - name: Download erlang.deb
      get_url:
        url: "https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb"
        dest: "/home/{{ username }}/erlang-solutions_1.0_all.deb"
      when: erlang_check_deb.rc == 1

    - name: Install erlang-solutions deb package
      apt:
        deb: "/home/{{ username }}/erlang-solutions_1.0_all.deb"
      when: erlang_check_deb.rc == 1

    - name: Install erlang and elixir
      apt:
        update_cache: yes
        state: present
        pkg:
          - esl-erlang
          - elixir
      when: erlang_check_deb.rc == 1

    - name: Install Hex
      command: mix local.hex --force
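      # the string below can never equal 1, so this task runs every time but
      # always reports "ok" instead of "changed"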
      changed_when: >
        "Will always run, don't show that it changed" == 1

  roles:
    - role: jnv.unattended-upgrades
      unattended_origins_patterns:
      - 'origin=Ubuntu,archive=${distro_codename}-security'
      unattended_automatic_reboot: true
      unattended_automatic_reboot_time: '09:00'
      unattended_mail: "{{ admin_email }}"

Most of the tasks above are documented by their names, but in general this Playbook updates the apt package database and installs some packages, including Erlang and Elixir. Some of those packages are optional, so you should examine the list and modify it according to your needs.

To make it work we need to add a variable to our ansible/inventories/group_vars/all/all.yml file that we defined earlier, specifically the admin_email variable:

---
username: deploy
app_name: example
domain: example.com
admin_email: admin@example.com

If you are going to commit this to a public repository, you may not want to expose your email in clear text. In a future step we will look at how we can utilize Ansible Vault to store variables such as this one in an encrypted file.

We will also need to install the Galaxy Role we referenced at the bottom of the file, jnv.unattended-upgrades. From inside your ansible/ directory, run:

$ ansible-galaxy install jnv.unattended-upgrades

As always when using a dependency of this nature, skim through the code to give yourself confidence that the code isn't doing something unsafe.

This role will install the unattended-upgrades package, which will keep your server up to date with security updates.

Notice that we have enabled unattended_automatic_reboot which will reboot the machine at 9am UTC if any of the security updates installed requires a restart. If automated restarts are not acceptable in your case, you may want to remove the two related configurations, and restart manually.

Let's add the Playbook to our ansible/main.yml file:

---
- hosts: application

  remote_user: root

- name: Configure Machine Login
  import_playbook: playbooks/configure-login.yml

- name: Install Packages
  import_playbook: playbooks/install-packages.yml

Now re-run Ansible:

$ ansible-playbook main.yml

You'll notice that the tasks from the configure-login.yml Playbook report no changes this time, since Ansible tasks are idempotent and can determine that those changes have already been applied.
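
As main.yml grows, you may not always want to run everything. These ansible-playbook flags are optional but handy; check mode in particular is not supported by every module, so treat its output as a rough preview:

# target a single host from the inventory
$ ansible-playbook main.yml --limit example

# preview what would change without applying anything
$ ansible-playbook main.yml --check --diff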

Application Deployment Setup

In this section we will prepare the server for deployment using eDeliver and Distillery (which we will cover in the next post).

We'll start by creating a new Playbook in ansible/playbooks/application-deployment-setup.yml:

---
- hosts: application

  remote_user: root

  tasks:
    - name: Create .env file
      template:
        src: "{{ app_name }}.env"
        dest: "/home/{{ username }}/{{ app_name }}.env"
        owner: "{{ username }}"
        group: "{{ username }}"

    - name: Source .env file in user profile
      lineinfile:
        dest: '/home/{{ username }}/.profile'
        regexp: '^\. "\$HOME/{{ app_name }}.env"'
        line: '. "$HOME/{{ app_name }}.env"'
        state: present
        backup: yes

    - name: Ensures shared/config dir exists
      file:
        path: "/home/{{ username }}/app_config"
        state: directory
        owner: "{{ username }}"
        group: "{{ username }}"

    - name: Copy prod.secret.exs with owner and permissions
      copy:
        src: ../../config/prod.secret.exs
        dest: "/home/{{ username }}/app_config/prod.secret.exs"
        owner: "{{ username }}"
        group: "{{ username }}"

    - name: Create Systemd Init Script
      template:
        src: "{{ app_name }}.service"
        dest: "/etc/systemd/system/{{ app_name }}.service"

    - name: Enable Systemd service for application
      systemd:
        name: "{{ app_name }}"
        enabled: yes

We need to create a few templates for this Playbook, starting with the .env file. This is where you'll store environment variables needed by your application during the build process and at runtime. Create ansible/playbooks/templates/example.env:

export SECRET_KEY_BASE='{{ secret_key_base }}'
export ERLANG_COOKIE='{{ erlang_cookie }}'

This file will be copied into your deploy user's home directory, sourced in .profile, and loaded by the systemd service. Next, we'll define the variables used in the template in Ansible Vault:

$ ansible-vault create inventories/group_vars/application/secret/phoenix.yml

This will ask for a password to use when encrypting your secret variables. Once the password is entered, it will open the text editor defined in $EDITOR and allow you to edit the file. The convention is to use the secret_ prefix for encrypted variables:

---
secret_example_secret_key_base: super secret stuff here
secret_example_erlang_cookie: it's best to generate these values using mix phx.gen.secret
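
As the placeholder values hint, mix phx.gen.secret is a convenient way to generate a long random string for each of these. Run it from your project root once per value and paste the output into the vault:

$ mix phx.gen.secret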

When you save and exit your editor, Ansible Vault will encrypt your secret variables. For discoverability, we will reference them from a regular unencrypted variable file, ansible/inventories/group_vars/application/phoenix.yml:

---
secret_key_base: "{{ secret_example_secret_key_base }}"
erlang_cookie: "{{ secret_example_erlang_cookie }}"

Lastly, we are going to define a template for the systemd service that will ensure our application re-spawns if the server is restarted. Create ansible/playbooks/templates/example.service:

[Unit]
Description={{ app_name }}
After=network.target

[Service]
User={{ username }}
Restart=on-failure

Type=forking
Environment=MIX_ENV=prod
EnvironmentFile=/home/{{ username }}/{{ app_name }}.env
ExecStart=/home/{{ username }}/app_release/{{ app_name }}/bin/{{ app_name }} start
ExecStop=/home/{{ username }}/app_release/{{ app_name }}/bin/{{ app_name }} stop

[Install]
WantedBy=multi-user.target
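
Nothing will actually run under this unit until we deploy a release (covered in the next post), but once that happens you can inspect the service on the server with the usual systemd tooling, for example:

$ systemctl status example
$ journalctl -u example --since "1 hour ago"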

Add our new Playbook to the ansible/main.yml file:

---
- hosts: application

  remote_user: root

- name: Configure Machine Login
  import_playbook: playbooks/configure-login.yml

- name: Install Packages
  import_playbook: playbooks/install-packages.yml

- name: Application Deployment Setup
  import_playbook: playbooks/application-deployment-setup.yml

Now that we have encrypted vault secrets, we need to tell Ansible how to decrypt them. There are a few ways of doing that. You can tell Ansible to prompt you for the vault password before running:

$ ansible-playbook main.yml --ask-vault-pass

Or, my preferred method, store the password in a plain-text file (that should NEVER be committed to git) and tell Ansible where that file is:

$ ansible-playbook main.yml --vault-password-file .vault-password

Ignore that file in git:

$ echo '.vault-password' >> .gitignore
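
If you later need to change one of the encrypted values, ansible-vault can re-open the file using the same password file, for example:

$ ansible-vault edit inventories/group_vars/application/secret/phoenix.yml --vault-password-file .vault-password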

To make your life a bit easier, you may want to create a mix alias in the root of your project under mix.exs:

  def project do
    [
      aliases: aliases(),
      # ...
    ]
  end

  defp aliases do
    [
      ansible: &run_ansible/1,
      # ...
    ]
  end

  defp run_ansible(_) do
    Mix.shell().cmd(
      "cd ansible/ && ANSIBLE_FORCE_COLOR=True ansible-playbook main.yml --vault-password-file .vault-password"
    )
  end

Now to re-run the Ansible script you can simply run this command from the root of your project:

$ mix ansible

Setting Up Auto-Renewing SSL Certificates With Let's Encrypt

To set up an SSL certificate we will once again use a Galaxy Role. This will automate the certificate renewal and take care of verification for us.

First let's install the role from Ansible Galaxy. From inside our ansible/ directory run:

$ ansible-galaxy install geerlingguy.certbot

As before, make sure you skim through the code to ensure it is safe to run.

Next we will create our Playbook in ansible/playbooks/lets-encrypt.yml:

---
- hosts: application
  vars:
    - certbot_auto_renew: true
    - certbot_auto_renew_user: "root"
    - certbot_auto_renew_hour: "3"
    - certbot_auto_renew_minute: "30"
    - certbot_auto_renew_options: "--quiet --no-self-upgrade"
    - certbot_create_if_missing: true
    - certbot_admin_email: "{{ admin_email }}"
    - certbot_create_method: standalone
    - certbot_create_standalone_stop_services:
      - nginx
    - certbot_certs:
      - domains:
          - "{{ domain }}"
          - "www.{{ domain }}"

  remote_user: root

  roles:
    - geerlingguy.certbot

We need to give Certbot an administrator email address. If you remember, in a previous step we typed the email in clear text in inventories/group_vars/all/all.yml. This time we are going to move it into a vault so that it is not exposed. Run this command to create a new vault for the all group:

$ ansible-vault create inventories/group_vars/all/secret/all.yml

Enter a password for the vault. In the editor that opens write your email address:

---
secret_admin_email: "admin@example.com"

And replace the previous value in the unencrypted inventories/group_vars/all/all.yml:

admin_email: "{{ secret_admin_email }}"

Make sure all the settings seem reasonable, and run Ansible:

$ mix ansible

Now, if your domain is configured correctly and pointing at your server's IP address, you should see two files on the remote machine: /etc/letsencrypt/live/example.com/fullchain.pem and /etc/letsencrypt/live/example.com/privkey.pem. In the next section we will configure Nginx to use those files.
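
If you want to double-check, you can list the directory over SSH (this uses the root SSH config entry from earlier):

$ ssh example ls /etc/letsencrypt/live/example.com/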

Setting Up Nginx

To serve our application we will use Nginx - this is not a necessary step, but it makes setting up SSL certificates with Let's Encrypt a little easier.

Let's start with the Playbook. Create a file at ansible/playbooks/nginx.yml:

---
- hosts: application

  remote_user: root

  tasks:
    - name: Remove the default nginx app's config
      file:
        path: /etc/nginx/sites-available/default
        state: absent

    - name: Remove the default nginx app's symlink if it exists
      file:
        path: /etc/nginx/sites-enabled/default
        state: absent

    - name: Copy nginx.conf
      template:
        src: nginx.conf
        dest: /etc/nginx/nginx.conf

    - name: Ensure Nginx Modules dir exists
      file:
        path: /etc/nginx/modules
        state: directory

    - name: Nginx SSL Shared Settings Module
      template:
        src: "{{ app_name }}_shared_ssl_settings"
        dest: /etc/nginx/modules/{{ app_name }}_shared_ssl_settings

    - name: Configure nginx for the app
      template:
        src: "{{ app_name }}.nginx"
        dest: "/etc/nginx/sites-available/{{ app_name }}"
        group: "{{ username }}"
        owner: "{{ username }}"
        force: yes

    - name: Enable the app
      file:
        src: "/etc/nginx/sites-available/{{ app_name }}"
        dest: "/etc/nginx/sites-enabled/{{ app_name }}"
        state: link
        owner: "{{ username }}"
        group: "{{ username }}"

    - name: Restart nginx
      service:
        name: nginx
        state: restarted
      changed_when: >
        "Will always run, don't show that it changed" == 1

In this Playbook we delete some of the defaults that Nginx ships with and replace them with our own. You will need to create the following template at ansible/playbooks/templates/nginx.conf:

env PATH;
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;

events {
  worker_connections 1024;
  multi_accept on;
}

http {
  # Basic Settings

  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  server_tokens off;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # Cache

  open_file_cache max=1000 inactive=20s;
  open_file_cache_valid 30s;
  open_file_cache_min_uses 5;
  open_file_cache_errors off;

  # Logging Settings

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  # Gzip Settings

  gzip on;
  gzip_types       application/json;

  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}

This is not too interesting so we won't go through it. The only relevant line here is the one that includes all files in /etc/nginx/sites-enabled/. Our Playbook creates a file in /etc/nginx/sites-available/ and symlinks it to sites-enabled. Let's create the template for this file in ansible/playbooks/templates/example.nginx:

upstream {{app_name}} {
  server 127.0.0.1:4000;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
  listen 80 default_server;
  listen [::]:80 default_server ipv6only=on;
  server_name {{domain}} www.{{domain}};
  return 301 https://{{domain}}$request_uri;
}

server {
  server_name www.{{domain}};

  include modules/{{ app_name }}_shared_ssl_settings;

  return 301 https://{{domain}}$request_uri;
}

server {
  server_name {{domain}} www.{{domain}};
  root /home/{{username}}/app_release/static;

  include modules/{{ app_name }}_shared_ssl_settings;

  location / {
    proxy_pass       http://{{app_name}};
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_buffering  off;
  }

  location /live {
    proxy_pass         http://{{app_name}}$request_uri;
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection "Upgrade";
    proxy_set_header   Host $host;
  }
}

This configuration listens for connections to your domain (in our case example.com) on ports 80 and 443. It permanently redirects http connections to their more secure counterpart (https), and redirects www.example.com to the bare domain.

The /live location is set up to support WebSockets by upgrading the connection; specifically, the /live path is used by Phoenix LiveView, which is my favorite new feature in Phoenix 1.5.

You might be wondering where the 443 port and the SSL cert paths are defined. To DRY up the configuration, we extracted the SSL settings into a shared Nginx module, which lets us reuse them in both the www. redirect and the main server block. You'll need to create this template next.

In ansible/playbooks/templates/example_shared_ssl_settings place the following:

listen 443 ssl http2;
listen [::]:443;

ssl_certificate           /etc/letsencrypt/live/{{domain}}/fullchain.pem;
ssl_certificate_key       /etc/letsencrypt/live/{{domain}}/privkey.pem;

# TLS
ssl on;
ssl_session_cache         shared:SSL:20m;
ssl_session_timeout       10m;
ssl_protocols             TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers               ECDH+AESGCM:ECDH+AES256:ECDH+AES128:!DH+3DES:!ADH:!AECDH:!MD5;

# HSTS
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

# Secure Headers
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-Permitted-Cross-Domain-Policies none;
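
Once Ansible has rendered these templates onto the server (we run it below), nginx -t is a cheap way to confirm that the combined configuration parses cleanly before you rely on it:

$ ssh example nginx -t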

Finally, we'll import the Playbook into our ansible/main.yml:

---
- hosts: application

  remote_user: root

- name: Configure Machine Login
  import_playbook: playbooks/configure-login.yml

- name: Install Packages
  import_playbook: playbooks/install-packages.yml

- name: Application Deployment Setup
  import_playbook: playbooks/application-deployment-setup.yml

- name: Let's Encrypt SSL Setup
  import_playbook: playbooks/lets-encrypt.yml

- name: Setup Nginx
  import_playbook: playbooks/nginx.yml

Now run the script:

$ mix ansible

And we're done with Ansible! Your server is now ready to receive build commands and run releases.
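
Assuming DNS for example.com is already pointing at the machine, you can give the Nginx redirects a quick check from your local machine; the exact headers may vary, but you should see a 301 pointing at https:

$ curl -sI http://www.example.com
# expect: HTTP/1.1 301 Moved Permanently, with Location: https://example.com/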

If you re-run mix ansible, you'll notice there are no changes and the process should take less than a minute to complete.

Conclusion

In this post we took an initial step towards automating deployment of Elixir applications. Automating this process may take longer than doing it manually, at least initially. However, in the long run it allows for faster iteration on configuration changes, and it allows us to blow away the machine and spin up another one in seconds. In addition, your Ansible files serve as documentation of what it takes to run your application in production.

In the next post we will set up releases with Distillery and deploy them using eDeliver.

Thanks for reading!

At Hashrocket, we love Elixir, Phoenix and LiveView and have years of experience delivering robust, well-tested Elixir applications to production. Reach out if you need help with your Elixir project!

Photo by Markus Spiske on Unsplash
