A walk through why and how I created Gatling, a deployment tool for Phoenix applications.
For the past few months I've been programming in Elixir. From my experience, it offers all the goodies of a functional language with the same amount of joy offered by Ruby's pretty syntax and developer-centric design. I know functional languages have been around for a long time, but the thing that piqued my interest about Elixir was Phoenix. You see, I'm a web developer; I love building web apps. I enjoy the whole process, from the planning and wireframing, to the design, to the HTML, CSS, and JavaScript (holla at your Ember), and finally to the server. As I came across other languages, I typically had two thoughts:
My first thought was that they were ugly. I hear once you really start learning and getting comfortable with a language, you don't notice all the `{}`s and `()`s, but at first, they're ugly.
Secondly, when I explored other languages, I asked myself: will this let me create web apps with the same development speed and joy as Rails? I never felt I could answer yes to that question until I discovered Phoenix.
After exploring what Phoenix had to offer I decided to give it a fair shake. I started developing my personal sites with a Phoenix back end and Ember front end. So far, it's been a great experience. I enjoy having the power to create flexible web apps that are backed by an API that is small, fast, and maintainable. Plus it's just really fun to write Elixir.
As I moved further along in creating a few Phoenix apps, there was a thought that kept growing in the back of my mind: how am I going to deploy these things?
Deploying my Phoenix Apps
The first time I deployed a Phoenix app was completely manual. I took the following steps:
- Set up a server on DigitalOcean
- Install Nginx, Elixir, and set up a Git repository
- Push my app up and run `mix phoenix.server`
- Configure Nginx to proxy to my app on port 4000
- Create an init.d script that would run `mix phoenix.server` if my server restarted
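For reference, that proxy step can be sketched as an Nginx config fragment; the domain and filename here are illustrative, not from the original setup:

```
# /etc/nginx/sites-available/my_app (illustrative names and domain)
server {
  listen 80;
  server_name example.com;

  location / {
    # Forward all traffic to the Phoenix server on port 4000
    proxy_pass http://127.0.0.1:4000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}
```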
And that was about it. But this felt inconvenient coming from Rails, where I could just `git push` to Heroku and it would do everything for me.
I initially considered using Heroku but I had a problem with the way Heroku orchestrates a deploy. Here is an oversimplified process for a Heroku/Rails deploy:
- Receive a git push of your Rails application
- Install your gems
- Precompile your assets
- Create a slug of your application
- Launch that slug on a new dyno
- Proxy web traffic to the new dyno
This all seems fine and dandy until those last two steps. Launching your application on a different instance and changing the proxy to direct there doesn't embrace the power of the Erlang VM, especially when there are tools out there like Exrm that can perform an upgrade of your running application with no downtime (a.k.a. a hot upgrade).
My desire for a Heroku-like experience for Phoenix apps led me to create Gatling, which works like the following:
- Create your own Ubuntu 16.04 server on DigitalOcean or AWS
- Install Gatling via `mix archive.install`
- Install Nginx
- Use Gatling to create an empty Git repository with a `post-update` git hook
Now, you can git push to this new remote repository. After running the initial deploy command on that application, all subsequent git pushes will automatically create an Exrm release of your app, and perform a hot upgrade that feels just like Heroku, but with more of an Elixir philosophy.
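The mechanism underneath this is an ordinary git hook. Here's a minimal, self-contained sketch (repo names and the echoed message are made up) showing how a `post-update` hook on a bare repository fires on every push and streams its output back to the pushing client:

```shell
#!/bin/sh
# Demo: a bare repo whose post-update hook runs after each push --
# the same mechanism Gatling uses to trigger mix gatling.receive.
set -e
tmp=$(mktemp -d)

git init --bare -q "$tmp/app.git"
cat > "$tmp/app.git/hooks/post-update" <<'EOF'
#!/bin/sh
unset GIT_DIR
echo "would run: mix gatling.receive app"
EOF
chmod +x "$tmp/app.git/hooks/post-update"

git init -q "$tmp/work"
cd "$tmp/work"
git -c user.email=you@example.com -c user.name=you \
  commit --allow-empty -m "initial" -q

# The hook's output comes back over the push channel as "remote: ..." lines:
git push "$tmp/app.git" HEAD:refs/heads/master 2>&1 | grep "gatling.receive"
```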
How It's Made
Let's break down what I wanted Gatling to do and how I implemented it in Elixir.
Mix Tasks, File Manipulation and System Commands
When I began working, I performed a manual deploy of a Phoenix app using Exrm and recorded all my steps (most, if not all, of these steps can also be found in the Exrm and Phoenix documentation):
$ cd path/to/project #project has exrm as a dependency
$ MIX_ENV=prod mix deps.get #download dependencies
$ MIX_ENV=prod mix compile #compile (in prod you have to manually compile)
$ MIX_ENV=prod mix phoenix.digest
$ MIX_ENV=prod mix ecto.setup
$ MIX_ENV=prod mix release
$ cp path/to/release path/to/deployment/dir
$ tar -xf path/to/deployment/dir
# Create an nginx.conf file in /etc/nginx/sites-available/<project>
$ sudo ln -s /etc/nginx/sites-available/<project> /etc/nginx/sites-enabled/<project>
# Create an init.d file so we can use the generated shell commands from exrm as a service. Put it in /etc/init.d/<project>
$ sudo update-rc.d <project> defaults
$ sudo service nginx reload
$ sudo service <project> start
Once I had the gist of what I wanted to happen, I needed to create the following mix tasks:
- `mix gatling.load <project>`: Creates a Git repository with a `post-update` git hook so that every time I `git push`, it runs a mix task to upgrade the app to the newest version.
- `mix gatling.receive <project>`: The mix task called by the git hook. It checks whether the project is currently deployed. If it is, it runs `mix gatling.upgrade <project>`; otherwise it does nothing.
- `mix gatling.deploy <project>`: Run manually after the first git push. It performs all the steps outlined above.
- `mix gatling.upgrade <project>`: Creates an "upgrade" release of your project and performs a hot upgrade on the currently running application.
As you can see in the recorded steps, we're really only running a few system commands and moving around a couple of files. Much of the heavy lifting is done by mix tasks that already exist within Phoenix and Exrm. What I wanted was a way to execute those mix tasks and stream the output back to the shell, which would then be streamed to the client that executed the git push. That way the developer can see the progress of the deploy. To accomplish this, I created a small wrapper around Elixir's `System.cmd/3` function:
defmodule Gatling.Bash do
  def log(message) do
    Mix.Shell.IO.info(message)
  end

  def bash(command, args), do: bash(command, args, [])

  def bash(command, args, opts) do
    options =
      [stderr_to_stdout: true, into: IO.stream(:stdio, :line)]
      |> Keyword.merge(opts)

    message =
      if opts[:cd] do
        ["$", command | args] ++ ["(#{opts[:cd]})"]
      else
        ["$", command | args]
      end

    log(Enum.join(message, " "))
    System.cmd(command, args, options)
  end
end
This works the same as `System.cmd`, but by default it streams the output to `:stdio` and logs the passed-in command right before it's run. This provides a little more transparency to Gatling as it performs a deploy.
Now any time we call a system command (which is quite often), we'll use `Gatling.Bash.bash/3` instead.
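As a quick illustration, here's a trimmed, self-contained copy of the wrapper (module name and the `echo` call are mine, for demonstration only):

```elixir
# Trimmed stand-in for Gatling.Bash, reproduced so the snippet runs standalone.
defmodule Demo.Bash do
  def bash(command, args, opts \\ []) do
    options =
      [stderr_to_stdout: true, into: IO.stream(:stdio, :line)]
      |> Keyword.merge(opts)

    # Log the command before running it, just like Gatling.Bash does.
    IO.puts(Enum.join(["$", command | args], " "))
    System.cmd(command, args, options)
  end
end

# Prints "$ echo hello" first, then streams the command's own output.
{_output, 0} = Demo.Bash.bash("echo", ["hello"])
```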
mix gatling.load <project>
This task looks for a project in the place where your apps are deployed. If it's not there, it creates a directory and adds a `post-update` hook that looks like this:
#!/bin/sh
unset GIT_DIR
exec sudo mix gatling.receive <project>
mix gatling.deploy <project>
This task is the meat and potatoes of Gatling and it's where most of the work gets done.
1 defmodule Mix.Tasks.Gatling.Deploy do
2 use Mix.Task
3 import Gatling.Bash
4
5 def run([project]), do: deploy(project)
6
7 def deploy(project) do
8 Gatling.env(project, port: :find)
9 |> mix_deps_get
10 |> mix_compile
11 |> mix_digest
12 |> mix_release
13 |> make_deploy_dir
14 |> copy_release_to_deploy
15 |> expand_release
16 |> install_nginx_site
17 |> install_init_script
18 |> mix_ecto_setup
19 |> start_service
20 end
21 end
Above, the `run` function on line 5 is executed when you run `$ mix gatling.deploy`. The full file can be found here.
Let's explain what's happening here.
Starting on line 8, `Gatling.env(project, port: :find)` returns a map of all the information needed to execute a deploy. This includes paths, file templates, and an available port on the system. This `env` map must be returned by every function and, in turn, passed to the next. This is similar to how Plug works, passing a `Plug.Conn` through each plug.
Now let's look at `mix_deps_get/1`:
def mix_deps_get(env) do
bash("mix", ~w[deps.get], cd: env.build_dir)
env
end
We use the parts of `env` that we need, do our work, and make sure to return it when we're done. Every function in the pipeline behaves this way.
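The shape of this pipeline is easy to demo with plain maps; the module, keys, and step names below are invented for illustration:

```elixir
# Sketch of the env-threading pipeline. A real env would carry paths,
# templates, and a port; here a map with a :log key stands in for it.
defmodule Demo.Pipeline do
  def build_env(project), do: %{project: project, log: []}

  # Each step does its work, then returns env so the next step can use it.
  def step(env, name), do: %{env | log: env.log ++ [name]}

  def deploy(project) do
    build_env(project)
    |> step(:deps_get)
    |> step(:compile)
    |> step(:release)
  end
end

# The env that falls out records every step, in order.
%{log: [:deps_get, :compile, :release]} = Demo.Pipeline.deploy("my_app")
```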
Advantages of this approach
Using a pipeline allows a lot of flexibility around deploying and testing Gatling. We can:
- Pass in a mock `env` and customize our test assertions
- Ensure our test suite doesn't muck up our development system by emulating a production Ubuntu server within Gatling's `test` directory
- Allow for a slew of callbacks before and after each function. In future versions of Gatling, the client will be able to write their own custom steps in between each default step.
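That first point follows from each step only reading the `env` it's handed, so a test can hand it a hand-built map instead of a real server's. A tiny sketch, with invented module names and keys:

```elixir
# Hypothetical step: a real one would shell out inside env.build_dir;
# here we only record that it ran, so a test can assert on it.
defmodule Demo.Steps do
  def mix_deps_get(env) do
    Map.update!(env, :ran, &[:deps_get | &1])
  end
end

# A mock env pointing at a sandbox path instead of a live deploy directory.
mock_env = %{build_dir: "test/sandbox/my_app", ran: []}
%{ran: [:deps_get]} = Demo.Steps.mix_deps_get(mock_env)
```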
mix gatling.upgrade <project>
The upgrade task follows the same pattern, and we even import many of the functions from the deploy task:
defmodule Mix.Tasks.Gatling.Upgrade do
  use Mix.Task

  import Gatling.Bash
  import Mix.Tasks.Gatling.Deploy

  def run([project]) do
    upgrade(project)
  end

  def upgrade(project) do
    Gatling.env(project)
    |> mix_deps_get()
    |> mix_compile()
    |> mix_digest()
    |> mix_release()
    |> make_upgrade_dir()
    |> copy_release_to_upgrade()
    |> upgrade_service()
  end
end
mix gatling.receive <project>
The receive task is called by the git `post-update` hook. It's a very small task with nothing but the `run` function:
def run([project]) do
if File.exists? deploy_dir(project) do
Mix.Tasks.Gatling.Upgrade.upgrade(project)
end
end
We are looking to see if the project has already been deployed. If it has, call the upgrade task.
I've opted not to automatically start the deploy task on the initial git push, since you may need to do some additional setup (like adding your secrets file) before the initial deploy.
Conclusion
So that's all of Gatling as it currently stands. I'm now testing it in production environments to suss out the needs that arise from real-world deployment situations. I'm excited to see if and how people find this useful for deployment.
Future plans for Gatling are to use Exrm's successor, Distillery, for creating the releases, and to add callbacks to each step in the deployment process. I think after these two additions, Gatling should be able to handle a vast array of deployment strategies and offer Heroku-like features at an even lower cost. Look for further posts on this topic as Gatling evolves to handle more use cases.