Building the Ruby on Rails API application
Because the release of a new major Rails version (version 5) is so close, and this new version has rails-api integrated into it, which fits my use case perfectly, I decided to start experimenting with Rails 5 beta 3 already.
I already had Ruby 2.3.0 installed via rbenv, and installed the latest beta of Rails with these commands:
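Installing a prerelease Rails boils down to passing `--pre` to RubyGems (the exact beta version you get depends on when you run it):

```shell
# Install the newest prerelease (beta) version of Rails
gem install rails --pre

# Verify which version was installed
rails --version
```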
Building a new Rails app
After Rails was installed I created a new Rails application with
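The command was along these lines; the `--api` flag (new in Rails 5) generates the slimmed-down API-only application, and the database flag is my assumption based on the MySQL modeling described below:

```shell
# Generate a new API-only Rails application backed by MySQL
rails new memoria-api --api --database=mysql
```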
As a result you get a minimal Rails application to start building on. All the default configuration is already in place (and with Rails' "convention over configuration", that's not much).
I started out designing the database model for my application, first on a piece of paper, then with MySQL Workbench. The requirements for the application can be found in the project plan, and based on those I decided on a model:
To translate this model into code, I used Rails' scaffolding features and ran commands like these to create my ActiveRecord models and controllers:
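The commands looked roughly like this; the model names and attribute lists are illustrative, pieced together from the models discussed in this post (users with a house type, appliances, tasks), not the exact originals:

```shell
# Each scaffold generates a model, a controller, a migration, fixtures and tests
bin/rails generate scaffold User name:string email:string house_type:integer
bin/rails generate scaffold Appliance name:string user:references
bin/rails generate scaffold Task description:string appliance:references
```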
This already gives you a good starting point with basic models, controllers, fixtures and unit tests in place.
Nice ActiveRecord enumerations
I identified several places where I could make use of Rails' enumerations, where you save an integer to the database and have Rails translate it to a meaningful word on the fly. So I added ActiveRecord enums to my models:
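In the model this looks something like the following sketch; apart from `apartment` mapping to the integer 10 (mentioned below), the values are made up for illustration:

```ruby
# app/models/user.rb
class User < ApplicationRecord
  # Stored in the database as an integer, exposed in code as a symbol/string.
  # Only the apartment => 10 mapping is from this post; the rest are examples.
  enum house_type: { apartment: 10, detached_house: 20, row_house: 30 }
end
```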
Doing this early on lets me write cleaner code later. For example, when saving a user I can just do `user.house_type = :apartment` and it gets saved to the database as the integer 10. When fetching the value again, the integer is automatically translated back, so I can just ask for the house type and get the answer in a human-readable format:
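The round trip looks like this (a sketch assuming a `house_type` enum is declared on the User model):

```ruby
user = User.new
user.house_type = :apartment
user.save!

user.house_type              # => "apartment", not the raw integer
User.house_types[:apartment] # => 10, the integer actually stored
```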
Running the application locally
After running the database migrations with `bundle exec rake db:create db:migrate`, the application can be started with `bundle exec rails server`. That starts the default server (Puma in Rails 5) on the default port (3000).
Deployment to production
Rails 5 isn’t available as an official Docker image yet, so I created my own using the official Ruby image as the base, and just installing Rails with
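The base image boils down to a Dockerfile like this (a sketch: the Ruby tag and the exact gem command are what I would expect, not copied from the original repository):

```dockerfile
# Based on the official Ruby image
FROM ruby:2.3

# Install the Rails 5 prerelease on top of it
RUN gem install rails --pre
```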
I pushed this as a public repository to Docker Hub so it can easily be used anywhere. Using this as a base, I created a simple Dockerfile for my own application, memoria-api.
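The application's Dockerfile is then quite minimal, something along these lines (the base image name is hypothetical, and the steps are a typical Rails Dockerfile rather than the exact original):

```dockerfile
# Use the Rails 5 base image described above (name is illustrative)
FROM jannewaren/rails5

# Copy the application code and install its gems
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .

# Rails listens on port 3000 by default
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```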
Building the Docker image
There are basically two options to build your Docker images, manually or automatically. It’s a good idea to practise the manual build first to get an understanding of what’s going on, but after that I suggest having automatic builds in one way or another.
Building and pushing a Docker image manually
Docker images can be built manually on any computer running Docker. To build the image, run `docker build -t memoria-api .`. After this you can run `docker images` to see all the Docker images, with their IDs, that have been fetched or built on your local machine.
Before pushing the image to Docker Hub, you need to create a new repository there. Just create an account and log in on your local machine with `docker login --email="email@example.com" --username="your_username" --password="your_password"` and you're ready to continue.
Look at your Docker images and identify the one you just built (it should be easy: it's at the top):
Tag your image with `docker tag 524f0cb714b3 jannewaren/memoria-api:latest` and you're ready to push it with `docker push jannewaren/memoria-api`.
Automatic Docker images
You can build your images automatically with any reasonable CI service (for example Gitlab CI) in a small script (that just does the steps described in the previous chapter). But there is also an easier way: have Docker Hub build it for you every time you push code to your git repository.
Instead of creating an image yourself and uploading it, you can link your Docker Hub account with your Github account, and have Docker Hub build a new image every time code is pushed.
To do this you have to first delete your manually built repository from Docker Hub (click on the repository -> Settings -> Delete repository). Then go to Linked Accounts & Services and link your Github account. After that’s done, you can go ahead and find the inconveniently located “Create automated build” link:
You can choose any Github repository you have access to; just make sure the repository has a Dockerfile at its root level, so Docker Hub knows what to build.
After you have successfully added the repository, you can just make any commit and push it to Github, and watch Docker Hub start building your image immediately.
This way you don't have to worry about building, tagging and pushing the image. Just make sure you're only pushing tested and working code to your stable branch (usually master). You can map specific git branches to specific Docker image tags when creating the automated build on Docker Hub. For example, you can build an image tagged "staging" whenever code is pushed to the "dev" branch, and build "latest" or "stable" when code is pushed to master.
Deploying with Docker Cloud
Docker Cloud is a new managed service from Docker to build, ship and run dockerized applications on different platforms. Currently it supports provisioning machines on Amazon Web Services, Microsoft Azure, DigitalOcean, SoftLayer and Packet. I will be using DigitalOcean.
It costs $15 per node per month, so I'm not sure I'll be using it in the long run, but it's really easy to get started with, so I will at least try it out.
I recommend playing around with services, stacks, nodes and repositories in Docker Cloud first, and letting it create your Stackfile automatically. Once everything is set up, the Stackfile is the place where all your services, settings and variables are stored.
My Stackfile for the Memoria application currently looks like this (with the secrets removed of course):
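Reconstructed from the description of the two services, the Stackfile is along these lines (YAML in Docker Compose style; the exact keys are my best guess, and the secrets are placeholders):

```yaml
api:
  image: 'jannewaren/memoria-api:latest'
  autoredeploy: true
  links:
    - db
  ports:
    - '80:3000'
  environment:
    - MEMORIA_DB_PASSWORD=<secret>
db:
  image: 'mariadb:latest'
  environment:
    - MYSQL_ROOT_PASSWORD=<secret>
```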
This creates two services, api and db. db is linked to api (so it's visible inside the api container under the hostname db), and api's port 3000 (the Rails default) is exposed to the public via port 80 on the host machine. So the application can be reached by going to the host machine's IP address on port 80, which in my case is for example http://memoria.jkw.fi/appliances.
The api service uses the image jannewaren/memoria-api:latest, and with autoredeploy: true it's automatically re-deployed every time a new image is pushed to Docker Hub. In the previous chapter I showed how to link Github to Docker Hub; all together this means there are absolutely no manual steps or work to be done for deployments. All you have to do is push your code to the Github branch "master", and everything is triggered from that push. Really cool!
The db service uses the official MariaDB Docker image and needs only minimal setup: the environment variable MYSQL_ROOT_PASSWORD sets the root password, and you then use that same password on the client side, in my case in the api service via the environment variable MEMORIA_DB_PASSWORD, which Rails reads.
This setup uses MySQL's root user, so it could really use some hardening. On the other hand, this database container is only used by one service and application, not many.
Running database migrations in production
One problem I had was with database migrations on the Docker Cloud hosted production server. My Stackfile uses the official "empty" MariaDB Docker image, not a pre-built image of my own, so the database and its tables simply don't exist in the beginning. If you try to access the application before creating them, you'll see an error like "Migrations are pending. To resolve this issue, run: bin/rake db:migrate RAILS_ENV=production" in the logs.
With Ruby on Rails, the database structure and its creation are usually managed by the Rails app itself; there's no need to run any SQL statements manually on the database machine.
Rails keeps track of the database schema with something called migrations. The normal workflow is that you do not change existing migrations; instead you always create a new one. In the beginning of the development cycle you of course create some brand new tables, but after a while you'll usually need to change something you did previously: adding a new column to a table or changing the data types of existing ones.
My migrations look like this:
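Each one is a small Ruby class; for example, the users migration would look roughly like this (the column list is illustrative, not the exact original):

```ruby
# db/migrate/20160401000000_create_users.rb
class CreateUsers < ActiveRecord::Migration[5.0]
  def change
    create_table :users do |t|
      t.string :name
      t.string :email
      # Integer-backed column for an ActiveRecord enum
      t.integer :house_type

      t.timestamps
    end
  end
end
```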
So there's one for creating each table. Running the migrations is as simple as running `bin/rake db:create db:migrate`, but you also need to specify your Rails environment with RAILS_ENV, which determines which environment in database.yml the migrations are run against. So `bundle exec rake db:create db:migrate RAILS_ENV=production` works all the way.
With Docker Cloud, giving these commands is a bit inconvenient: you have to log in to a web-based terminal that has really bad scrolling and copy-paste support, but it's doable:
This was the first status check of my project. The basics are working pretty nicely, but there's still a lot to finish:
- Seeding the production database
- User registration / authentication
- Creating the Tasks for users
- Sending the reminders as e-mails and push notifications
- Polishing for real production use (logging, monitoring, bundle installing “the right way”)
I'm a little behind schedule; according to the timetable I'm supposed to be working on the iOS and Android push notifications by now.
I will cover these topics in the next post. Thank you for reading this far!