From Sidekiq Installation to ECS Deployment

Koby
5 min read · Mar 7, 2021

When you are developing software with Rails, you may want to use asynchronous jobs to send emails or to split heavy processing into parallel work.
Rails provides a job-management API called ActiveJob, so there's little reason not to use it.

To enqueue and execute jobs in production you need to set up a queuing backend, that is, you need to decide on a third-party queuing library for Rails to use. Rails itself ships only with an in-process async backend that keeps jobs in RAM: if the process crashes or the machine restarts, all outstanding jobs are lost. This may be fine for smaller apps or non-critical jobs, but most production apps will need a persistent backend.

As stated in the Rails guide on job execution, you'll need a proper queuing backend to use this in production.

In such cases, Sidekiq can be used for job management.
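Wiring Sidekiq in as the ActiveJob backend usually looks roughly like the following sketch (the job class and file locations are illustrative; adjust to your app):

```ruby
# Gemfile
gem "sidekiq"

# config/application.rb (or a per-environment config file)
config.active_job.queue_adapter = :sidekiq

# app/jobs/example_job.rb -- name is illustrative
class ExampleJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    # heavy work here
  end
end

# enqueue from anywhere in the app; the job runs later in a Sidekiq process
ExampleJob.perform_later(user.id)
```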

However, introducing it means more than just installing the gem: you also need to set up docker-compose for the development environment, make the jobs testable in CI, and prepare staging and production environments.
As I had never used Sidekiq before, this was a lot of work for me, so I'm writing up what I did in the hope that it helps the next person who introduces it.
The contents are as follows.

  • Basic knowledge of Sidekiq
  • Preparation of development environment (Dockerfile, docker-compose)
  • Setting up CI (CodeBuild)
  • Deploying Sidekiq containers to ECS (staging/production environment)
  • ElastiCache

Fundamentals of Sidekiq

In Sidekiq the job class is called a worker, while around ActiveJob the term worker usually refers to the thread (or process) that executes jobs, so I was confused at first.
As shown in the figure below, the app container and the sidekiq container enqueue and dequeue jobs via Redis, and the sidekiq container executes them.
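To make the terminology concrete, here is a runnable sketch. A real Sidekiq worker includes Sidekiq::Worker from the gem, and perform_async pushes a JSON payload onto a Redis list instead of executing the job inline; the tiny stub below stands in for the gem and Redis so the snippet runs standalone.

```ruby
require "json"

# Stub of Sidekiq::Worker for illustration only -- the real module comes from
# the sidekiq gem and pushes the payload onto a Redis list.
module Sidekiq
  module Worker
    def self.included(base)
      base.extend(ClassMethods)
    end

    module ClassMethods
      # Mimics Sidekiq's API: enqueue a JSON payload, do NOT run the job inline.
      def perform_async(*args)
        payload = JSON.generate({ "class" => name, "args" => args })
        queue << payload
        payload
      end

      # In-memory stand-in for the Redis list backing this worker's queue.
      def queue
        @queue ||= []
      end
    end
  end
end

# In Sidekiq's vocabulary, this job class is the "worker".
class HardJob
  include Sidekiq::Worker

  def perform(name, count)
    "working on #{name} x#{count}"
  end
end

HardJob.perform_async("bob", 5)
puts HardJob.queue.last  # the enqueued JSON payload, not the job's result
```

The point is the separation: perform_async only records what should run; a Sidekiq process later pops the payload and calls perform.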

Prepare the development environment (Dockerfile, docker-compose)

Redis container

version: "3.7"
services:
  redis:
    image: redis:6
    ports:
      - 6379:6379
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
    command: redis-server /usr/local/etc/redis/redis.conf
    sysctls:
      - net.core.somaxconn=512

You need Redis to hold the enqueued jobs.
Mount the Redis configuration file into the container.
The redis.conf is as follows.

# The filename where to dump the DB
dbfilename dump.rdb

This setting addresses the following error when starting the container.

Redis::CommandError (MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.)

The problem was solved by specifying a filename for the RDB dump of the Redis data.
Make sure this configuration file is referenced when redis-server is started via command.
Also, set somaxconn to 512 in sysctls. This addresses the following warning.

# WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

somaxconn seems to be short for socket max connections; Redis warns when the kernel value is lower than the TCP backlog (511) that Redis wants to use.

Sidekiq container

version: "3.7"
services:
  redis: # omitted
  sidekiq:
    depends_on:
      - db
      - redis
    build:
      context: .
      target: test
      dockerfile: ./dockerfiles/Dockerfile
    command: bundle exec sidekiq -C config/sidekiq.yml
    environment:
      REDIS_URL_SIDEKIQ: redis://redis:6379/5/sidekiq
      DB_HOST: hoge
      DB_NAME: hoge
      RAILS_ENV: development

I built this container from the same Dockerfile as app.
Since the Dockerfile is a multi-stage build, using the final image as-is would have produced a container with RAILS_ENV set to production. To avoid this, I targeted the intermediate test stage with target: test.
The command starts Sidekiq with an explicit configuration file, although config/sidekiq.yml is the default path, so it would be loaded even without specifying it.
The environment variable specifies the Redis connection point. docker-compose lets containers reach each other by service name, so redis:// resolves to the redis container.
I also passed in the DB connection settings, since jobs executed in the sidekiq container run application code that reads from and writes to the database.
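On the Rails side, Sidekiq has to be pointed at the same Redis endpoint. A minimal sketch of the initializer, assuming the REDIS_URL_SIDEKIQ variable from the compose file above:

```ruby
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV["REDIS_URL_SIDEKIQ"] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV["REDIS_URL_SIDEKIQ"] }
end
```

If the variable were named REDIS_URL instead, Sidekiq would pick it up automatically and no initializer would be needed.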
Please note that if you don't add mailers to :queues, mail enqueued with deliver_later will never be sent.

---
:queues:
- default
- mailers

The official configuration example can be found here.
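The mailers entry matters because in recent Rails versions ActionMailer's deliver_later enqueues its delivery job (ActionMailer::MailDeliveryJob) on the mailers queue by default, while ordinary jobs go to default; a Sidekiq process that only polls default never drains the mail jobs. A toy model with plain arrays in place of Redis lists, just to make the point runnable:

```ruby
# Plain-Ruby model of Sidekiq's named queues (arrays in place of Redis lists).
queues = Hash.new { |h, k| h[k] = [] }

# Ordinary jobs land on "default"; deliver_later enqueues the delivery job on
# "mailers" by default.
queues["default"] << "ExampleJob"
queues["mailers"] << "ActionMailer::MailDeliveryJob"

# A Sidekiq process started with only `- default` in :queues drains this...
polled = ["default"]
drained = polled.flat_map { |q| queues[q] }

# ...and the mail delivery job is left behind, never picked up:
leftover = (queues.keys - polled).flat_map { |q| queues[q] }
puts leftover.inspect
```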

CI settings (CodeBuild)

build:
  commands:
    - docker-compose -f docker-compose.yaml build
    # Start
    - docker-compose -f docker-compose.yaml up -d db redis
    - docker-compose -f docker-compose.yaml run app bundle exec rspec

We also need CI to be able to test the jobs.
In buildspec.yml, start redis before running the tests, as in the excerpt above.
Another option is to use a dummy KVS such as the mock_redis gem.
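Besides mock_redis, Sidekiq itself ships a fake testing mode that pushes jobs into an in-memory array, so specs don't need a real Redis at all. A sketch of the RSpec setup (file locations and the worker name are illustrative):

```ruby
# spec/spec_helper.rb
require "sidekiq/testing"
Sidekiq::Testing.fake!  # queue jobs in memory instead of Redis

# In a spec:
#   expect { HardJob.perform_async(42) }.to change(HardJob.jobs, :size).by(1)
#   HardJob.drain  # executes everything queued for this worker
```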

Deploying a Sidekiq container to ECS

I already had the app and nginx containers in one ECS task, but I did not add the sidekiq container to it.
Deploying at the same time as app would be an advantage; however, if the app container fails to start, the task stops and takes the sidekiq container down with it.
Running sidekiq as a separate task avoids this problem and makes it easy to scale just the sidekiq containers up or down. Furthermore, it uses server resources more efficiently.
I used the same Docker image as in app.

I used a tool called hako for deploying to ECS.

In the hako jsonnet definition file, I overrode the command to bundle exec sidekiq -C config/sidekiq.yml when deploying.

app: {
  image: '[aws-account-id].dkr.ecr.ap-northeast-1.amazonaws.com/[repo-name]',
  tag: 'latest',
  command: [ "bundle", "exec", "sidekiq", "-C", "config/sidekiq.yml" ],
  cpu: '256',
  memory: '512',
},

ElastiCache

I was getting the following warning when starting the Sidekiq container.

WARNING: Your Redis instance will evict Sidekiq data under heavy load.
The 'noeviction' maxmemory policy is recommended (current policy: 'volatile-lru').

This policy controls how Redis behaves when it runs out of memory; the Redis documentation describes the available eviction policies.
The CacheParameterGroupFamily of the Redis server I was using was redis5.0, whose default maxmemory-policy is volatile-lru.
I changed the maxmemory-policy via CloudFormation, specifying it inside the Properties map of a custom parameter group.

Resources:
  ParameterGroup:
    Type: AWS::ElastiCache::ParameterGroup
    Properties:
      Properties: {
        "maxmemory-policy": !Ref MaxMemoryPolicy
      }

I changed the setting in response to the warning, but in actual operation it was not something to worry about much, as long as I monitored the Redis server's memory usage.

Originally published at https://zenn.dev on March 6, 2021.
