I recently released my latest side project, Grapiture, an API for sending charts (using Chart.JS) or panels to Slack via a Slack webhook. In this post I will give an overview of the architecture behind the project.
The eight containers:
- Two NGINX containers:
  - Reverse proxy
  - Internal gRPC load balancer
- Rails web application container
- Go HTTP API server container
- Go request validation service container
- NodeJS chart/panel generation service container
- Redis container
- MySQL container
Now that we know what the eight containers are, let's look at each one individually to understand what it does and how they all fit together.
NGINX reverse proxy
This is the first service hit by an incoming client request. It acts as a standard reverse proxy, splitting requests based on whether they are directed at the API endpoint (/api) or any other URL. Requests to the API endpoint are forwarded to the Go HTTP API server, whilst all other requests are directed to the Rails web app.
The reverse proxy is also set up to handle rate limiting using NGINX's built-in request limiting (`limit_req`) module.
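The routing and rate limiting described above could look something like this in the NGINX configuration (the upstream names, ports, and limit values here are assumptions for illustration, not the real config):

```nginx
# Hypothetical sketch; upstream names, ports, and limits are assumptions.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

upstream rails_app {
    server web:3000;
}

upstream go_api {
    server api:8080;
}

server {
    listen 80;

    # API traffic goes to the Go HTTP server, rate limited per client IP.
    location /api {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://go_api;
    }

    # Everything else goes to the Rails web application.
    location / {
        proxy_pass http://rails_app;
    }
}
```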
Rails web application
The Rails web application handles everything you see when visiting the site, including, for pro-tier users, sign-up, sign-in, Stripe subscription billing, and API key generation.
Under the free tier, a request to post a chart/panel to Slack never touches the Rails application, whereas under the pro tier the Rails application is used for tying requests to user accounts. Pro-tier API keys are, however, JWT tokens, which allows the API service to validate and serve a request without the Rails application in the event that the two services cannot communicate with each other.
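To sketch why JWTs make this possible: the API service can verify a token's signature locally with a shared secret, with no call out to Rails. Everything below (the secret, the claims, HS256 signing) is an assumption for illustration, not the project's actual key handling; it uses only the Go standard library:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// verifyJWT checks an HS256-signed JWT entirely locally.
// A simplified sketch: real code would also decode and check the claims.
func verifyJWT(token string, secret []byte) bool {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return false
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(parts[0] + "." + parts[1]))
	expected := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return hmac.Equal([]byte(expected), []byte(parts[2]))
}

// signJWT builds a token the same way, shown only so the sketch is self-contained.
func signJWT(header, claims string, secret []byte) string {
	enc := func(s string) string { return base64.RawURLEncoding.EncodeToString([]byte(s)) }
	signingInput := enc(header) + "." + enc(claims)
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(signingInput))
	return signingInput + "." + base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

func main() {
	secret := []byte("demo-secret") // assumption: a secret shared between Rails and the API
	token := signJWT(`{"alg":"HS256","typ":"JWT"}`, `{"user":"example"}`, secret)
	fmt.Println(verifyJWT(token, secret)) // true
}
```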
The Rails web application stores its data in a MySQL database instance running in a container with a mounted storage volume.
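A compose fragment along these lines (service name, image tag, and credentials are placeholders, not the real setup) is enough to give MySQL persistent storage across container restarts:

```yaml
# Hypothetical fragment; names and credentials are placeholders.
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:
```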
Go HTTP API server
Here we handle all requests to the /api endpoint. A connection to a Redis container supports the caching of chart/panel data, and some basic checks are performed to ensure the provided webhook is in the correct format and that the requesting user is not exceeding their usage credits. Certain requests are validated against the validation service via a gRPC call before they can be handled any further. When a request to actually render the chart/panel image is received, the chart/panel data is fetched from Redis and a gRPC call with this data is made to the generation service; on response, either an image is returned or error handling kicks in to wrap any errors before returning a response.
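As an example of the kind of basic webhook format check mentioned above, the server could verify that the URL parses, uses HTTPS, and points at Slack's webhook host. This is a sketch; the real service may check more or less than this:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// validWebhook performs a basic format check on a Slack incoming-webhook URL:
// it must parse, use HTTPS, and point at Slack's webhook endpoint.
func validWebhook(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	return u.Scheme == "https" &&
		u.Host == "hooks.slack.com" &&
		strings.HasPrefix(u.Path, "/services/")
}

func main() {
	fmt.Println(validWebhook("https://hooks.slack.com/services/T000/B000/XXXX")) // true
	fmt.Println(validWebhook("http://example.com/hook"))                         // false
}
```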
Go request validation service
The request validation service runs as both a task scheduler, pulling in new configuration on a daily basis, and a gRPC server handling calls to perform validation. When a validation request is received, it performs some checks against the request headers and body before returning a "good" or "bad" response to the API server.
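One way to combine the two roles is to keep the current configuration behind an atomic pointer that a background scheduler swaps daily while gRPC handlers read it lock-free. The config shape and interval below are assumptions for illustration:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// validationConfig holds rules pulled from an external source.
// (Hypothetical shape; the real configuration isn't described in the post.)
type validationConfig struct {
	blockedHeaders []string
}

// configStore lets gRPC handlers read the latest config without locking,
// while the scheduler swaps in a fresh copy in the background.
type configStore struct {
	current atomic.Value // holds *validationConfig
}

func (s *configStore) load() *validationConfig {
	return s.current.Load().(*validationConfig)
}

func (s *configStore) refresh(fetch func() *validationConfig) {
	s.current.Store(fetch())
}

// startScheduler refreshes the config on the given interval (daily in the post).
func (s *configStore) startScheduler(interval time.Duration, fetch func() *validationConfig) {
	ticker := time.NewTicker(interval)
	go func() {
		for range ticker.C {
			s.refresh(fetch)
		}
	}()
}

func main() {
	store := &configStore{}
	store.refresh(func() *validationConfig {
		return &validationConfig{blockedHeaders: []string{"X-Bad"}}
	})
	store.startScheduler(24*time.Hour, func() *validationConfig {
		return &validationConfig{blockedHeaders: []string{"X-Bad"}}
	})
	fmt.Println(len(store.load().blockedHeaders)) // 1
}
```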
In a future post I may write about the types of validation being performed and the configuration being loaded, stay tuned.
NodeJS generation service
Here we actually turn the chart/panel request data into an image: the Chart.JS chart data or panel data is rendered onto an HTML canvas element using jsdom, which is then turned into a PNG and returned to the gRPC caller.
The generation service runs multiple replicas, mainly to ensure that if bad data does crash this service it will not stop other requests from proceeding. All of the containers executing custom code (i.e. everything other than the database and NGINX services) are configured to crash and restart when they encounter an unexpected error. The generation service is the most likely to encounter such an error, given it is handling the user-supplied data for generating the charts/panels, so placing it behind an NGINX gRPC load balancer and replicating the number of running instances allows for some pre-emptive resilience.
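The crash-and-restart behaviour plus replication can be expressed in compose roughly like this (service name and replica count are assumptions; depending on the Compose version, replicas can also be set with `docker compose up --scale`):

```yaml
# Hypothetical fragment; service name and replica count are assumptions.
services:
  generation:
    image: grapiture/generation
    restart: on-failure   # crash fast on unexpected errors, come back automatically
    deploy:
      replicas: 3         # several instances behind the NGINX gRPC load balancer
```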
Deployment
I am not doing anything fancy here: the deployment is simply a Docker Compose file, backed up with some shell scripts. I am using a monorepo for all of the services and have a `publish.sh` script at the root of the repo that loops through each individual service (nested folders) and calls a `publish.sh` script in each one; this file runs `docker build` and then `docker push`.
The compose file is then run against a Digital Ocean droplet and the containers are deployed. Currently I am only using a single droplet, but changing the compose file and publish scripts would be enough to deploy across multiple servers.
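A sketch of what the root `publish.sh` loop could look like, based on the description above (the folder layout and script names are assumptions):

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the root publish.sh: walk each service folder
# under the given root (default: current directory) and run that
# service's own publish.sh, stopping on the first failure.
publish_all() {
    root="${1:-.}"
    for dir in "$root"/*/; do
        [ -f "${dir}publish.sh" ] || continue
        echo "publishing ${dir}"
        (cd "$dir" && sh ./publish.sh) || return 1
    done
}

publish_all "$@"
```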
And that's the basic architecture of the project. Thanks for reading.
Apologies if this was a hard read; I haven't written a blog post for quite some time, so it may take me a few posts to get back into it.
If you have any questions feel free to ask me on Twitter: @harveytoro