As part of building an ethically managed company, we at Yetto feel it's important to share as much knowledge as we can about how we run our business, in the hopes of helping other bootstrapped startups avoid the issues we encountered. To that end, we'd like to talk a little bit about how we build and deploy Yetto, as well as open source a small piece of that infrastructure.
The architecture of Yetto
Yetto is set up as a group of microservices running on a serverless architecture. Because our app is mostly a set of small, independent, event-driven tasks, it's easier to maintain and extend as a collection of small functions rather than as a single application. For example, we have one function that handles inbound email processing and another that delivers outbound email.
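To make that concrete, here's a minimal sketch (in Python, one of the runtimes GCF supports) of what one such small function might look like: an HTTP-triggered handler for an inbound-email webhook. The function name and payload fields are illustrative, not Yetto's actual code.

```python
# Hypothetical inbound-email Cloud Function. GCF passes an HTTP request
# object (Flask-style) to the entry point; we only rely on .get_json().

def handle_inbound_email(request):
    """HTTP Cloud Function entry point: parse an inbound-email webhook."""
    payload = request.get_json(silent=True) or {}

    # Extract just the fields the rest of the pipeline needs.
    message = {
        "from": payload.get("from", ""),
        "subject": payload.get("subject", "(no subject)"),
        "body": payload.get("text", ""),
    }

    if not message["from"]:
        # Reject malformed webhooks early.
        return ("missing sender", 400)

    # ...hand the parsed message off to the next function or queue here...
    return ("ok", 200)
```

Because each function is this small and self-contained, it can be tested, deployed, and rolled back on its own.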
We don't want to wade into a fierce "monolith vs microservices" debate, but we do want to shed some light onto how our system works and help some other teams along the way. For a small team such as ours, it's just easier to deploy and operate individual functions.
Using Google Cloud Functions
There are several serverless platforms out there, and we use Google Cloud Functions to do what we need to do. There are a few reasons we picked GCF for Yetto:
- Cost: GCF is cheap to get started with, and its pricing scales sensibly for our use case.
- Easy setup: Getting the initial prototype (two functions, one database) up and running was fast. We coded our original idea locally, tested it in CI, and pushed it online.
- Developer experience: Both Brian and I have used AWS before. Its interface is clunky, and it requires an incredible amount of network and access configuration before you can work effectively.
Open-sourcing our functions template
One of the challenges of GCF is that there isn't much documentation on deploying to multiple environments, such as production and staging. On top of that, every new function requires the same process of bundling up dependencies and sending them to GCF, so we wrote a wrapper script that deploys our functions consistently.
To help solve this, we're open sourcing our Google Cloud Functions template repository. It contains all the logic necessary to deploy a Cloud Function: you create a deployment using GitHub's Deployments API, and the repository's included GitHub Action runs a deployment script to push the code online. With the provided code, you can choose to target a staging or a production environment.
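As a rough illustration of the idea (not the template's actual code), a wrapper like this maps an environment name to a Google Cloud project and shells out to the real `gcloud` CLI. The project IDs, runtime, and function names below are placeholders.

```python
# Sketch of an environment-aware deploy wrapper for Cloud Functions.
# Assumes one Google Cloud project per environment (a common pattern).
import subprocess

# Hypothetical project IDs -- substitute your own.
PROJECTS = {
    "staging": "my-app-staging",
    "production": "my-app-prod",
}

def build_deploy_command(function_name, environment):
    """Build the gcloud invocation for one function in one environment."""
    return [
        "gcloud", "functions", "deploy", function_name,
        "--project", PROJECTS[environment],
        "--runtime", "python39",
        "--trigger-http",
        "--source", ".",
    ]

def deploy(function_name, environment):
    """Run the deploy; raises if gcloud exits non-zero."""
    subprocess.run(build_deploy_command(function_name, environment), check=True)
```

Keeping the command construction in its own function makes the wrapper easy to test without actually calling `gcloud`.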
How we use it
To facilitate this for ourselves, we have Hubot sitting in Slack listening for our deployment commands. We've also contributed to the hubot-github-deployments plugin to support the Deployments API functionality we needed.
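Under the hood, asking for a deploy boils down to one call to GitHub's create-deployment endpoint. Here's a hedged Python sketch of that request; the owner, repo, and token are placeholders, and the function returns the prepared request rather than sending it, so the payload is easy to inspect.

```python
# Sketch of creating a GitHub deployment via the REST API:
# POST /repos/{owner}/{repo}/deployments
import json
import urllib.request

def create_deployment(owner, repo, ref, environment, token):
    """Build (but don't send) the create-deployment request.

    A real caller would pass the result to urllib.request.urlopen().
    """
    body = json.dumps({
        "ref": ref,                # branch, tag, or SHA to deploy
        "environment": environment,
        "auto_merge": False,       # don't let GitHub merge the base branch in
    }).encode()
    return urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/deployments",
        data=body,
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
```

The resulting `deployment` event is what the GitHub Action listens for.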
Our step-by-step process looks something like this:
1. We push a commit to a branch; CircleCI runs our test suite.
2. When the branch is green, we deploy it to staging from Slack using ChatOps: `.deploy yetto@some-branch to staging`.
3. Hubot takes over and creates a new deployment via the GitHub API.
4. That event triggers a GitHub Action, which runs `script/deploy` to zip up our function logic and dependencies and send them to GCF.
5. Once the branch is deployed, we test it in our staging environment.
6. We run through steps 2 through 5 again, only this time we deploy to production: `.deploy yetto@some-branch to production`.
7. After we verify our logic in prod, we merge the PR.
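The GitHub Action step above can be sketched as a minimal workflow that fires on the `deployment` event; the checkout action version and script path are illustrative, not a copy of our actual workflow.

```yaml
# Hypothetical workflow: run the deploy script whenever a deployment is created.
name: deploy
on: deployment
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The deployment event carries the target environment Hubot requested.
      - run: script/deploy "${{ github.event.deployment.environment }}"
```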
This process, and our scoping of work to single functions, has made deployments easy, fast, and transparent—which means we can do them more often!
Do you find this Google Cloud Functions template useful? Let us know in the repo!