For many years I used Makefiles as an abstraction layer in my CI/CD tooling.
make allows the same sequence of actions to be run both by the CI pipeline and developers locally. This is great for consistency and provides a fast feedback loop for the team.
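A minimal Makefile with shared entry points might look like the sketch below. The commands are placeholders for a hypothetical Go project; substitute your own toolchain:

```make
# Hypothetical targets — swap the recipes for your project's real commands.
.PHONY: build test lint

build:
	go build ./...

test:
	go test ./...

lint:
	golangci-lint run
```

CI runs `make build` and `make test`; developers run exactly the same targets locally.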
Using Makefiles as an abstraction has saved me a tonne of effort when migrating between CI/CD platforms at short notice. It was as easy as grabbing the example config files for the languages I was using, adding `make build`, `make test` and a few other steps, and I was ready to test. When migrating at scale, a script can raise templated pull requests across a list of code repositories.
Makefiles are useful when using one platform to test and another for deployment. For a long time, the easiest and most secure way to deploy to AWS was with CodePipeline, but the best developer experience for running tests was a dedicated CI platform. A Makefile ensured the same steps were used to test changes with both toolsets.
GitHub Actions has caused me to rethink this approach. I’m well aware this is the type of lock-in vendors love. Once you go all in on a platform and its associated ecosystem, it becomes massively disruptive to migrate elsewhere. While this is true, how often do you migrate CI/CD platforms for live projects? I can recall doing it twice in the last decade or so. How often does your team run a build? In my case many (many, many) times per hour. We should be optimising for the frequent action, rather than the exception.
GitHub Actions provides hundreds of steps that you can drop into your project. Setting up a GPG key for signing commits, authenticating with AWS using OpenID Connect (OIDC) or creating a release are now a few lines in a YAML file rather than a chunk of bash. While old skool sysadmins will always gravitate towards shell scripts, not every dev shares the same level of enthusiasm for bash.
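Creating a release, for example, becomes a single marketplace step instead of a script. This sketch uses softprops/action-gh-release; the artifact path is a placeholder:

```yaml
# Publish a GitHub release when a tag is pushed
- uses: softprops/action-gh-release@v2
  if: startsWith(github.ref, 'refs/tags/')
  with:
    files: dist/*.tar.gz   # hypothetical build artifact path
```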
Actions in GitHub’s Marketplace allow teams to focus on shipping code, rather than tinkering with shell scripts and Makefiles.
Like all third party code, marketplace actions carry supply chain risk. The majority of community actions are written in Node.js; the rest are Docker images. In either case, there can be deep dependency trees. CI/CD pipelines are appealing targets for attackers as they often have elevated access to other systems.
There are some things you can do to reduce the risk of using third party actions.
Start by auditing the code before using it. If it is a container action, look at the Dockerfile too. You should have a reasonable understanding of what the code will do.
Look at the dependencies for the action. How deep is the dependency tree? Bloated dependencies should be a red flag. If you’re unsure of the dependencies, run `npm install` in a container and review what gets pulled in.
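One way to do that review is to clone the action inside a throwaway container and let npm list what it would install. The repository URL below is a placeholder:

```shell
# Start a disposable container so nothing touches your workstation
docker run --rm -it node:20 bash

# Then, inside the container:
git clone https://github.com/example-org/example-action.git /tmp/action
cd /tmp/action
npm install --ignore-scripts   # install without running lifecycle scripts
npm ls --all                   # print the full dependency tree for review
```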
Unless the action is published by GitHub or a trusted vendor, pin the version reference to a commit hash rather than a branch or tag. While commit hash collisions are theoretically possible, GitHub has hash collision detection in place. Changing the contents of a branch, or moving the commit a tag points to, is just a `git push` away.
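In a workflow file, pinning looks like this. The action name and SHA below are placeholders, not a real published action:

```yaml
steps:
  # Published by GitHub: referencing a tag is acceptable
  - uses: actions/checkout@v4

  # Third party: pin to a full commit SHA and record the
  # corresponding tag in a comment for readability
  - uses: some-org/some-action@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # v1.2.3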
A GitHub Actions workflow file built from community actions makes it easier to understand the intent of the workflow. Rather than needing to read every line of code, anyone on the team can see that `uses: crazy-max/ghaction-import-gpg@…` sets up the GPG key and references the secrets containing the key and passphrase. It’s a similar situation with `uses: stefanzweifel/git-auto-commit-action@…`, which pushes the changes back to the repo, and so on.
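Put together, those two steps might look like the sketch below. The input names are as documented in each action’s README at the time of writing; verify them against whichever version you pin:

```yaml
steps:
  - uses: actions/checkout@v4

  - uses: crazy-max/ghaction-import-gpg@v6
    with:
      gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }}
      passphrase: ${{ secrets.GPG_PASSPHRASE }}

  # ...steps that modify files in the working tree...

  - uses: stefanzweifel/git-auto-commit-action@v5
    with:
      commit_message: "Apply automated changes"
```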
Now that GitHub Actions supports OIDC, it’s possible to ditch AWS CodePipeline and have Actions take care of deployments in a secure manner. This removes the need to maintain a common set of steps shared across both platforms. Instead, an application can use a single workflow file that contains a GitHub Actions based pipeline for the build, test and deploy steps.
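A single-workflow pipeline of that shape might be sketched as follows. The IAM role ARN secret name and the `make` targets are hypothetical; the `id-token: write` permission is what allows the AWS credentials action to exchange an OIDC token for temporary credentials:

```yaml
name: pipeline
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test   # hypothetical target

  deploy:
    needs: test
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for OIDC
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.DEPLOY_ROLE_ARN }}  # hypothetical secret
          aws-region: us-east-1
      - run: make deploy   # hypothetical target
```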
Leaning heavily on actions isn’t without downsides. A Makefile can serve as documentation for developers. It can guide them on how to work with the project. You need to decide where that documentation now lives. If you’re unsure, a new section in the README.md file is a good start for documenting the local fast-feedback workflow.
Copying and pasting sections of the workflow file to run commands locally is no longer an option. Marketplace steps don’t work that way.
There are benefits for new devs in being able to run `make build` and have their local environment set up and a build running with a single command. Being able to hit [up arrow] [enter] to run `make test` after each small change is very useful. Running `something && another-command && checker -d $(pwd) -out txt …` isn’t the same.
pre-commit is a Python framework for running commands before code is committed. It is great for giving engineers fast feedback, and the configuration can be committed so all team members run the same checks.
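A minimal `.pre-commit-config.yaml` using the stock pre-commit-hooks repository looks something like this:

```yaml
# .pre-commit-config.yaml — commit this so the whole team runs the same checks
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
```

Run `pre-commit install` once to register the git hook, and `pre-commit run --all-files` to check everything on demand.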
If you have a large number of steps in your pipeline, review your configuration and make sure you need them all. If you do, consider using `act` to run GitHub Actions locally. While `act` is an interesting project, I have never needed it on a real project.
There is a risk that one day GitHub will take the Travis CI path and destroy all user trust in the platform. This would force teams to migrate to a new platform. Let’s cross that bridge when we come to it. I’m sure larger users will find ways to script the migration, like we did when we switched from Travis to GitHub Actions.
ThoughtWorks are recommending Actions in their latest radar. It’s time to board the hype train and reap the benefits of going all in on GitHub Actions.