Building This Website - From Zero to Full DevOps

Introduction

A website is an incredibly easy resource to share. As a freelancer, it seemed like an obvious choice to use this format to quickly list my skills and previous experiences and to showcase some of my work. Since I wanted the website to be low maintenance, and as I'm not a web developer, I quickly turned to some popular WYSIWYG (what you see is what you get) website editor services. Even though these services are widely used, even by people who are knowledgeable about web development, my first experiences were sub-optimal. I found the interface slow, clunky and unintuitive. Additionally, the reliance on an active connection to save your work meant that as soon as something went wrong in the background, you could lose hours of work if the network failure was not properly communicated. I experienced this firsthand and was quite frustrated with the service as a result.

So back to square one. Instead of trying again with the WYSIWYGs, I decided it could be a fun and interesting project to tackle this development as I would any other project at a customer. Documenting my workflow and progress in the document you're reading right now would make an excellent candidate for a first showcase on said website. I'm still not a web developer, but that didn't scare me off. I'm a quick study, and with my skills as a DevOps engineer I had no worries about the actual deployment.

So with the introduction out of the way, allow me to take you on this little journey of building a website from the ground up.

Choosing The Tech

TL;DR

The following technologies will be used for this project:

  • Bootstrap 5
  • npm
  • Google Cloud
  • Podman
  • Docker
  • Gitlab CI
  • Kubernetes
  • Helm

The Website

Let's start by analyzing the website itself, as web development is the area where I have the least experience. The end result needs to be clean and functional, and needs to look good on both desktop and mobile. Writing HTML with nothing to support it is an immediate no-go: the website would honestly look like trash and it would take ages to have anything remotely presentable. As my self-defined deadline was roughly a few evenings, things would have to move substantially faster. Of course, I would allow myself more time to expand on the content, but my first public-facing deployment, containing a summary of my skills and experiences, had to go up without too much friction. This leaves me with popular frameworks with good community backing, as these are the ones with the most templates, the best documentation and plenty of support in case things do go wrong. I thus spent some time browsing through a large collection of templates from various frameworks. This gave me an excellent initial impression of how my website was going to look, as I would likely not be modifying the style too much. Any live preview was also welcome, as it gave me the opportunity to view the work on multiple devices. I finally settled on Bootstrap 5, as it is hugely popular and easy to work with. The chosen template would be this default resume theme. I found it good looking, functional and not bloated with features that would make editing more difficult.

Hosting

The second question is about hosting. Since I don't expect a lot of traffic to this website (it is still mostly a tool to share my experience with recruiters), I wanted something with a very low average footprint/cost and dynamic scaling. My first thought was to host it locally on my personal server, but this would require my server to be always available, and using a development environment for your production services is clearly a bad idea. There are multiple hosting services out there that offer full Linux instances (VMs), but this conflicts with my intention to only scale up when someone actually wants to visit the website and thus keep cost down. Going "serverless" would be the way to go. Google's offering, Cloud Run, seemed the most interesting of the bunch, promising a way to easily deploy containerized applications on a fully managed serverless platform (with a healthy amount of free requests per month). Finally, to actually put my sources in a container, I chose Podman to build and push the containers to Google's registry. As I'll be pushing the code itself to a personal repository on Gitlab, it makes sense to perform the CI integrated with that service as well. Setting up a Gitlab CI pipeline to build and deploy the container will be straightforward, as I'm already quite experienced with the technology. To test the container locally, I can make use of my local Kubernetes cluster, only requiring me to write the initial Helm chart, again something I'm very familiar with.

Local Development

After an evening of deciding on which technologies I was going to use, local development started. Luckily this was a breeze for this project. The Bootstrap template came equipped with a handful of handy npm scripts to run the site. Additionally, with the use of BrowserSync (which also came pre-bundled) it was incredibly easy to make changes and immediately see the effect live on multiple devices. Again in only a single evening, I had modified the template's style to use my own color palette and filled in the different sections that make up my homepage.
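For reference, the day-to-day loop boiled down to a couple of npm commands. The script names below are what my template happened to use; yours may differ:

```shell
# Install the template's dependencies (Bootstrap, BrowserSync, ...)
npm install

# Build the site and serve it locally with BrowserSync live reload
npm start
```

BrowserSync also prints an "External" URL on startup, which is what makes it easy to check the result from a phone on the same network.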

Publishing to the Cloud

The first step, before we can publish anything, is to package our website in a format that can be handled by (most) public cloud providers. The standardized format is a (Docker) container. But even before we containerize, we need to clean up some of our development scripts. Since we will be putting this container in a production environment, automatically including something like BrowserSync would be a terrible idea. After splitting the build dependencies into a development and a production build, we can start writing the Dockerfile. To help keep the final image clean, I decided to work with a multi-stage build. This means that our Dockerfile actually describes two containers: one (temporary) build container and a final one that we will be publishing. The advantage of this approach is that none of the build-time dependencies end up in the final image, keeping it small and clean.
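A multi-stage Dockerfile for this kind of setup could look roughly like the sketch below. The base images, the dist output folder and the serve static file server are my assumptions here; the exact names depend on your template's build scripts:

```dockerfile
# --- Stage 1: build the static site with the full (dev) dependencies ---
FROM docker.io/library/node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- Stage 2: only the built assets and a tiny web server get published ---
FROM docker.io/library/node:16-alpine
WORKDIR /app
RUN npm install -g serve
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["serve", "-l", "3000", "dist"]
```

All the devDependencies (BrowserSync included) stay behind in the builder stage and never reach the published image.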

As the chosen cloud provider was Google Cloud, we first have to create a new account (this was my first time using the platform privately). This process is pretty straightforward, and when you first sign up, you even get a bunch of free credit to use on your projects during the first few months. After creating a new Cloud Run project, we can push our container to the registry associated with the project. To achieve this, we must first log in with Podman using our gcloud access token. Then we can tag and push the image (I chose the EU registry as this matches my chosen region europe-west1).

gcloud auth print-access-token | podman login -u oauth2accesstoken --password-stdin https://eu.gcr.io
podman build --format docker -t eu.gcr.io/$PROJECT_ID/$IMAGE_NAME:latest .
podman push --remove-signatures eu.gcr.io/$PROJECT_ID/$IMAGE_NAME:latest

In the meantime we can also set up any custom domains. By default, a Cloud Run service gets a Google-assigned URL. This URL is however not very easy to remember and doesn't look professional. Luckily you can configure any set of custom domains, as long as you own them. When setting up a custom domain, you will need to verify ownership by creating a TXT record with a verification token for that domain at your DNS server/provider. After your domain has been verified, it is a simple matter of adding the A and AAAA records Google provides to your DNS configuration.
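If you prefer the command line over the console, the same mapping can be created with gcloud. The service name below is a placeholder:

```shell
# Map a verified custom domain to the Cloud Run service
gcloud beta run domain-mappings create \
  --service my-website \
  --domain example.com \
  --region europe-west1

# Print the DNS records (A/AAAA) to add at your DNS provider
gcloud beta run domain-mappings describe \
  --domain example.com \
  --region europe-west1
```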

With the image pushed and the domains configured, we can deploy our first revision. This too goes really smoothly, as most of the default values are fine for our use case. I do make sure that traffic is directed to the correct port (3000 in my case) and set the minimum number of instances to 0. This means that if no traffic is going to the website, nothing needs to run. The first hit in a while will have to wait a little bit, but the startup times are quick and the website is very responsive afterwards. For expected CPU and memory capacity I pick 1 vCPU (the minimum) and 512 MiB, which seems sufficient for a small, stateless web server. After hitting deploy, I need to wait just a little bit for the system to process it and I can visit my website. Cloud Run provides valid TLS certificates automatically, so there isn't even a need for me to configure HTTPS myself.
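The same settings can also be passed straight to gcloud instead of clicking through the console. A sketch, with the service name as a placeholder:

```shell
# Deploy a revision: port 3000, scale to zero, minimal CPU/memory
gcloud run deploy my-website \
  --image eu.gcr.io/$PROJECT_ID/$IMAGE_NAME:latest \
  --region europe-west1 \
  --port 3000 \
  --min-instances 0 \
  --cpu 1 \
  --memory 512Mi \
  --allow-unauthenticated
```

The --allow-unauthenticated flag is what makes the service publicly reachable, which is of course the point of a personal website.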

Happy browsing!

CI/CD

A manual deployment is of course only the start. Having to go through the process of building the container, setting up the authentication with Google Cloud, pushing the image and finally deploying the new revision is very tedious if I want to make a small modification to the website. The goal here is to automatically build and deploy the latest changes whenever a change is pushed to the main branch. To achieve this we will need to set up the required authentication in Google Cloud, as well as provide a simple CI file so the process can run in Gitlab CI. The CI file is the easy part this time; with the following file, you can get immediate results. I intentionally kept the process simple by not worrying about versioning my releases yet and instead always deploying the latest changes to a latest tag. This is sufficient for my needs, and if I do need to roll back to an earlier version, Cloud Run has me covered: every new deployment, or revision, points to a specific hash of the container used, so reverting to a previous revision is enough.

stages:
  - build
  - deploy

build-and-push:
  image: quay.io/podman/stable:v3.2.3
  stage: build
  interruptible: true
  script:
    - cat "$GCP_DEPLOYMENT_KEY" | podman login -u _json_key --password-stdin https://eu.gcr.io
    - podman build --format docker -t eu.gcr.io/$GCP_PROJECT_ID/vvgit-be:latest .
    - if [ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]; then podman push --remove-signatures eu.gcr.io/$GCP_PROJECT_ID/vvgit-be:latest; fi

deploy:
  image: docker.io/google/cloud-sdk:latest
  stage: deploy
  interruptible: true
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - gcloud auth activate-service-account "$GCP_SERVICE_ACCOUNT" --key-file "$GCP_DEPLOYMENT_KEY"
    - gcloud config set project "$GCP_PROJECT_ID"
    - gcloud run deploy vvgit-be-website --image eu.gcr.io/$GCP_PROJECT_ID/vvgit-be:latest --region=europe-west1

In order to verify my development, I still build the container on every push, but deployment is reserved for changes pushed to the default branch (which can be considered a release of the website). Note the variables GCP_SERVICE_ACCOUNT and GCP_DEPLOYMENT_KEY; getting these right took some effort. The Google Cloud dashboard can be quite intimidating, as there are a lot of services and options. Even though I'm familiar with Kubernetes concepts like a service account, knowing which roles are required to get the job done turned out to be quite challenging, as documentation on this is not easy to find. In the end, I did manage to get a service account ready with sufficient permissions on the storage bucket containing the container images for my project, as well as sufficient permissions to deploy new revisions to Cloud Run.

To replicate what I did, you'll need a service account that will act as a deployment agent for the images and a service account (can be the same) to deploy the revisions. When you create the service account, be sure to also download a JSON key file for it. This file will be used to authenticate the CI as the newly created service account (GCP_DEPLOYMENT_KEY). To push an image to the registry, you will need the following roles configured for your service account on the Cloud Storage artifacts store of your project (it will look something like eu.artifacts.$PROJECT_ID.appspot.com):

Storage Legacy Bucket Writer  | Push images to and pull images from an existing registry host in a project
Storage Admin                 | Add registry hosts to Google Cloud projects and create associated storage buckets

To deploy a new revision, your service account will need the following role set on your Cloud Run service:

Cloud Run Admin               | Grants services create and update permissions

Finally, the service account performing the deployments will need the "IAM Service Account User" role on the runtime service account configured for your service, so it can "act as" that account. This final one was by far the most difficult to grasp. Once you have all the permissions set and have configured the CI/CD variables on Gitlab, you can apply the given Gitlab CI pipeline to automatically deploy your website to Google Cloud Run.
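For completeness, here is roughly how the account, key and role bindings can be set up from the command line. The account name ci-deployer, the service name and the variables are placeholders, and the exact role requirements may differ per project:

```shell
# Create the deployment service account and download its JSON key
gcloud iam service-accounts create ci-deployer
gcloud iam service-accounts keys create deployment-key.json \
  --iam-account "ci-deployer@$PROJECT_ID.iam.gserviceaccount.com"

SA="ci-deployer@$PROJECT_ID.iam.gserviceaccount.com"

# Allow pushing images to the registry's backing bucket
gsutil iam ch "serviceAccount:$SA:roles/storage.admin" \
  "gs://eu.artifacts.$PROJECT_ID.appspot.com"

# Allow deploying new revisions of the Cloud Run service
gcloud run services add-iam-policy-binding my-website \
  --region europe-west1 \
  --member "serviceAccount:$SA" \
  --role roles/run.admin

# Allow the deployer to "act as" the runtime service account
gcloud iam service-accounts add-iam-policy-binding "$RUNTIME_SA" \
  --member "serviceAccount:$SA" \
  --role roles/iam.serviceAccountUser
```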