Building and Publishing the Site

This post is fifth (and for now, last) in a series, beginning with Automating Web Site Updates.

We’ve been narrowing the “miracle” step in our solution. This post fills in the remaining gap.

We have a Cloud Function ready to kick off the missing step that will build and deploy the updated site. At a high level, this step will need to:

  1. Fetch a repository with static web content plus source directories that each need to be converted to web pages
  2. Convert each source directory to web pages in the desired format
  3. Build new static website structure
  4. Deploy the pages to a Firebase hosting project

We need a place to run our code that uses some high-level tools: git, the Firebase CLI, and anything that converts source to web pages. And we need a file system to build the new static site in. Plus, this process may take longer than a cloud function is allowed to run (or at least longer than the GitHub webhook is willing to wait for a response). Those requirements are why we couldn’t just do these steps in the cloud function that responds to the GitHub webhook. We need something more general purpose than that.

Cloud Run looks like a possible solution. The managed version is a lot like Cloud Functions in that it takes your code, runs it in response to HTTP requests, and only charges for resources while the code is running. But instead of just providing source code in a supported language, you provide Cloud Run with a container. That container could run any supporting software you need, not just the supported language environment of a cloud function. Cloud Run will even build the container for you, from your specifications.

Any negatives to using Cloud Run? There are several for this use case, though they can possibly be worked around:

  • Cloud Run is still in beta, so it is subject to change before becoming final.
  • Containers require maintenance. If a security update is needed for any software, the container needs to be rebuilt with the new versions.
  • The container runs only so long as it is serving a web request, so if the requesting program is only willing to wait a short time (for example, 10 seconds for a GitHub webhook or 10 minutes for a Google Cloud Pub/Sub push subscription) we have to be able to build and deploy our site in that amount of time.
  • In any case, each run is limited to no more than 15 minutes at this time.
  • The file system size is limited by the memory allocated to the service (no more than 2GB).
  • Invocations of the service can be concurrent, so if you are building a site in the file system you have to be sure concurrent invocations don’t step on each other, and don’t use up all the memory.

Despite these negatives, I find using Cloud Run for this problem an intriguing approach. I’m not going to use it here, but I’ll keep thinking about how it can solve problems like this one.

So what is the solution for the current problem? I’m going to go old school, and use a virtual machine for this. In Google terms, I’m going to use a Compute Engine instance. At first look this may seem to go against my goal to use “services that require little or no customization or coding on our part”. And I also said I don’t want to maintain any servers. But the way I’m going to use Compute Engine will not require any coding other than that specifically aimed at our business logic, and won’t need any server maintenance, either.

We will launch a virtual machine with a startup script that will:

  • Install the standard tools we need (git, Firebase CLI, etc.)
  • Fetch the source from a GitHub repository
  • Build the web pages using existing tools specific to our needs
  • Deploy the site to Firebase hosting
  • Destroy itself when done

The first and last steps are key: this virtual machine installs what it needs when it runs and then deletes itself when it has finished the task. That way we aren’t paying for an idle machine standing by waiting for work to do; we’re only paying for what we really use. Further, by creating a new machine for each task and then throwing it away, we don’t need to worry about updates – we always launch and then install the latest versions of the tools we’re using.

For more background on this technique of creating, using, then deleting Compute Engine instances, see this tutorial by Laurie White and me.

The virtual machine’s actions all need to be scripted in advance, so they can run without human intervention. Once the script is in place, we can enhance the cloud function from the last blog post to create a new Compute Engine instance that will run that script. The script needs to end by deleting the instance it runs on.

We aren’t going to build the whole solution here, just give the outline. Here’s what the script will look like:

#!/bin/sh
apt update; apt install -y git
#
# Install other tools, get code from GitHub, run business
# logic to build web site pages, deploy to Firebase hosting
# -- Not included in this post
#
# Instance deletes itself below (see tutorial for details)
export METADATA=metadata.google.internal/computeMetadata/v1/instance
export NAME=$(curl -X GET http://$METADATA/name -H 'Metadata-Flavor: Google')
export ZONE=$(curl -X GET http://$METADATA/zone -H 'Metadata-Flavor: Google')
gcloud --quiet compute instances delete $NAME --zone=$ZONE
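
If you want to try the startup script by hand before wiring it into the rest of the pipeline, you can attach it to a new instance as startup-script metadata. The instance name, zone, and script file name below are just placeholders; the cloud-platform scope is there so the instance is allowed to deploy the site and delete itself:

gcloud compute instances create site-builder \
    --zone=us-central1-a \
    --scopes=cloud-platform \
    --metadata-from-file startup-script=build-and-deploy.sh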

If we launch a machine with the startup script above (filled in with all the business logic specific details) it will pull our source content from GitHub, build website pages from that, then deploy it to our Firebase hosting site. Which leaves us with one more question: how do we launch such a machine from our Cloud Function (that unfinished update_the_site() function from the last post)? We use the google-api-python-client library. It’s pretty low-level, but there’s good sample code available you can adapt to do this.
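
As a rough sketch of what that might look like, here is an instance-creation call adapted from Google’s published samples for the library. The machine type, boot image, and function and parameter names are my own placeholder choices, and the startup script from above is passed in as instance metadata:

import googleapiclient.discovery

def launch_build_instance(project, zone, name, startup_script):
    # Build a Compute Engine API client; in a cloud function this picks up
    # the function's default service account credentials automatically
    compute = googleapiclient.discovery.build('compute', 'v1')

    # Use the latest image from a public family as the boot disk (an arbitrary choice)
    image = compute.images().getFromFamily(
        project='debian-cloud', family='debian-10').execute()

    config = {
        'name': name,
        'machineType': 'zones/{}/machineTypes/n1-standard-1'.format(zone),
        'disks': [{
            'boot': True,
            'autoDelete': True,
            'initializeParams': {'sourceImage': image['selfLink']},
        }],
        'networkInterfaces': [{
            'network': 'global/networks/default',
            'accessConfigs': [{'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT'}],
        }],
        # cloud-platform scope so the instance can deploy to Firebase and delete itself
        'serviceAccounts': [{
            'email': 'default',
            'scopes': ['https://www.googleapis.com/auth/cloud-platform'],
        }],
        # The startup script shown earlier, passed as instance metadata
        'metadata': {
            'items': [{'key': 'startup-script', 'value': startup_script}],
        },
    }

    return compute.instances().insert(
        project=project, zone=zone, body=config).execute()

The cloud function’s service account also needs permission to create and delete Compute Engine instances for this call to succeed.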

So that’s the pipeline now:

I’m going to put this topic to rest for a while, but there are tips and tricks regarding secrets and permissions I’ll probably talk about soon.

Responding to GitHub Updates

This post is fourth in a series, beginning with Automating Web Site Updates.

Most of the picture of the process we need is filled in now. What remains is what happens between a GitHub PR being merged and an updated website being deployed on Firebase Hosting. This post deals just with the first part: responding to a GitHub PR merge.

We need to know when a PR is merged so we can kick off the rest of the update process. Lucky for us, GitHub has a feature that will tell us that: webhooks. At its core it’s a really simple idea: when an event you care about happens, GitHub will make a web request to a URL of your choosing with information about the event in its body. You just need to provide a web request handler to receive it. So before we set up the webhook, let’s figure out what we will use to receive those requests. We need:

  • to run our own custom code
  • when triggered by an HTTP (actually HTTPS) request
  • containing information about a merged PR
  • without costing much when nothing is happening (which in this case, is probably 99% or more of the time)

That sounds tailor-made for a Cloud Function. We can write code in Go, Node, or Python and say we want it triggered by an HTTPS request. Cloud Functions gives us a URL and runs our code whenever a request is sent to that URL. We don’t pay for anything except time and memory while our code is running, not while it is idle waiting for a notification. The only problem is that functions are limited in what they can do. They can’t run long jobs, they have only a small file system available, and they only provide a few language options. We can’t install other software in them, either. But none of that is a problem because we are not trying to handle the website update in the cloud function, we just need to kick that off when it’s appropriate (a subject for the next blog post).

So we will use the Google Cloud Platform console to create a new HTTP-triggered Python Cloud Function. For now, we’ll leave the default sample code in it; we just want to know the URL for the next step: setting up a GitHub webhook.

Authorized GitHub repository users can set up a GitHub webhook in the repository’s Settings section. There’s a section just for Webhooks, and a button to add a new webhook. After you click that, there are some choices to be made:

  • The Payload URL is the address that GitHub will send the request to. That’s our cloud function’s URL from the step above.
  • The Content type specifies the format of the body of the request GitHub will send. The default is application/x-www-form-urlencoded, which is what a web page might send when a user submits a form. Since we want to get a possibly complicated data structure from GitHub, the second option, application/json, is a better choice for us.
  • The Secret is a string (shared between GitHub and your receiving application) that GitHub will use to create a signature for each web request. This is a non-standard way to check that a request really comes from the GitHub webhook you created, and not somewhere else. I created a long random password with a password manager for this.
  • We finally reach the question “Which events would you like to trigger this webhook?” We can choose “just the push event”, but we don’t much care about pushes, we want to know about merges. The second option, “send me everything,” would certainly include the merge events, but we don’t want to be bothered about the vast majority of events we’d be told about then. So we can say “Let me select individual events” and just hear about Pull Requests. That choice still includes a lot of events we don’t care about (creating PRs, closing them unmerged, labeling them, and so on) but it seems to be the narrowest choice that includes PR merges.
  • And we’re going to want to make this webhook Active.

When we click the Add Webhook button, GitHub will send a test request to the URL to see if there’s something at that address accepting incoming data. That should pass, since we already created a cloud function there, but if not, that’s okay for now. We just need it to work once the Cloud Function is finished.

Now that we have a webhook, every action on a PR on this repository will cause GitHub to send a JSON object to our cloud function. We need to verify that this request describes a PR merge, since we will get notification of all sorts of other PR events, too. And we need to make sure that this information really comes from our GitHub webhook, and not somebody trying to fool us into thinking a merge happened. Here’s an outline of what we need to do:

  • Load the JSON data in the request body into a Python object
    notification = request.get_json()
  • Check to see that this is a PR that is closed by a merge; if not, just exit, nothing to do here
    if notification.get('action') != 'closed':
        return 'OK', 200
    else:
        if notification.get('pull_request') is None:
            return 'OK', 200
        else:
            if not notification['pull_request'].get('merged'):
                return 'OK', 200
  • Check that the signature is valid; if not, just return a Forbidden response and exit, we aren’t going to deal with fake requests
    import hashlib, hmac, os
    secret = os.environ.get('SECRET', 'Missing!').encode()
    signature = request.headers.get('X-Hub-Signature')
    body = request.get_data()
    calc_sig = hmac.new(secret, body, hashlib.sha1)
    if signature != 'sha1={}'.format(calc_sig.hexdigest()):
        return 'Forbidden', 403
  • Kick off the next step that will actually publish an updated website based on the contents of the repository
    update_the_site()
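
Put together, a minimal version of the whole function might look like the sketch below. The entry point name handle_webhook is my own placeholder, the signature check is done first (before trusting anything in the payload) and uses hmac.compare_digest, and update_the_site() is still the unwritten piece:

import hashlib
import hmac
import os

def handle_webhook(request):
    # Reject anything that doesn't carry a valid signature from our webhook
    secret = os.environ.get('SECRET', 'Missing!').encode()
    signature = request.headers.get('X-Hub-Signature', '')
    body = request.get_data()
    calc_sig = hmac.new(secret, body, hashlib.sha1)
    if not hmac.compare_digest(signature, 'sha1={}'.format(calc_sig.hexdigest())):
        return 'Forbidden', 403

    # Only act on pull requests that were closed by being merged
    notification = request.get_json()
    if notification.get('action') != 'closed':
        return 'OK', 200
    if notification.get('pull_request') is None:
        return 'OK', 200
    if not notification['pull_request'].get('merged'):
        return 'OK', 200

    # Kick off the actual build and deploy (the next post's topic)
    update_the_site()
    return 'OK', 200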

Notice that the secret for the signature, which was provided to GitHub when creating the webhook, is fetched from an environment variable. That returns a string, and it needs to be converted to bytes in order to send to the hash function. You can set up the necessary environment variable when creating or redeploying a cloud function. This is a better option than keeping the secret in the source code itself, which might be available to others in a source repository at some point.
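
For example, deploying with the gcloud CLI might look something like this; the function name, runtime, region, and secret value are all placeholders, and the secret must match the string given to GitHub:

gcloud functions deploy handle_webhook \
    --runtime=python37 \
    --trigger-http \
    --region=us-central1 \
    --set-env-vars SECRET=the-long-random-string-from-the-webhook-setup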

Which leaves us with one big piece left to build, update_the_site(). That will be covered in the next post. Spoiler alert: the cloud function won’t be doing the update, it will just kick off some other tool to handle that.

So, our update process picture is nearly complete:

Jumping to the Other End

This post is third in a series, beginning with Automating Web Site Updates.

Readers are going to use web browsers to look at content, so we need some kind of web server to deliver the final formatted web pages. It’s static content, so there are lots of choices:

  1. A virtual machine running web server software like Apache or NGINX
  2. A web hosting service
  3. Cloud storage set for public access
  4. Serverless platforms

Option 1 is right out. I do not want to configure, manage, patch, and monitor a server. The second option might be okay, but most of them aren’t amenable to full automation. The third choice could be okay, but the cloud storage I’d prefer to use (Google Cloud Storage) doesn’t offer the ability to use HTTPS on a custom domain. That leaves a serverless solution.

Which serverless solution? I’m going to stick with Google Cloud Platform, but other cloud providers offer many similar services. GCP’s serverless offerings include Cloud Functions, Cloud Run, App Engine, and Firebase. I’d have to write code to respond to web requests for Cloud Functions or Cloud Run, so they’re out. Both App Engine and Firebase can serve static web pages without my writing any code, so they’re still looking good.

We want static web page hosting, via HTTPS, on a custom domain, and we want it cheap. We’d rather have it free (hey, is there an option to have them pay us?). Well, both App Engine and Firebase Hosting have free tiers available. So how to choose? I’ve used them both and they’d both work for this. We have to pick one, and I found Firebase Hosting to be easy, scalable, and affordable.

The solution will use Firebase Hosting for the last step.

We will need to build a static copy of the desired website, and use Firebase tools to deploy it to the service. Other than that, we don’t need to do anything to have the pages served in a scalable and reliable manner.
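
Assuming the firebase.json in the project directory points its hosting "public" setting at the directory where the built pages land, the deploy step is just a couple of Firebase CLI commands (the project ID here is a placeholder):

# One-time setup: associate this working directory with the hosting project
firebase use my-site-project
# Upload the built static files to Firebase Hosting
firebase deploy --only hosting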

The picture is beginning to be filled in:

Contributors → GitHub → ? → Firebase Hosting → Readers

The unknown center is shrinking. Next time we will jump back to the other side of that unknown and expand on what we need GitHub to do.