The first step when publishing a new website is, of course, deciding on the stack to use.

And this being my first personal website in 24 years of career, it’s a bit of an uncanny step to focus solely on my own needs when building one.

After 12+ years working exclusively with Django, I decided to try something different, with a few goals in mind:

  • content focused: I want to focus on writing content, not on testing new things in a playground or tinkering with the technology stack, otherwise I would invest too much time playing with the website instead of sharing ideas and findings;
  • fast: I want something I could set up quickly (i.e. something I could spend little time adding a template to) and that would be fast for users to navigate;
  • offline: I want something I can work on while offline; I want to work on the content from the laptop or from the tablet with ease, and keep everything in sync effortlessly;
  • no server infrastructure: I keep servers running for a living, and I don’t want yet another box to take care of;
  • self-hosted: very early in my career I decided that I should have direct control of my tools, running on open source software to the maximum extent possible; this website should not be an exception.

Why Hugo

The boxes above can be ticked by almost any static site generator, so I tested a few of the “obvious” options (Gatsby, Jekyll, Hugo and Pelican) and did some research on other options.

I deliberately opted for a completely unscientific decision process, to reduce the time spent doing things other than writing; all of the above were certainly capable of covering my very basic needs.

I quickly ruled out Gatsby and Pelican: the former because dealing with JavaScript isn’t my idea of a “fast setup” (nothing against the language or the stack per se, it’s just not my cup of tea), the latter because, being Python, the risk of spending time tinkering with it would be too high.

Jekyll and Hugo, instead, use languages that are reasonably obscure to me, which keeps the tinkering barrier high and lets me focus on the content.

I tested both for the above goals:

  • content focused: I certainly don’t know Go or Ruby well enough to feel the urge to do anything other than write content;
  • fast: I don’t have big design requirements, they both have plenty of themes to choose from, and as static sites they are both fast for the user;
  • offline: with both I can keep the content in git and work on it from anywhere. I am writing this on a ferry, and it will end up in a merge request I can merge whenever I am satisfied with the content;
  • no infrastructure: almost; they can be deployed to a bucket, with no maintenance on my side;
  • self-hosted: again, almost; this is in conflict with the goal above, but the idea of not having to set up a server to serve a few static files is too attractive; I can just keep the repository and the deployment pipeline on my gitlab instance and declare the self-imposed goal fulfilled.

In the end a coin flip decided between Hugo and Jekyll, and the former won.

I told you it was an unscientific process, right?

This is a very personal approach to the selection of a static site generator; my requirements are pretty opinionated and they don’t easily match other people’s.

The stack

For a static site, you can get away without any kind of stack: you can compile the files locally and transfer them by ftp or ssh to a web server configured to serve them.
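
Just for reference, the manual approach is a couple of commands; a minimal sketch, assuming a Hugo site and ssh access to a server (host and paths here are made up):

# build the site locally into ./public (Hugo's default output directory)
hugo

# copy the generated files to the web server over ssh
# (host and destination path are placeholders)
rsync -avz --delete public/ deploy@www.example.com:/var/www/website/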

But this would make for a very brief (and even less interesting) post.

Repository and workflow

I set up my first self-hosted gitlab instance in 2013 IIRC, and almost everything I have done since then has been hosted there.

It was a completely automatic choice for me to use the same infrastructure to handle everything related to this site.

And it’s not just a “dumb” git repository: I can make full use of the integrated tools, mainly the CI (more on this later) and the usual merge request workflow I use when doing development work in a team.

Why use merge requests for a personal project?

One reason is habit: I am so used to it, that it feels completely natural to me.

A more practical reason is that it keeps me focused on a single post. Thanks to the draft flag on posts I could easily keep published and work-in-progress posts together on the master branch and work on them without creating separate branches, but that would keep all the WIP posts in the same place, making it too easy to jump from one to another aimlessly, which is the opposite of what I want to achieve.

And by using Gitlab’s review apps I can still check the final appearance of the draft posts.
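
For context, the draft flag is just a field in the post’s front matter (the title and date below are made up); a plain hugo build skips drafts, while the -D flag used in the build job later on includes them:

---
title: "Publishing a new website"   # example values, not a real post
date: 2021-01-01
draft: true   # skipped by a regular build, included with `hugo -D`
---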

Editing

Having the content in a git repository, I can use whatever tool I have at hand.

A fancy IDE or a basic text editor on the laptop; even the gitlab integrated Web IDE is a capable tool for editing posts (it even works reasonably well for writing simpler code patches, if you are asking).

But I actually plan to mostly write on a tablet. Its setup is far from stable, but I guess this is a topic for a different post.

Suffice it to say, using a git repository to store posts gives me a much more flexible way of writing than any other system I have used.

And it made it much easier for me to concentrate on the content rather than the UX around it (welcome, super-late, to the static site generators party, yakky).

The pipeline

But what happens when I push something to the repository?

Enter the CI.

I use the merge request CI to render each post and check its appearance, legibility and formatting before merging.

Building

Example:

build:
  stage: build
  image: registry.gitlab.com/pages/hugo:latest
  variables:
    URL: "https://<CONTAINER-URL>/${CI_BUILD_REF_NAME}/"
  script:
    - hugo -b ${URL} -D -d ${CI_BUILD_REF_NAME}
  artifacts:
    paths:
      - ${CI_BUILD_REF_NAME}
  except:
    - master

The build job just calls Hugo to compile the sources into a directory named after the current branch, which is then saved as an artifact to be used by later jobs. Hugo requires the final URL to be defined at build time, so we provide it by combining the container URL with the current branch name, allowing multiple branches to coexist on the same container (a branch named my-new-post, for example, ends up under https://<CONTAINER-URL>/my-new-post/).

Reviewing

But the best reason to use merge requests to manage my posts is the review apps feature, which deploys the build to a temporary URL where I can check the post as a fully deployed instance; this makes proofreading and content validation much easier.

As this is just a bunch of static files, implementing review apps is all about copying the files to some bucket (or container, as I am using OpenStack based infrastructure here).

review apps:
  stage: deploy
  image: xxx/cloud-cli:latest
  variables:
    URL: "https://<CONTAINER-URL>/${CI_BUILD_REF_NAME}/"
  needs:
    - job: build
      artifacts: true
  environment:
    name: review/${CI_BUILD_REF_NAME}
    url: $DYNAMIC_ENVIRONMENT_URL
  script:
    - swift upload ${BUCKET_NAME} ${CI_BUILD_REF_NAME}
    - echo "DYNAMIC_ENVIRONMENT_URL=${URL}index.html" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env
  except:
    - master

The gist is instructing GitLab CI to pull the artifacts from the previous job and upload the files via the swift OpenStack CLI tool (which involves setting a disturbingly high number of environment variables in the CI configuration); the rest is the review apps configuration itself, including the rather weird way to pass the dynamic URL to the review apps subsystem by using a temporary file.
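
For reference, these are the kind of credentials the swift client typically reads from the environment (Keystone v3 style; the exact set depends on the cloud, and the values here are obviously placeholders), kept as masked CI/CD variables rather than in the repository:

# Typical Keystone v3 credentials read by the swift CLI (placeholder values).
export OS_AUTH_URL="https://keystone.example.com:5000/v3"
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME="ci-user"
export OS_PASSWORD="secret"
export OS_PROJECT_NAME="website"
export OS_USER_DOMAIN_NAME="Default"
export OS_PROJECT_DOMAIN_NAME="Default"
export OS_REGION_NAME="RegionOne"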

To avoid installing the OpenStack tools over and over during deployment, I prepared a custom image with everything needed to just run the upload command during the pipeline.
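
The image itself doesn’t need much; a minimal sketch of what it boils down to (an assumption, not necessarily how the actual cloud-cli image is built):

# install the swift CLI and the Keystone auth plugin it relies on
# (one common way to get the tools, not necessarily what my image does)
pip install python-swiftclient python-keystoneclient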

Final deployment

The site is currently hosted on Netlify, so the final deployment is a bit different: I didn’t want to install yet another command line tool, and it felt like overkill to add it to the image used for the OpenStack tools.

Luckily it’s quite easy to interact with the Netlify API: all that’s needed is to POST a zip file to an endpoint (https://api.netlify.com/api/v1/sites/$NETLIFY_SUBDOMAIN.netlify.com/deploys), so I just added a couple more variables to the CI and replaced the swift upload command in the review apps jobs with a call to this bash script:

#!/bin/bash

# package the generated site (Hugo's output directory)
zip -r website.zip public

# upload the archive to the Netlify deploy endpoint
curl -H "Content-Type: application/zip" \
      -H "Authorization: Bearer $NETLIFY_ACCESS_TOKEN" \
      --data-binary "@website.zip" \
      https://api.netlify.com/api/v1/sites/$NETLIFY_SUBDOMAIN.netlify.com/deploys
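
For completeness, a minimal sketch of what a master-only deploy job calling that script could look like; the job name, image, script path and the assumption of a build job that also runs on master and leaves the site in public/ are all mine, not necessarily how the real pipeline is set up:

deploy:
  stage: deploy
  image: alpine:latest
  # assumes a build job that also runs on master and exposes public/ as an artifact
  needs:
    - job: build
      artifacts: true
  before_script:
    # the script needs bash, zip and curl
    - apk add --no-cache bash zip curl
  script:
    - bash netlify-deploy.sh
  only:
    - master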

My experience so far

This is my second attempt at having a place to post longer content (the first one being something long lost, and missed by nobody). I tried to remove every distraction factor I found in the past, as I am more used to writing code than words.

On the technical side I am curious to see how well my unscientific approach to selecting the site builder will fare, but I am relaxed on this point: moving to a different platform would be quite easy, as the basic content format is the same for all of the generators I tested and only minor changes would be required, so any roadblock will not impair my content writing.