For longer than I’d like to admit, I’ve been trying to study for the Google Cloud Engineer certification. But I’m not getting very far. I work almost exclusively with AWS in my day job, and I find it hard to learn something without having a real project to work on or a real problem to solve. And the free trial is now only 90 days and $300 of credit - rather than the free year Amazon offers… which partially explains why this blog is hosted on AWS.
A quick win?
But it turns out there’s a handy tool that I’m going to try to use as a gateway drug into Google Cloud Platform (GCP) - and that is Google’s Cloud Shell.
Cloud Shell? Basically it’s a Linux VM with a stack of developer tools that you can “ssh” into from within a browser. You get 5 GB of persistent storage and can do all sorts of things with it. There’s also a Cloud Shell editor - which I’m using to write this post right now…
History
For whatever reason, I’ve never been a fan of package managers on macOS. Y’know, tools like Fink, MacPorts and Homebrew. I’d rather stick to vanilla macOS, thanks, than faff about with a package manager that may cause problems - plus when I find myself rebuilding a test system, none of those tools are going to be available… and if you wanted a Linux system, why not just use an actual Linux system - or a virtual machine, or something?
I also didn’t want to have to choose one - nor spend time learning how to use it. Perhaps I’m lucky that I’ve never hit a real blocker in my day-to-day work that meant I needed a package installed by a third-party package manager.
And, I dunno - perhaps by shunning package managers and their compile-everything-from-scratch ways I’m just advertising that I’m an idiot who doesn’t really understand the power of macOS and its Unix-y heart… but for a while now, as an alternative to macOS package managers, I’ve tried to use Docker to run any of my ad-hoc Unix-y dependencies (which means running them natively in a Linux container, for the most part). But actually, who am I kidding - my primary use for Docker is… er… this blog. This blog is built using Jekyll, and I’d sorted out a containerised environment so I could preview posts on my local machine before releasing them to the internet. Although as typos and mistakes still make it through anyway, I do wonder why I bother…
Another benefit of the containerised environment is that I could use it on multiple Macs, and even my trusty old Linux-powered ThinkPad - and it would just work.
And then Docker threw a bit of a spanner into the works - their new licensing model means I can no longer run Docker on my work laptop. Which is fine - it’s a personal blog, I should be writing it on a personal device anyway - but it was nice to have the option to update things from my work laptop too. Although, looking at how often I actually post things… that might be because it’s a pain to have to use a machine with access to all the components required. I keep my content in GitLab, and I still manage to tie myself in knots keeping the content on just two or three machines in sync…
Anyway - I’d been idly wondering about trying to get my blog environment working in Google Cloud Shell, as this moves all the Docker components into the cloud - and has the added benefit that I could write blog posts on an iPad too, right?
Just clone and go?
I figured all I’d need to do was clone my blog code into Cloud Shell, run docker-compose up, and I’d be able to use the Google Cloud Shell Web Preview to view my blog works-in-progress… and of course it didn’t work!
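For the record, the naive plan was just this (the repository URL is a placeholder - my blog actually lives in GitLab):

git clone https://gitlab.com/example/blog.git   # placeholder URL
cd blog
docker-compose up   # spin up the containerised Jekyll environment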
Spanner in the works…
The logs complained I was missing a component - a Ruby gem called webrick (it turns out WEBrick stopped being bundled with Ruby as of Ruby 3.0, so it now has to be declared explicitly) - and half-assed attempts at installing it as part of running the container didn’t work. Disheartening… but eventually I found this blog post with some helpful pointers. Effectively I took a step back and built a container based on Ruby (Jekyll is written in Ruby), then installed the jekyll and webrick components by adding them to a Gemfile - and including the Gemfile.lock as part of the associated Docker container. I’m typing this like I know what I’m doing… but what can I say, I did get things working.
The tl;dr appears to be that a Gemfile defines the additional Ruby packages you want at a high level. Actually installing those packages (or gems) generates a lockfile - which lists all of the dependencies required by the two packages you’ve defined, and pins a required version for each. As far as I can see, my Gemfile.lock is broadly equivalent to a requirements.txt file in Python. So in that example - he builds the container once, which generates a lock file - then he can add the contents of that lockfile to the Docker container, pulling in all the required components at container build time.
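For the curious, the Gemfile itself is tiny - something along these lines (a sketch, not gospel - I haven’t pinned any versions here):

source "https://rubygems.org"

gem "jekyll"   # the static site generator itself
gem "webrick"  # not bundled with Ruby 3.x, so it has to be declared

Running bundle install against that is what generates the Gemfile.lock, with everything pinned.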
Now that we’ve generated our Gemfile.lock - our dockerfile looks like this:
# Ruby base image - Jekyll is a Ruby gem
FROM ruby:3.0
WORKDIR /srv/jekyll
# copy the dependency definitions in, then install them at image build time
COPY Gemfile Gemfile.lock ./
RUN bundle install
# the site content itself gets mounted in at runtime
VOLUME /srv/jekyll
And we’re done?
Not quite, but we’re close. The previous setup generates a web preview running on localhost… but I don’t have access to localhost on a Cloud Shell instance - the web preview is right there on the internet. If you’re developing locally but want to view content on another machine on your (trusted home) network, you can configure the Jekyll server to listen on 0.0.0.0 - either by passing --host 0.0.0.0 as a runtime option, so jekyll serve --watch --host 0.0.0.0, or by making the same change in Jekyll’s _config.yml file. Which I’ve apparently managed to do before, but have no memory of doing so. Which I guess is a reason to blog about things.
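In case it helps anyone else, the relevant _config.yml lines look something like this (a sketch - the Jekyll docs cover the full set of serving options):

# _config.yml - make the preview server listen on all interfaces
host: 0.0.0.0
port: 4000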
(Note - I’ve not spent long enough learning what Jekyll can do - really I just wanted to get a blog up on the internet, so I chose a template and that was that… and here we are, a blog just about hanging on in there.)
Putting it all together
Turns out the Docker container I’d built was ephemeral too :( Having got things working, I came back to my Cloud Shell feeling smug about it all, ran docker-compose up - and discovered my Docker container was gone. I mean, it’s not hard to bring it back again - it’s just a docker build -t blog . away (I guess) - but that’s no fun.
And actually - I’d solved this problem before, as it’s perfectly possible to add a dockerfile and a build instruction to Docker Compose. So - if the container already exists, great! And if it doesn’t, Compose will rebuild it for me based on the dockerfile I’ve created. Win.
version: '2'
services:
  jekyll-blog:
    # build the image from the dockerfile above if it doesn't already exist
    build:
      context: .
      dockerfile: dockerfile
    command: jekyll serve --watch
    ports:
      - 4000:4000
    # mount the blog source into the container so changes are picked up live
    volumes:
      - .:/srv/jekyll
So - this builds a jekyll container (+ dependencies) on demand - and then fires up the test server - which I can then use via Google Cloud Shell Web Preview to check out a pre-production version of my site.
This is a) nerdy - but b) actually kind of cool - as it means I have a whole blog environment happily running in the cloud. But it doesn’t prevent me from doing local offline development too.
Interestingly - I’m hitting an error each time I do this, unless I remove a Docker config file that ends up in ~/.docker/config.json. I don’t really understand why… but I’ve just scripted around it - with a shell script that removes that file and runs docker-compose up, which I can run manually to fire up my web preview when I connect to Cloud Shell.
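The script is nothing fancy - something like this (blog-preview.sh and the checkout path are just my choices, adjust to taste):

#!/bin/sh
# blog-preview.sh - work around the mystery ~/.docker/config.json problem, then start the preview
rm -f ~/.docker/config.json   # the file that breaks the build - cause still unknown!
cd ~/blog                     # assumed location of the blog checkout
docker-compose up             # rebuild the image if needed and start the Jekyll server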
Smart…
I am very excited to have a relatively easy way to work on blog posts from almost any device. But it’s also been a big distraction from the Google Cloud Engineer course… So I should probably get this published, and then go back to my studies!