Let's do something real this time. I started this blog fairly quickly while learning about Kubernetes, containers and their related technologies.

In this article, we will take a look at how it is done, the architecture behind it, and the benefits of using Kubernetes to manage it all. But before that, let's see what I actually want to achieve.

What do I need?

I want to start a blog, hence I need blogging software. It has to be simple yet convenient, cheap to set up, and mobile responsive.

I also want my audience to be able to leave comments. On the other hand, I am aware of all the spamming that can happen in a comments section, so I have to find commenting software that can prevent it. Something that requires authentication and allows moderation will be good. I also want to know who has commented on my articles so that I can respond to them.

All of this needs to be stable enough that I don't have to worry about it.

What have I chosen?

After searching carefully for a couple of days, I have decided to use the following:

  • Ghost - A very popular open-source blogging platform that is simple to set up, flexible to customise, has tons of integration options (including Slack to notify you when a post is published) and supports multiple authors. It also provides a hosted service, but since I am learning, I have decided to host it myself on a Kubernetes cluster.
  • Schnack - A commenting platform that was highly recommended in various places, like here and here, back in 2020. It supports (and requires) login via Twitter, Google, Mastodon, Facebook and Github, and has notification plugins that support Slack. Setting up the OAuth logins is admittedly painful and requires verification, but it is worthwhile because it helps reduce spam a lot!

Ghost provides an official Docker image, while I have built a Docker image for Schnack myself. There is no need to download either image manually; when you apply the Kubernetes YAML, they will be pulled automatically.

What role does Kubernetes play?

Orchestration - making sure that Ghost and Schnack are:

  • brought up properly,
  • periodically checked to ensure they are up and running, and
  • provided with proper persistent storage for all their necessary contents.

Security - making sure, at the application level, that the software stack:

  • is secured through an SSL-enabled ingress, and
  • has its certificates renewed regularly.

Prerequisites and assumptions

  1. A Kubernetes cluster - you can refer to my articles (part 1, part 2, part 3) for a properly set up Kubernetes cluster.
  2. cert-manager and ingress-nginx already set up in the cluster. I have written another guide for this.
  3. The GlusterFS client installed on all nodes that you want Ghost and Schnack to run on.
  4. Your own domain registered, with 2 DNS entries (either A or CNAME records, it doesn't matter). In the following example, we will use example.com and comments.example.com to host Ghost and Schnack respectively. A quick way to verify the DNS entries is shown right after this list.
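
Before going further, it is worth confirming that both DNS names resolve to the address MetalLB announces for ingress-nginx. A quick check with dig (the hostnames are the placeholders used throughout this article; substitute your own):

# Both names should resolve to the load-balancer IP announced by MetalLB
dig +short example.com
dig +short comments.example.com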

Architecture

With all preparations done, let's take a look at the architecture.

This is a very simple and typical website deployment in Kubernetes, which consists of the following components:

  1. MetalLB, running in L2 mode, handles load-balancing towards ingress-nginx.
  2. ingress-nginx is the reverse proxy and SSL termination point.
  3. cert-manager periodically renews the SSL certificates for ingress-nginx and uses ingress-nginx to perform the Let's Encrypt domain validation.
  4. Ghost and Schnack run as two more pods.
  5. The GlusterFS service glues Ghost and Schnack to the GlusterFS distributed storage, which runs on bare metal.

Since Ghost behaves like a static website, ingress-nginx can load-balance and cache its content very efficiently as long as it is configured properly.
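
As a side note, one lightweight way to lean on that is to let clients cache the mostly-static responses. The annotations below are not part of my setup - they are just a sketch of what could be added to the Ghost Ingress shown later, assuming your ingress-nginx installation allows snippet annotations:

# Hypothetical additions to the Ghost Ingress annotations - adjust to taste
nginx.ingress.kubernetes.io/proxy-buffering: "on"
nginx.ingress.kubernetes.io/configuration-snippet: |
  # let browsers cache responses for a short while
  expires 10m;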

We will discuss how to expand this architecture into an enterprise-level system in another article (which will come).

Let's see how the setup is done, one step at a time. All examples can be found in my Github repository.

GlusterFS service setup

In order for the external GlusterFS service to work properly in Kubernetes, you will need to set up an Endpoints object (and a matching Service) for GlusterFS, as illustrated in the manifest below:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
# The IP addresses must be those of the GlusterFS cluster nodes
subsets:
- addresses:
  - ip: 192.168.xxx.yyy 
  ports:
  - port: 1
- addresses:
  - ip: 192.168.xxx.zzz
  ports:
  - port: 1
- addresses:
  - ip: 192.168.xxx.aaa
  ports:
  - port: 1
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster  # Must be the same as the name of the Endpoints
spec:
  ports:
  - port: 1

Normally you will have multiple nodes in a GlusterFS cluster, so put all of their IP addresses into the above YAML and apply it.
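
Once applied, check that the Endpoints and Service objects exist and carry the addresses you expect. Assuming you saved the manifest as glusterfs-endpoints.yaml (the filename is arbitrary):

kubectl apply -f glusterfs-endpoints.yaml
kubectl get endpoints glusterfs-cluster   # should list all GlusterFS node IPs
kubectl get service glusterfs-cluster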

Ghost setup

Storage for Ghost

We will configure Ghost to use GlusterFS through the following PersistentVolume and PersistentVolumeClaim, defined in the same YAML file to simplify things a bit:

# This defines the PV and PVC together, to keep the example simple
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gl-ghost
spec:
  capacity:
    storage: 1G
  accessModes:
    - ReadWriteMany
# The following example uses GlusterFS; you can also use NFS with a different
# configuration
  glusterfs:
    endpoints: glusterfs-cluster
    path: "/ghost"
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gl-ghost
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: pv-gl-ghost
  resources:
    requests:
      storage: 1G
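
Since the PVC is statically bound to the PV via volumeName, both should show up as Bound almost immediately after applying. A quick check (the filename is an assumption):

kubectl apply -f ghost-storage.yaml
kubectl get pv pv-gl-ghost      # STATUS should be Bound
kubectl get pvc pvc-gl-ghost    # STATUS should be Bound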

Ghost Deployment and Ghost Service

The Ghost Deployment and Service can be created with the following YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      securityContext:
# The UID and GID do not need to exist on the relevant nodes / pods. You just need
# to ensure that the UID/GID has permission to write to the persistent volume
        runAsUser: <uidOfGhostUser>
        runAsGroup: <gidOfGhostUser>
# In this example, we use a GlusterFS volume to provide 3-way redundancy for
# the storage of the contents.
# The volume name (data, below) must match the name referenced in volumeMounts further down.
# The claim name must be the same as the PVC defined in the previous example.
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-gl-ghost
# Pull the Ghost docker image tagged latest, and always pull a fresh copy so that
# we can upgrade easily. If you change the policy to "IfNotPresent" it will only
# pull when the image does not already exist on that node, which makes restarts
# faster.
      containers:
      - name: ghost
        image: ghost:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 2368
# The URL provided to the Ghost configuration.
# Note the https - this ensures that Ghost will redirect requests whose URL
# is not https.
        env:
        - name: url
          value: https://www.example.com
# The Ghost docker image only requires 1 persistent volume, which should
# be mounted at the /var/lib/ghost/content path inside the image.
        volumeMounts:
        - mountPath: "/var/lib/ghost/content"
          name: data
# The following probes define how the control plane decides that the container has
# started, is alive and is ready to serve pages
        livenessProbe:
          failureThreshold: 18
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 2368
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 1
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 2368
          timeoutSeconds: 1
        startupProbe:
          failureThreshold: 1
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 2368
          timeoutSeconds: 1
Ghost deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-service
spec:
  selector:
    app: ghost
  ports:
  - protocol: TCP
    port: 2368
    targetPort: 2368
  type: ClusterIP
Ghost service

In the above Ghost Deployment, we point Ghost's data storage at the PersistentVolumeClaim pvc-gl-ghost, set it to listen on port 2368 and set up probes to monitor service availability. I am using the official Ghost image and always pull the latest image whenever the deployment is restarted. By doing so, I can upgrade to the latest version of Ghost rather easily.

The volume that is mounted into the Ghost container needs to be writable by the user specified in the securityContext (under the spec section). One way to prepare the permissions is sketched below.
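
This is a minimal sketch: mount the GlusterFS volume on any host that has the GlusterFS client installed and change the ownership to the UID/GID you chose for the securityContext (glusterNode0 is the same placeholder hostname used later in this article; the UID/GID placeholders match the Deployment above):

sudo mount -t glusterfs glusterNode0:/ghost /mnt
sudo chown -R <uidOfGhostUser>:<gidOfGhostUser> /mnt
sudo umount /mnt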

Ingress for Ghost

In this example, we will host Ghost at https://example.com. We can use the following Ingress manifest:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-com
  annotations:
    kubernetes.io/ingress.class: "nginx"    
# The following lines ensure that nginx performs the redirections correctly
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/server-alias: www.example.com
# This line ensures that we use the correct certificate issuer
    cert-manager.io/issuer: "letsencrypt-prod"

spec:
  tls:
  - hosts:
    - example.com
    - www.example.com
    secretName: example-com-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: ghost-service
          servicePort: 2368
Ingress for ghost

Once the above is applied, ingress-nginx will be set up with the right redirections, and cert-manager will automatically initiate a certificate request to Let's Encrypt. You can refer to my article to check whether your certificate is issued and renewed accordingly.
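
For a quick look without leaving the terminal, cert-manager exposes the request as a Certificate resource (with ingress-shim it is normally named after the TLS secret, so the names below are assumptions based on this example):

kubectl get certificate example-com-tls        # READY should become True
kubectl describe certificate example-com-tls   # shows events if issuance gets stuck
kubectl get secret example-com-tls             # the issued certificate ends up here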

Verifying Ghost

If you have set everything up correctly, visit the URL you defined and you will see the default Ghost template website. You can start adding authors and posting articles.
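
You can also do a quick check from the command line before opening a browser (the domain is the placeholder used throughout this example):

curl -I http://example.com    # should answer with a redirect to https
curl -I https://example.com   # should answer 200 once Ghost is up and the certificate is issued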

Schnack Setup

Similar to Ghost, we need to set up Schnack. The image I will be using is one that I built myself, as I noticed there was no suitable pre-built image for the software. Luckily, Schnack provides a handy Dockerfile that I could modify and build upon.

I have forked Schnack into my own Github repository so that my changes to the Dockerfile are incorporated. I would also like to work with the author to upstream those changes, but they seem to have been very busy.

Anyway, the Docker image is on Docker Hub, so you can take a look at its documentation yourself. I will focus on the Kubernetes deployment and ingress setup in this article.

Persistent Storage for Schnack

My Docker image requires 2 persistent volumes: one for the configuration and the other for the comments database. The relevant manifests are shown below:

# The Schnack docker image that I've built requires 2 persistent
# volume mounts: one to store the data and the other to store
# the configuration. The user ID that runs Schnack must have
# read-write access to the data and read access to the config.
# This example defines the config volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-schnack-config
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadOnlyMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: /schnack/config
    readOnly: true
  persistentVolumeReclaimPolicy: Retain
#  nfs:
#    server: node.example.com
#    path: "/exports/comments/config"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-schnack-config
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""
  volumeName: pv-schnack-config
  resources:
    requests:
      storage: 10M
Config mount point PV / PVC manifest. Note that I mount it as read-only.

# The Schnack docker image that I've built requires 2 persistent
# volume mounts: one to store the data and the other to store
# the configuration. The user ID that runs Schnack must have
# read-write access to the data and read access to the config.
# This example defines the data volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-schnack-data
spec:
  capacity:
    storage: 1G
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: /schnack/data
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
#  nfs:       -- NFS example, assuming you already have an NFS export
#    server: node.example.com
#    path: "/exports/comments/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-schnack-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: pv-schnack-data
  resources:
    requests:
      storage: 1G
Mount point for data and the PV / PVC definition

Again, I use GlusterFS to define these volumes.

In the config volume, you will need 2 configuration files:

  • schnack.json - This file stores the main configuration of the software. You can visit my Github repository for an example.
  • plugins - This file contains a list of npm install commands that install the plugins for authentication and notification. If you use the allplugins tag when you pull the docker image, it already contains all supported plugins and this file can be left blank. A sketch of how to place these files on the config volume follows this list.
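
This is one possible way to put the two files in place, assuming (based on the PV paths above) a GlusterFS volume named schnack with config and data subdirectories, and the UID/GID 12412 used in the Deployment below:

sudo mount -t glusterfs glusterNode0:/schnack /mnt
sudo cp schnack.json /mnt/config/
sudo touch /mnt/config/plugins                  # can stay empty with the allplugins tag
sudo chown -R 12412:12412 /mnt/config /mnt/data
sudo umount /mnt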

Deployment, Services and Ingress for Schnack

The relevant YAML is copied here and can also be found in my Github repository. I have put in relevant comments to make things easier. You can download the files and modify them accordingly via git.

# Deployment of the schnack image using
# https://hub.docker.com/r/jasworks/schnack
# There are 2 main tags - 
# latest and
# allplugins
apiVersion: apps/v1
kind: Deployment
metadata:
  name: schnack
  labels:
    app: schnack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: schnack
  template:
    metadata:
      labels:
        app: schnack
    spec:
# The schnack instance will run as this user. The user ID does not need to
# exist on the node; you just need to make sure it has the right permissions
# on the mounted volumes.
      securityContext:
        runAsUser: 12412
        runAsGroup: 12412
      volumes:
      - name: pv-schnack-config
        persistentVolumeClaim:
          claimName: pvc-schnack-config
      - name: pv-schnack-data
        persistentVolumeClaim:
          claimName: pvc-schnack-data
      containers:
      - name: schnack
        image: jasworks/schnack:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
        env:
# The same user ID as above also needs to be set here.
        - name: PUID
          value: "12412"
        - name: NODE_ENV
          value: development
        volumeMounts:
        - mountPath: "/usr/src/app/config"
          name: pv-schnack-config
        - mountPath: "/usr/src/app/data"
          name: pv-schnack-data
        startupProbe:
          httpGet:
            path: /
            port: 3000
          failureThreshold: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 3000
          failureThreshold: 1 
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          failureThreshold: 1
          periodSeconds: 10
# This specifies where the container should run (on which node / nodes)
#      nodeSelector:
#        usage: server
Deployment YAML
apiVersion: v1
kind: Service
metadata:
  name: schnack-service
spec:
  selector:
    app: schnack
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
  type: ClusterIP
Service YAML
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: comments-example-com
  annotations:
    kubernetes.io/ingress.class: "nginx"    
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    cert-manager.io/issuer: "letsencrypt-prod"

spec:
  tls:
  - hosts:
    - comments.example.com
    secretName: comments-example-com-tls
  rules:
  - host: comments.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: schnack-service
          servicePort: 3000
Ingress YAML

Verify the installation

Once you have modified and applied all the manifest files, cert-manager will initiate another certificate request. Again, you can verify it if needed. You can also visit the comments website URL to see if it works, as shown below:

After Schnack is set up properly, you will see this on the root of the web server.
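
You can also check it from the command line (embed.js is the script that the Ghost template will load in the next section):

curl -I https://comments.example.com            # should answer 200
curl -I https://comments.example.com/embed.js   # the embed script served to your blog pages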

Making Ghost and Schnack work together

Finally, you have your blog set up and a supplementary commenting service ready. Now all you need to do is make them work together.

In Ghost, this is done by modifying the theme templates. In my example, I want all blog posts to have a comment section beneath them, so I modified post.hbs in the themes directory of the Ghost persistent storage. First, mount the volume:

sudo mount -t glusterfs glusterNode0:/ghost /mnt

Then, the file will be at /mnt/themes/<themename>/post.hbs. The modifications look like this:

<section class="post-full-comments">
  <div class="comments-go-here"></div>
  <link rel="stylesheet" type="text/css" href="{{asset 'css/schnack-icons.css'}}" />
  <div class="comments-heading">
    <h2>Leave your comments</h2>
  </div>
  <div class="comments-schnack"> <!-- This must match with below -->
   	<script src="https://comments.jasworks.org/embed.js"
      data-schnack-slug="{{slug}}"
      data-schnack-target=".comments-schnack" <!-- This must match with above -->
      data-schnack-partial-sign-in-via="To make a comment, sign in via"
      data-schnack-partial-login-status="Your comments will be posted as @%USER% (<a class='schnack-signout' href='#'>Sign out</a>)"
      data-schnack-partial-or=" or "
      data-schnack-partial-send-comment="Send"
      data-schnack-partial-mute="Mute"
      data-schnack-partial-unmute="Unmute"
      data-schanck-partial-admin-approval="Pending your approval"
      data-schnack-partial-waiting-for-approval="Pending approval">
    </script>
  </div>
</section>
The place where you can insert the Schnack script into your Ghost template.

Once you are done with the modification, don't forget to unmount the folder (/mnt) and restart the Ghost deployment:

kubectl rollout restart deployment ghost
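
You can watch the restart complete with:

kubectl rollout status deployment ghost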

Verifying the integration

You can simply access your website, create a post and publish it. After publishing, you will see a section below each post that requires you to log in before posting comments.

Comment section example

You can tweak the CSS so that it fits your theme.

Wrapping it up

That's all! Once you have everything set up, you can start blogging right away. On my website, I have enabled Twitter and Github login for the commenting section. These are the 2 simplest OAuth setups that you can get working within a short period of time, and their user coverage is already quite wide. Given enough time, you can also enable Google OAuth to let your comments reach a wider audience. Feel free to leave a comment if you have any questions!
