
Introducing my Homelab and services running within


Introduction

It’s been quite a while since I started homelabbing in earnest. I have already posted about Monitoring My HomeLab With Prometheus and Grafana, but I haven’t posted any video on my channel yet.

So today I want to take some time to talk about what I’m already up to. For this, I’m going to divide this post into the following sections:

  1. How I got started and the machines I had
  2. What services I have tried out and what I have running
  3. Future plans

How it all started

This actually started at work when I was talking to a senior colleague about chroot. The topic drifted, and soon we were talking about the Raspberry Pi. After that conversation, I felt compelled to pull my Raspberry Pi back out of the closet, where it had been sitting for the last couple of months.

The Pi was in the closet in the first place because it wasn’t really getting used after some initial experimentation. I played with some sensors and other basic stuff before putting it away. I had bought it for IoT, and even though it draws very little power, keeping it shut down seemed like the better option since I wasn’t really using it.

Around the same time, I was drawn towards r/homelab and r/selfhosted on Reddit. The concept got me excited; it was exactly what I had been looking for: an enterprise-like test environment with multiple servers and services running on them. And I don’t have to pay for it, because I’m not running it in the cloud, which I used to do previously, where my experiments were bound by the limits of my finances.

Another factor is that my data won’t live on any other entity’s machine, and will not be prone to auditing, theft, et cetera.

With the Raspberry Pi already back in service, I was also building a PC. With the new PC as my primary workstation, I could turn my existing laptop into a node of my cluster, giving me two devices that can run 24×7.

So in total, I have 3 machines, two of which act as servers:

  1. Raspberry Pi 4B
  2. HP Gaming Laptop (1060 GPU)
  3. Custom PC (intermediate level)

Setting up the cluster

At first, I didn’t have any concept of a cluster; it was just my Raspberry Pi. The first service I ever hosted at the start of my homelab journey was Planka. Sadly, Planka has been replaced by Vikunja in my current setup. Both are alternatives to Trello, and I use them as a Kanban board. It’s a good way to track pending work and when I plan to do it.

Vikunja uses Postgres for its data layer. I made a decision at the start of my journey to keep one central database instance shared across my services. It helps make backups simpler. I might be wrong, but that’s what I did. What is your opinion on this one?
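One thing a single central instance makes easy is a one-shot backup of everything. As a minimal sketch of how that could look (the container name filter, database user, and backup path are placeholders; adjust them to your own stack):

# dump every database in the central instance into one dated file
docker exec $(docker ps -q -f name=postgres_master) \
  pg_dumpall -U example > /mnt/backups/postgres-$(date +%F).sql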

I was using Docker Compose for both of the services I mentioned above. But when I learned about this new thing, it changed the way I was thinking. This thing was Docker Swarm. If you haven’t heard about Docker Swarm, but still know about Kubernetes, they both are somewhat similar in what they do. Kubernetes provides more flexibility and thus is best suited for enterprise environments. But Docker Swarm is very simple to set up. And in my opinion enough for a homelab environment. I know home labs are all about learning, and I would definitely set up a Kubernetes cluster in the future. But at the time of writing this, I’m happy with Swarm.

But how did Docker Swarm change the way I was thinking?

  1. It gives the ability to add more nodes to the cluster. So if I add my laptop to the cluster, I can run two instances of a service, one on the Raspberry Pi and the other on the laptop, to achieve resiliency. It’s a good learning ground for load balancing and reverse proxying.
  2. No matter which node a service is running on, every node exposes its published ports (thanks to the swarm’s ingress routing mesh), as long as the ports are exposed from the container and mapped to the host. This made the reverse proxy easier: I can point everything at the manager node no matter which host is actually running the service. I’ll talk about the reverse proxy in later sections.
  3. I can pin services to a specific node. Use case? I like my Postgres to stay on one node, so all its data stays on one node of the cluster. Of course, I’m going to set up some form of distributed storage in the cluster eventually, but that’s for a later post.
  4. Docker Swarm can be coupled with a UI such as Portainer. Portainer is like a missing component for Docker. It basically shows every aspect of Docker in a UI. You can see containers, volumes, secrets, and images all in one place. Overall it provides better visibility into the Docker system. There are other projects that do the same; examples would be Swarmpit and Docker Desktop, though I have not used either of them very extensively.

One of the best things about Docker Swarm for someone starting their homelab journey is that a normal docker-compose.yaml file just works out of the box. To add more control, you can add swarm-specific properties to the compose file, which I will talk about in the next sections. Deploying such a file into the swarm looks like the sketch below.
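A minimal sketch of deploying a compose file as a swarm stack (the stack name mystack is just a placeholder):

docker stack deploy -c docker-compose.yaml mystack
docker stack services mystack   # services and replica counts for this stack
docker service ls               # or list services across all stacks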

How to set up the Docker Swarm Cluster?

Well, once again, it is best to consult the Swarm documentation, but the setup is very basic. There are 2 kinds of nodes: manager and worker. A manager also works as a worker by default, meaning it will schedule deployments on itself. Before going ahead, make sure that all the hosts you plan to run inside the swarm cluster already have Docker installed. For this, I would highly recommend the official Docker installation rather than the distro package managers; e.g. the docker.io package on Ubuntu is way behind in terms of version.
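One quick way to get the up-to-date official packages is Docker’s convenience script. This is only a sketch for a Debian-style host, and you should review the script before running it:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh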

Finally, on the manager node, you’re going to run this command:

docker swarm init --advertise-addr <MANAGER-IP>

You will have to use the actual IP of the manager node right there. If you have the ip command available on your system, you can use the following oneliner to get your internal IP:

ip route get 1 | awk '{print $(NF-2);exit}'

After you have run that, you’ll be able to see output from the command docker node ls (this command only works on manager nodes). The output could look something like this:

$ docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
lkm62ip2tdiz7dpx8s3qoqfst *   pioneer    Ready     Active         Leader           27.0.3

On this manager node, you’re going to run a command that will print out another command to run on the worker nodes:

docker swarm join-token worker

Run the generated command on the worker node and you are done, unless you have a firewall enabled on either of the machines. In that case, you need to find out which firewall you are using and allow the ports used by Swarm; you can find them in the Docker Swarm setup documentation. An example for ufw is shown below.
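As an example, assuming ufw as the firewall, opening the swarm ports documented by Docker would look roughly like this:

sudo ufw allow 2377/tcp   # cluster management traffic
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic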

You can make as many nodes managers as you like. In fact, you can have only manager nodes, or promote an existing worker to manager. The community suggests having an odd number of manager nodes, which gives better high availability and consensus for scheduling.
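Promotion and demotion are one-liners run from an existing manager; using my laptop’s hostname as the example:

docker node promote voyager   # make voyager a manager
docker node demote voyager    # turn it back into a worker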

Services outside Swarm

We have set up our orchestration with docker, now let’s see what I have in my arsenal.

Most of my services are inside the docker swarm. But there are a few things I wanted to keep out of it. It’s not that they can’t be dockerized. It was just my initial setup and I never bothered modifying it.

One of the things that motivates me to keep these services out of the Docker subsystem is… well, that they would then depend on Docker. These services are more primitive, and other hosts depend on them.

Maybe someday something will motivate me to dockerize it. But for now, it’s the way it is.

Pi-hole - ad blocking and custom domain

Pi-hole is what I consider one of the hero services of my lab. I learned about Pi-hole as a network-wide ad blocker. Before this, I used OpenVPN-based ad blocking. But setting up an EC2 instance just for the sake of OpenVPN is not a very smart choice for me right now, and that solution was more of a VPN service than an ad blocker. What I was looking for was a solution that works across my whole home network. And here it was: Pi-hole.

As the name hinted, I initially thought Pi-hole was a Raspberry Pi thing. However, after installation, I learned that it can be installed on any machine. Just that it needs to be running 24x7; and for me, my Raspi was the best candidate for this.

Pi-hole uses DNS-based blocking: when a page requests an advertisement site, the DNS lookup for that site is handled by Pi-hole and blocked if the domain is on a blocklist. For this entire setup to work, you need to set the custom DNS server in the DHCP settings of your router to the host where you installed Pi-hole. You most probably know your router better than I do, so I leave that part to you.

There are various adlists provided by various vendors, and I would encourage you to try at least a few of them, because no single adlist blocks every site. On top of that, you can add or remove custom domains as well. This is handy when you want to check whether DNS-based blocking is actually working. What I mean is, I have a site that I don’t visit much and have set as blocked. When I want to check the adblock status, I just go to that website; if I can access it, the blocking isn’t actually working. You can also check from the command line, as shown below.
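A quick sketch of that check with dig, pointed at the Pi-hole host (the blocked domain here is just a placeholder):

dig @192.168.10.10 ads.example.com +short
# a blocked domain typically comes back as 0.0.0.0 (or empty),
# while an allowed domain returns its real IP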

But ad blocking is not even the most interesting part of Pi-hole for me. Pi-hole provides a few more features, such as custom local DNS records and DHCP. I have never used the latter, but I have the entries below, which map domain names to certain IPs/hosts in my network.

192.168.10.10 pioneer.santoshk.dev
192.168.10.35 voyager.santoshk.dev
192.168.10.30 titan.santoshk.dev
192.168.10.10 pihole.santoshk.dev
192.168.10.10 vikunja.santoshk.dev
192.168.10.10 portainer.santoshk.dev
192.168.10.10 vaultwarden.santoshk.dev

The first 3 entries are my different hosts. pioneer, voyager, and titan are my raspi, laptop, and desktop respectively. The rest of them will make more sense in the next section where we talk about reverse proxy.

For now, know that they all relate to the host pioneer. This custom domain feature is my favorite thing in Pi-hole.

But creating entries here is just one part. It means that when someone on the local network accesses that domain, they will be pointed to the given IP. The host itself, however, is not yet configured to handle the request. And that is what we are going to see in the next section.

Nginx and certbot - reverse proxy and HTTPS on local domain

You might wonder why I have nginx outside my swarm when something like Nginx Proxy Manager exists. Nginx Proxy Manager is very easy to use and pretty automatic, but when I tried it, I found that only services running inside Docker containers could easily be used with custom domains. That is not very useful when you have services such as Pi-hole, and (in the next section) OpenMediaVault, outside of containers. I might be wrong here, and let me know if I am mistaken, because I didn’t go really in-depth.

With nginx on the pioneer host itself, I can even reverse proxy requests to other hosts on my network. That is the reason I chose to run it on the host OS.

Going back to the previous section, if you look at the service entries at the bottom of the list, you’ll see that those domains are all tied to one single host, 192.168.10.10. This is because pioneer is my reverse proxy.

My /etc/nginx/nginx.conf is pretty stock, and it sources other configuration files in a modular manner. Here is a chunk of code from the http block:

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

By convention, you keep your domain-specific configuration in the /etc/nginx/sites-available/ directory and then create a symbolic link to it from /etc/nginx/sites-enabled/. I have a file called homelab.santoshk.dev.conf inside sites-available and a symbolic link to it in sites-enabled, as shown below.
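On my setup that roughly looks like this (always test the config before reloading):

sudo ln -s /etc/nginx/sites-available/homelab.santoshk.dev.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx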

Here are some selected chunks of my homelab site configuration:

Portainer:

server {
  listen 443 ssl;
  server_name portainer.santoshk.dev;

  ssl_certificate /etc/letsencrypt/live/santoshk.dev/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/santoshk.dev/privkey.pem;

  location / {
    proxy_pass https://192.168.10.10:9443;
  }
}

This is one of the simplest forms of nginx config I have on my reverse proxy instance. And it goes like this:

You have a service running at https://192.168.10.10:9443. You want to use a custom subdomain provisioned by Pi-hole. And you want to use HTTPS to access it.

  1. The listen line says we are using TLS (although the directive is conventionally written ssl) and listening on port 443 only (this means you need to access the site with https://).
  2. The server_name line holds the subdomain you want to use. Requests coming in for different subdomains go to different server blocks.
  3. The next two ssl_certificate* lines tell nginx which certificate and key to use to enable HTTPS on the site. We’ll see in the Certbot subsection how to obtain the certificates; for now, keep these as placeholders if you don’t have anything at that path.
  4. The location / { line starts a new block. This line says that when a request arrives at / (basically portainer.santoshk.dev/), do the following.
  5. In the proxy_pass line, I pass in the IP:PORT combo where the service is exposed in the network.

The best thing about having nginx outside the docker subsystem is that I can proxy traffic to services that are not limited to docker. I can have a VM running on one of my nodes, and I can proxy the traffic to it. My next service is one such service that is out of any docker subsystem.

Pi-hole:

server {
  listen 443 ssl;
  server_name pihole.santoshk.dev;

  ssl_certificate /etc/letsencrypt/live/santoshk.dev/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/santoshk.dev/privkey.pem;
    
  root /var/www/html;
  autoindex off;

  index pihole/index.php index.php index.html index.htm;

  location / {
    expires max;
    try_files $uri $uri/ =404;
  }

  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    fastcgi_param FQDN true;
  }

  location /*.js {
    index pihole/index.js;
  }

  location /admin {
    root /var/www/html;
    index index.php index.html index.htm;
  }

  location ~ /\.ht {
    deny all;
  }
}

In this configuration, the first few lines are the same. Then it diverges into some PHP-specific configuration that I’m not very familiar with. After that, it has more rules specific to different paths on the host. I don’t have in-depth knowledge of this configuration, but I found it on the Pi-hole documentation site.

You might have to install some prerequisites (such as PHP-FPM) if you are going for a similar setup. I will also note that Pi-hole itself can run in a container in host network mode, as sketched below.
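A rough sketch of that container-based alternative. The environment variable names differ between Pi-hole versions, so treat this as an assumption and check the official image documentation:

# TZ and WEBPASSWORD are placeholders; newer Pi-hole images may use different variable names
docker run -d --name pihole \
  --network host \
  -e TZ=Etc/UTC \
  -e WEBPASSWORD=change-me \
  -v pihole_data:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole:latest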

Let’s move to the next service I have in the cluster.

Vikunja:

server {
  listen 80;
  server_name vikunja.santoshk.dev;

  location / {
    proxy_pass http://192.168.10.10:4444;
  }
}

server {
  listen 443 ssl;
  server_name vikunja.santoshk.dev;

  ssl_certificate /etc/letsencrypt/live/santoshk.dev/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/santoshk.dev/privkey.pem;

  location / {
    proxy_pass http://192.168.10.10:4444;
  }
}

The configuration for Vikunja is very similar to what we saw for Portainer, but in this case I’m also listening on port 80, meaning I’m allowing an insecure connection to the site.

Well, the insecure one is not strictly needed, but I have kept it here to show you the different configs we can have in nginx.

Finally, although I have more configuration, you get the idea. You can always refer to the documentation of the service you are installing; it usually contains the recommended config for running that service well. Next, we are going to see how to enable HTTPS on our domains.

Certbot

I can’t end the nginx section without talking about certbot, as the two work in conjunction. We are using directives such as ssl_certificate and ssl_certificate_key with paths that don’t exist yet. Let’s see how those files are created.

As you can see, I have custom domains inside my local network, e.g. portainer.santoshk.dev. DNS for these domains is resolved locally by Pi-hole. This means you can’t access these sites from outside the network, e.g. from your mobile data connection.

The thing with certbot (Let’s Encrypt) is that you need to prove ownership of whatever domain you want to use. I have santoshk.dev where you are reading this blog post. And I wanted to use the same for my services on the local network.

There are multiple ways you can prove ownership; I prefer the DNS method myself. I have Cloudflare as the DNS provider for my domain, and luckily there is a certbot plugin for Cloudflare. I simply get an API token from Cloudflare and hand it to certbot for verification.

Not everyone has a setup similar to mine. In that case, the certbot documentation on getting certificates is the savior.
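For a Cloudflare setup like mine, the plugin and credentials file look roughly like this on a Debian-based host (the token value is a placeholder, and the package name may differ on other distros):

sudo apt install python3-certbot-dns-cloudflare

# contents of /path/to/cloudflare.ini (keep it readable only by root):
#   dns_cloudflare_api_token = <your-cloudflare-api-token>
sudo chmod 600 /path/to/cloudflare.ini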

If everything is correct, this command will do the work:

sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials /path/to/cloudflare.ini -d "*.santoshk.dev"

The certificates and related files are stored in a well-known location. The path I have been using in ssl_certificate and friends is generated automatically based on the domain I passed to certbot in the above command.

At this point, certbot will also take care of renewing the certs by registering a cron job or systemd timer. Your domains should have HTTPS by now.
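You can sanity-check the renewal setup at any time with a dry run:

sudo certbot renew --dry-run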

If you want a more detailed video on this process, please let me know in the comments.

OpenMediaVault

When you delve into self-hosting, it won’t take long before you need a NAS, or Network Attached Storage. Basically, a NAS host runs software that exposes persistent storage over the network.

The benefit is, you can use that storage from any host in your network. I have been using my NAS with my phone, laptop, and desktop altogether. This NAS can also be used in conjunction with services running in the cluster.

But how to get it set up? I won’t take much time on this one, as I have followed a tutorial and would recommend watching that video directly. Here is the link: https://www.youtube.com/watch?v=gyMpI8csWis

So in a nutshell, I have a hard drive attached to my Raspberry Pi via USB. I have used OpenMediaVault to expose that drive over the network.
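As a hedged example of what consuming the NAS looks like from another Linux host, assuming an SMB share named share exported by OpenMediaVault (the share name and credentials are placeholders):

sudo apt install cifs-utils
sudo mkdir -p /mnt/nas
sudo mount -t cifs //pioneer.santoshk.dev/share /mnt/nas \
  -o username=change-me,password=change-me,uid=$(id -u)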

With that said, this is the end of the section on services running outside Docker. These were some basic services that I have kept on the host system.

Services inside Swarm

Most of my services run inside the Docker subsystem called Swarm. Any container that can run from a docker-compose file can also run in a swarm, and we get the added benefit of scaling and learning. This knowledge will be transferable when we learn Kubernetes.

With Docker Swarm set up, which is very easy as we have already seen in previous sections, the first service I installed was Portainer. I already talked about it in an earlier section. The installation steps are pretty simple; you need to run the following commands on the swarm manager node:

docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest

This will automatically install a portainer-agent on the manager node as well as other nodes you add to the cluster in the future.

Running this container will expose port 9443, and the UI can be visited from a browser at https://<node-ip>:9443.

As I have already configured nginx for the reverse proxy to portainer, I can access my portainer UI at https://portainer.santoshk.dev.

As an anecdote, when I first installed Portainer, I was exposed to things in Docker I didn’t know existed. I hope you will have the same experience and use Docker more efficiently. I think I already mentioned this: there are alternatives to Portainer, such as Swarmpit.

Postgres and pgadmin

I already talked about Postgres, but let’s have a look at the compose file to see what is different from a conventional compose file. Note that this distinction is what turns a compose file into a stack file.

version: '3.9'

services:
  master:
    image: postgres:16
    restart: always
    environment:
      POSTGRES_DB: example
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.hostname == pioneer

  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    depends_on:
      - master
    ports:
      - "5050:80"
    environment:
      POSTGRES_USER: santosh
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: example
    restart: unless-stopped

volumes:
  postgres_data:

What is different?

    deploy:
      placement:
        constraints:
          - node.hostname == pioneer

This pins the master service, i.e. the postgres container, to the host called pioneer. If you don’t do this and you have multiple worker nodes, then if postgres somehow crashes, it can come back up on any node. That’s not good, because the container will create another, empty volume on that node, and the data is effectively lost from the perspective of the postgres service. There are more ways to deal with this than pinning, but that’s for another post in the future.

Hosts can go down in multiple ways; for me, it is simply the host being shut down in most cases.

I have also used volumes so that even if postgres crashes, it does not lose any data. Volumes persist even after the container is rebuilt.

The second service in this stack is a web UI for postgres. For this container, I don’t care which host it gets deployed on; it’s not critical, and it will connect to the same postgres host no matter where it runs.

Prometheus and Grafana - infrastructure monitoring

When you have a lot of machines (for me, 3 was enough), you want to know about their health. Running htop on each usually works, but seeing all of the hosts in one place is always desirable.

There are different use cases for Prometheus and Grafana. You can use them to monitor services running in the swarm cluster itself, as well as services in Kubernetes. I also use them for a speedtest job, to keep an eye on whether my ISP is living up to its SLA. But those are out of scope for this article; here, I’m using them to monitor hosts.

What I’m doing is known as infrastructure monitoring, whereas monitoring the services themselves, as described above, is application monitoring.

And with that said, let’s see how I had it set up.

version: '3.8'

services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - /etc/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    deploy:
      placement:
        constraints:
          - node.hostname == pioneer
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"

    volumes:
      - grafana-data:/var/lib/grafana
    deploy:
      placement:
        constraints:
          - node.hostname == pioneer

volumes:
  prometheus-data:
  grafana-data:

Some points I want to emphasize:

  1. I had to pin both services because of the volumes. They both need to interact with the filesystem, which is provided through volumes. I need to find a way to store volumes on the NAS or something similar. I’ll note this down and work on it as I progress.
  2. I have decided to manage /etc/prometheus/prometheus.yml from the host machine. This is not necessary, and there are other ways to set this up. If you are feeling adventurous, learn about Docker configs; they have a nice interface in Portainer, and there is a small CLI sketch after this list.
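A minimal sketch of the configs approach, assuming the same prometheus.yml sits in the current directory (wiring it into the stack is then done via the top-level configs key in the stack file):

docker config create prometheus_config prometheus.yml
docker config ls                          # verify it exists
docker config inspect prometheus_config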

Here is a stripped-down version of the config I am using:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
21
22
global:
  scrape_interval: 1m


scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['10.100.200.10:9090']
  - job_name: titan
    static_configs:
      - targets: ['10.100.200.30:9100']
  - job_name: voyager
    static_configs:
      - targets: ['10.100.200.35:9100']
  - job_name: pioneer
    static_configs:
      - targets: ['10.100.200.10:9100']
  - job_name: speedtest
    scrape_interval: 60m
    scrape_timeout:  60s
    static_configs:
      - targets: ['10.100.200.35:9090']

You can see 2 top-level blocks here. The first one is self-explanatory.

The scrape_configs might require some introduction.

  1. job_name is how it’s going to look in the UI such as Grafana.
  2. targets in static_configs declares where to reach for data scraping. Although targets is a list, for our use case there is going to be only one host in it.

You may also notice that titan, voyager, and pioneer have something in common: their port. That is because I have installed node-exporter on those nodes. You may want to consult its documentation; it is readily available in the package repositories of many distros, as sketched below.
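On Debian-based distros (including Raspberry Pi OS), installing it looks roughly like this; the package name may differ elsewhere:

sudo apt install prometheus-node-exporter
sudo systemctl enable --now prometheus-node-exporter
# it serves metrics on port 9100 by default, which matches the targets above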

Although I installed it this way, it initially worked for a few weeks and then started showing partial data.

At the moment my setup is the same, but I also have a Grafana instance hosted in the cloud. Writing that process down would make this post a lot longer; it involves adding a top-level block called remote_write. I’ll leave that task to you, and I hope the issue I’m having doesn’t happen to you in the first place.

Vaultwarden - password management

Passwords are the next thing I’m self-hosting. I don’t want to store my passwords with some other entity; I would rather keep them with me and make them hard to access.

I have tried 1Password and Bitwarden before. I’m not sure there is a self-hosted option for 1Password; either way, I ended up installing Vaultwarden.

Vaultwarden is a drop-in replacement for the Bitwarden server. The latter is official from the original creators, while the former is community-developed. Both are compatible with the browser extensions; I have been using it with Firefox as well as Brave/Chromium.

I will present a basic swarm stack definition because I don’t want to get into the technical details of security.

version: '3.8'

services:
  server:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    ports:
      - "8088:80"
    volumes:
      - vw-data:/data
    deploy:
      placement:
        constraints:
          - node.hostname == pioneer

volumes:
  vw-data:

Not such a complicated definition file if you compare it with the other two. You should also protect your data at rest in case someone gets unauthorized access to your machine.

Vikunja - task management

I don’t need to tell you about this one. Especially if you are a programmer, you know the importance of so-called “issues” and kanban boards on a project. I use Vikunja for that kind of interface.

Vikunja was the most satisfying self-hosted project of this kind. Before this, I tried Planka but found Vikunja more feature-rich.

Let’s see the deployment.

version: '3.9'

services:
  vikunja:
    image: vikunja/vikunja
    environment:
      VIKUNJA_SERVICE_PUBLICURL: https://vikunja.santoshk.dev
      VIKUNJA_DATABASE_HOST: pioneer.santoshk.dev
      VIKUNJA_DATABASE_PASSWORD: example
      VIKUNJA_DATABASE_TYPE: postgres
      VIKUNJA_DATABASE_USER: example
      VIKUNJA_DATABASE_DATABASE: vikunja
      VIKUNJA_SERVICE_JWTSECRET: a-super-secure-random-secret
    ports:
      - 4444:3456
    volumes:
      - vikunja_files:/app/vikunja/files
    restart: unless-stopped

volumes:
  vikunja_files:

This deployment is a little interesting. Although Vikunja requires Postgres, we don’t define a postgres service in this stack the way the official documentation suggests.

This is because we are using the postgres installation that I told you I use for anything and everything else on my homelab.

What I have found while working this way is that the database is not created automatically when it is first needed. For this, I have to docker exec into the running container, connect to the server with psql, and then run CREATE DATABASE vikunja; manually, as sketched below.
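A sketch of that one-time step; the container name filter postgres_master and the user example match the stack above but are effectively placeholders for your own names:

# run on the node where the postgres container is pinned (pioneer, in my case)
docker exec $(docker ps -q -f name=postgres_master) \
  psql -U example -c 'CREATE DATABASE vikunja;'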

Also, note that I have put in dummy credentials and a user. Please use secure credentials.

Also, putting credentials directly in a compose file like this is risky. You should be using environment variables or secrets; both are Docker Compose/Swarm features, and you should learn about them. There is a small sketch of the secrets side below.
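As a minimal sketch of creating a secret (the secret name is a placeholder, and whether a given app can read its value from a secret file depends on the app, so check Vikunja’s documentation before relying on this):

printf 'a-much-better-password' | docker secret create vikunja_db_password -
docker secret ls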

I have not delved into that part right now. But when I do, I’ll let you know. So please subscribe to Fullstack with Santosh.

Other services

There are more services that I have tried, but I don’t use them regularly. Either they don’t solve a problem I have, they aren’t on this swarm cluster, or they interfere with apps I already have.

NextCloud - Google Workspace alternative

Initially installed on my PC itself for testing. Then moved to swarm after understanding the configuration.

Not a lot of services are using it.

ntfy - a producer/subscriber-based notification system

ntfy integrates with many CI-based applications in the community. You can publish a message to a certain topic with something as primitive as curl, and clients subscribed to that topic will receive those messages, as in the example below.
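For example, publishing a message with curl looks roughly like this; the topic name is a placeholder, and the second form assumes a self-hosted instance (host and port are placeholders too):

curl -d "Backup finished" https://ntfy.sh/my-homelab-alerts
curl -d "Backup finished" http://192.168.10.10:8080/my-homelab-alerts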

freshrss - an RSS reader

Gave it a try, but I’m not really an RSS-reader guy.

Conclusion and Future Plans

This post was aimed mostly at services and configuration rather than hardware. That’s because I didn’t have much hardware when I started writing it. But by the time I was finishing this post, I had already picked up some new pieces of hardware.

This includes:

  1. Dell OptiPlex 7040 - Got this mini PC as an alternative to the Raspberry Pi. I wanted to add more worker nodes to my swarm and a future Kubernetes cluster. I have already set up Proxmox on it for virtualization, and it has been going smoothly so far.
  2. TP-Link SG1016PE - A 16-port switch with 8 PoE ports, along with 50 meters of CAT6 cable. Got this because I wanted every device capable of a wired connection to have one, and went for the PoE model because I want to try out PoE gadgets. The switch and cable are accompanied by a crimping toolkit; learning to make Ethernet cables was very satisfying.
  3. TP-Link VIGI C300HP - First candidate to utilize the PoE port on my switch.
  4. NVIDIA RTX 4060 Ti 16GB - This one is responsible for doubling the electricity consumption of my lab. I got it to experiment with the AI developments going on.
  5. Mercusys MR70X - Got this router because it was OpenWrt compatible.

Finally, I would once again ask you to subscribe to my channel, and have a nice day.
