
Monitoring My HomeLab With Prometheus and Grafana

☕ 7 min read

Introduction

I have enough machines in my house that I like to call the setup a home lab, though I’m not claiming it’s as cool as the ones you find on r/homelab. There’s the laptop I’ve been using for 3 years now, a Raspberry Pi 4 I bought at the end of 2022, and a PC I built recently, for which I’m yet to upload a video.

That’s enough devices that I want to see information about all of them in a single place. The thing in my favor is the Raspi, which can stay up 24x7 without drawing much power. It is going to be my telemetry server.

If you are following along with this setup, you need at least 2 hosts. I have my Raspi and my Arch PC. Make sure you take note of the IP addresses of these hosts. It’s better to configure a static IP for each device so the addresses don’t change and break things later.

I prefer to set static IPs from my router. How you do that depends on the make and model of your router; consult your product’s documentation.
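Whichever router UI you use, you’ll typically need each machine’s current IP and MAC address to create a static DHCP lease. A quick way to read both off a Linux host (assuming the iproute2 ip tool, which ships with most distros):

```shell
# Show each interface's IPv4 address in a compact table
ip -4 -brief addr show

# Show each interface's MAC (link-layer) address
ip -brief link show
```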

Step 1: Prometheus

Prometheus is a project I came across that is lightweight and serves the purpose.

I headed over to the Getting Started with Prometheus page and followed the steps to start scraping data. Once everything is running, you should be able to see the Prometheus UI at localhost:9090.

While the above method is sufficient for testing, it requires some plumbing to set up the systemd unit files that start Prometheus as soon as the system boots. You can write your own unit file for the binary you downloaded; there are a lot of online resources available for that. But if you are lazy, keep reading.
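For the curious, such a unit file is only a few lines. A minimal sketch, assuming you placed the downloaded binary at /usr/local/bin/prometheus and created a dedicated prometheus user (the paths and user name here are illustrative, not from the official docs):

```ini
# /etc/systemd/system/prometheus.service (hypothetical paths)
[Unit]
Description=Prometheus monitoring server
After=network-online.target

[Service]
User=prometheus
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After a sudo systemctl daemon-reload, you can enable and start it like any other service.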

My final target to install Prometheus was my Raspi host. It has Raspberry Pi OS, which is basically Debian bookworm. If you have the same setup, I’d recommend installing it from the system package manager.

sudo apt install prometheus

Don’t forget to enable the Prometheus service to start at system boot.

sudo systemctl enable prometheus
sudo systemctl start prometheus

At this point, you should be able to access the Prometheus UI at <raspi-ip>:9090. Here is what it looks like for me. I have actually gone a step further and looked up a sample query to run and plot a graph with.
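If you want a query to try yourself, Prometheus scrapes metrics about itself out of the box, so a PromQL expression like the following works even before any other targets are added (it plots the per-second rate of HTTP requests to the Prometheus server over 5-minute windows; the exact metric name can vary between Prometheus versions):

```
rate(prometheus_http_requests_total[5m])
```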

Prometheus Query with Graph

Adding scrape targets

So Prometheus is doing its work. To start with, I’m going to erase everything and have this config in my /etc/prometheus/prometheus.yml:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets:
          - localhost:9090

Pay attention to the job_name and targets. We’ll be referencing them in future sections. Going forward, I’ll be using each hostname as a value for job_name. And targets will be the IP of that node. But let’s not worry about it right now.

Don’t forget to reload the service after changing the configuration:

sudo systemctl reload prometheus
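It’s also worth validating the file before reloading. The Debian package ships promtool alongside Prometheus, which catches YAML and config mistakes up front (if promtool isn’t on your PATH, this step is optional):

```shell
# Validate the config first; reload only if the check passes
promtool check config /etc/prometheus/prometheus.yml \
  && sudo systemctl reload prometheus
```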

The configuration above is scraping data from the Prometheus instance itself. And for each job_name you add here, you should see the respective entry on the Targets page http://<raspi-ip>:9090/classic/targets:

Prometheus Targets

We are going to add more targets for each host we want to keep track of. But are we going to install Prometheus on all the hosts? Not at all!

Step 2: node-exporter

node-exporter is one kind of exporter; an exporter exports certain data from a host to Prometheus. Exporters can bundle different kinds of collectors, and collectors are not limited to hosts/machines, because various integrations are possible. But today, we are working with node-exporter.

node-exporter exposes information about a system in general. Information such as CPU, network, disk, and memory usage. This is what I want as a starter kit.

Installation on the Raspi again uses the system package manager:

sudo apt install prometheus-node-exporter
sudo systemctl enable prometheus-node-exporter
sudo systemctl start prometheus-node-exporter

We don’t need any fancy configuration for node exporter. I’m also going to configure node exporter on other machines.

On Arch:

yay -S prometheus-node-exporter
sudo systemctl enable prometheus-node-exporter
sudo systemctl start prometheus-node-exporter

I’m silently going to follow similar steps on my Kubuntu machine.
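Whichever distro the exporter runs on, a quick sanity check is to hit its metrics endpoint: if the service is up, it answers with plain-text metrics on port 9100.

```shell
# Should print a few node_* metric lines if the exporter is running
curl -s http://localhost:9100/metrics | grep '^node_' | head -n 5
```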

Now it’s time to update the Prometheus config on the Raspi:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets:
          - localhost:9090

  - job_name: "archlinux"
    static_configs:
      - targets:
          - 192.168.0.104:9100

  - job_name: "raspberrypi"
    static_configs:
      - targets:
          - localhost:9100

job_name is used to identify the host in Grafana. targets is the IP. The node-exporter listens on port 9100.

At this point, the prometheus job can be removed: we have already installed node-exporter on that host, and Prometheus’s own metrics are not relevant to us.

The YAML is stripped down to this:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "archlinux"
    static_configs:
      - targets:
          - 192.168.0.104:9100

  - job_name: "raspberrypi"
    static_configs:
      - targets:
          - localhost:9100

Let’s move on to the most interesting part, the one we’ve been waiting for: visualization.

Step 3: Grafana

We are about to see the most beautiful part, the visualization of collected system information over time.

Again, on Debian, I followed these steps: Install Grafana on Debian or Ubuntu

Don’t forget to start the systemd service:

sudo apt-get install grafana
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
sudo systemctl status grafana-server

Log in to the Grafana dashboard. For me, it was at 192.168.0.100:3000. Use admin as both the username and password; Grafana will ask you to change the password after the first login.

Add Source

Head over to the Data sources section e.g. http://192.168.0.100:3000/connections/datasources and add a Prometheus instance.

Grafana Add Datasource

In the screen that appears afterward, only the server URL field is required, but I change the name for organizational purposes. I have left every other setting at its default.

Add Dashboard

  1. Head over to add the dashboard section.
  2. Look for a button saying “New”. Click that and select the option “Import”.
  3. In the form that appears, enter 1860 in the ID field (that’s the popular “Node Exporter Full” dashboard) and load it.
  4. Next, you’ll be prompted to choose a Prometheus data source. Go ahead and choose the one we set up.

Troubleshooting

You should now be in shape to view your dashboard. If not, here are some places to look:

  • Check that the data source is correctly set up and running, and that you are specifying the correct port in the Grafana UI.
  • Make sure the Prometheus config under /etc has the correct hostnames/IPs. If you skipped setting up static IPs for your machines, this is where things get tricky.
  • If you have a firewall enabled on any of your hosts, allow access to port 9100 (used by node-exporter) from the host Prometheus runs on.
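As an example of that last point: if the Arch box ran ufw, allowing only the Prometheus host through would look something like this (the 192.168.0.100 address for the Raspi matches the one used elsewhere in this post; substitute your own):

```shell
# Allow the Prometheus host (the Raspi here) to scrape node-exporter on port 9100
sudo ufw allow from 192.168.0.100 to any port 9100 proto tcp
```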

Dashboard Overview

The dashboard we imported takes the data from the node exporter and visualizes it using graphs and colors.

Grafana Dashboard Overview

After having a glance, do you remember when I asked you to pay attention to job_name and targets?

In the dashboard UI, they are available as Job and Host. Look at the top of the screenshot, just below the navbar breadcrumb. You can use the Job dropdown to switch between the different hosts you have set up the node exporter on.

And that was pretty much it for this article.

Conclusion

Doing DIY stuff is fun and a great way to learn. I recommend a Raspberry Pi to any tech enthusiast. It provides an opportunity to self-host things and learn from the process of setting them up and maintaining them. By maintenance, I mean regular backups of generated data, and the like.

I’m hoping to do more with my Raspi in the future, and the Prometheus-Grafana duo has provided the base I was missing.

I’ll share my journey with you as I go. So stay tuned.


WRITTEN BY
Santosh Kumar
Santosh is a Software Developer currently working with NuNet as a Full Stack Developer.