Installing InfluxDB
As discussed earlier, InfluxDB is a time series database. It is part of the TICK stack made by InfluxData. For more information, check out https://www.influxdata.com/time-series-platform/influxdb. In this section, we will only look at InfluxDB and not at the other components of the stack such as Telegraf, Kapacitor, or Chronograf.
Now you might wonder what a time series database actually is. Simply put, it is a datastore optimized for timestamped data, which makes it very useful in IoT scenarios. InfluxDB is not the only time series database; a list of other such databases can be found on Wikipedia.
Basic Installation
We will install InfluxDB using Helm and explore the resulting configuration. The chart can be found at https://github.com/kubernetes/charts/tree/master/stable/influxdb. A basic installation can be done with the following command:
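A minimal sketch, assuming Helm 2 and the release name db (which matches the resource names used later in this section):

```
helm install --name db stable/influxdb
```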
InfluxDB will be installed and reachable from within the cluster at http://db-influxdb.default:8086. Kubernetes DNS is used here to resolve db-influxdb.default to the cluster IP of the InfluxDB service, which in turn routes traffic to the pod that hosts the InfluxDB container. For pods in the default namespace, you can also omit the .default suffix. Using Kubernetes DNS is just one way of finding a service in a Kubernetes cluster. We will explore some other ways later.
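To quickly check that the service is reachable from inside the cluster, you can run a throwaway pod and hit InfluxDB's /ping endpoint; the curlimages/curl image below is just one option, any image that contains curl will do:

```
# Expect HTTP/1.1 204 No Content when InfluxDB is up
kubectl run -it --rm influx-ping --image=curlimages/curl --restart=Never --command -- \
  curl -i http://db-influxdb.default:8086/ping
```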
Next, we want to connect to InfluxDB with the influx client on our local machine. Use the following two commands to download the InfluxDB binaries for Linux and unpack them:
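For InfluxDB 1.4.3, the version used below, the download looks roughly like this; double-check the exact URL on the InfluxData downloads page:

```
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.4.3_linux_amd64.tar.gz
tar xvzf influxdb-1.4.3_linux_amd64.tar.gz
```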
Next, find the influx executable in ./influxdb-1.4.3-1/usr/bin/influx and copy it to /usr/bin. If you run the client, it will try to connect to localhost:8086. Let's use Kubernetes port forwarding to be able to connect to our InfluxDB instance:
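A one-liner such as the following does the trick; the label selector app=db-influxdb is an assumption about how the chart labels its pod, so adjust it if kubectl get pods --show-labels tells you otherwise:

```
# Forward local port 8086 to port 8086 of the InfluxDB pod
kubectl port-forward $(kubectl get pods -l app=db-influxdb \
  -o jsonpath='{.items[0].metadata.name}') 8086:8086
```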
The above command is a fancy way to forward port 8086 on your local machine to port 8086 of the InfluxDB pod. It gets the name of the pod by running kubectl get pods and extracting the pod name. If you don't feel like typing such a command, and who does, just type kubectl get pods and use the pod name directly like so:
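```
kubectl get pods
# replace the name below with the db-influxdb-... pod name from your own output
kubectl port-forward db-influxdb-xxxxxxxxxx-xxxxx 8086:8086
```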
You can now run influx and it should connect to your InfluxDB instance in your cluster. Let's try a command:
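SHOW DATABASES is a good first test:

```
> SHOW DATABASES
name: databases
name
----
_internal
```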
We have not created any databases yet, so you will only see _internal.
Besides the pod, the InfluxDB Helm chart created some other resources:
A Deployment and ReplicaSet called db-influxdb that control how InfluxDB is updated and run on the cluster
A Service called db-influxdb of type ClusterIP, which makes it possible to connect to InfluxDB from within the cluster using the DNS name that we discussed earlier
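You can inspect these resources with kubectl; the names below assume the release name db used earlier:

```
kubectl get deployment,replicaset,service
kubectl describe service db-influxdb
```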
For a database, you would expect a persistent volume to be created, but that is not the case with the basic installation. You have to specify additional parameters in your helm command or use a values.yaml file.
Custom Installation
Starting from the values.yaml file at https://github.com/kubernetes/charts/blob/master/stable/influxdb/values.yaml, make the following changes:
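At a minimum, enable persistence and set the volume size. The snippet below is a sketch based on the chart's persistence section; field names may differ slightly between chart versions, so compare with your copy of values.yaml:

```
persistence:
  enabled: true
  # storageClass: default   # see the note on Azure below
  accessMode: ReadWriteOnce
  size: 10Gi
```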
For Azure, also set StorageClass to default.
Now, install InfluxDB with your custom values.yaml file (if you installed the chart earlier, remove that release first with helm delete db --purge). Use the following command from the folder that contains values.yaml:
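Assuming the same release name db and Helm 2 syntax as before:

```
helm install --name db -f values.yaml stable/influxdb
```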
When you run kubectl get pv, you should see a 10 GB volume. When you run kubectl describe pv name-of-your-volume, you get something like:
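The output below is abridged and illustrative; the exact fields depend on your cluster and Kubernetes version:

```
Name:            pvc-...
StorageClass:    standard
Status:          Bound
Claim:           default/db-influxdb
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        10Gi
Source:
    Type:        GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:      gke-...-pvc-...
    FSType:      ext4
```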
You can see that this Kubernetes cluster is running on Google Kubernetes Engine (GKE) and that a disk of type GCEPersistentDisk was provisioned automatically.
In one of the following sections, we will add some code to forward data from Mosquitto to InfluxDB. We will need a database to store those measurements. We could create the database from code when we connect to InfluxDB (and later we will), but for now we will have it created automatically during deployment. The Docker image used by the Helm chart has many configuration options, one of which creates a database at startup. In values.yaml, set the following:
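The snippet below sets the INFLUXDB_DB environment variable, which the influxdb image picks up on first start; the exact location of the env list in values.yaml may differ per chart version:

```
env:
  - name: INFLUXDB_DB
    value: telemetry
```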
Now, when you deploy the chart, the telemetry database will be created:
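Connect with the influx client again (via port forwarding) and list the databases; the output should look something like this:

```
> SHOW DATABASES
name: databases
name
----
_internal
telemetry
```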
Next, we will enable authentication for InfluxDB. Several changes need to be made to values.yaml. Find auth_enabled: false under config.http and change it to auth_enabled: true. Then, find the following lines:
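```
# names as found in the chart version used here; verify them in your copy of values.yaml
setDefaultUser:
  enabled: false
```

Change enabled to true.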
This will create a Kubernetes job that uses curl to create the user. A bit below these two lines, you will find options to set the username and password.
When you make the authentication changes and install the chart, you will notice that it takes a while before you get back to the prompt. That is because Helm only finishes when the Kubernetes job finishes, and that job needs to wait for InfluxDB to spin up. Also note that the job is not removed when you delete the chart.
The username and password are stored in a Kubernetes secret called db-influxdb-auth. To see the username and password use the following command:
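```
# show the secret, including the base64 encoded values
kubectl get secret db-influxdb-auth -o yaml
```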
The output is as follows:
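The listing below is abridged; the key names under data may differ slightly depending on the chart version:

```
apiVersion: v1
data:
  influxdb-password: dGVzdA==
  influxdb-user: YWRtaW4=
kind: Secret
metadata:
  name: db-influxdb-auth
  namespace: default
type: Opaque
```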
I set my username to admin and the password to test. You do not see those values directly in the output above because they are base64 encoded. To see the decoded value, use:
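```
echo "dGVzdA==" | base64 --decode
```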
You should see the word test printed. We can now use this username and password in the influx client. When you start the client, before you start issuing commands, type auth followed by the username and password.
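An illustrative session; the connection banner depends on your client and server versions:

```
$ influx
Connected to http://localhost:8086 version 1.4.3
InfluxDB shell version: 1.4.3
> auth
username: admin
password:
> SHOW DATABASES
```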
Although there are other authentication options to set, we will leave it like this and use the admin account to connect to InfluxDB. Now that we have our database, it is time to get events flowing from Mosquitto to InfluxDB. But before we do that, we need to learn some InfluxDB basics.