It came as a surprise to me that we cannot create any new Grafana dashboards in the out-of-the-box Grafana instance that ships with OpenShift 4.2. The manual explains why:

The Grafana instance that is provided with the monitoring stack, along with its dashboards, is read-only.

OK, that explains it. So we have to bring in our own Grafana instance to visualize the monitoring data. Naturally, we could use the Grafana Operator from the OperatorHub, with the OCP Prometheus as the data source.

The Prometheus container is wrapped inside the pod, and its port (9090) is bound to localhost only. The Prometheus service has to be accessed through the sidecar container, prometheus-proxy, so that access control can be applied. We don’t want to bind the port to all interfaces, as that would break the security design.

Instead, we could use bearer token authentication in a custom HTTP header, which is available from Grafana 6.3 onwards (per the Grafana documentation).

However, the OCP web console shows the Grafana Operator at version 2.0.0, and the Grafana version it ships with doesn’t have the custom HTTP header option. So we cannot use the standard Operator Lifecycle Manager (OLM) way to install and manage the operator.

Since OpenShift is an extended Kubernetes solution, the original concept of Kubernetes can still be applied.

1. Install Grafana Operator without OLM

Clone the grafana-operator repo and create the required Kubernetes objects:

git clone https://github.com/integr8ly/grafana-operator.git

kubectl create namespace grafana

kubectl create -f deploy/crds

kubectl create -f deploy/roles -n grafana

kubectl create -f deploy/cluster_roles

kubectl create -f deploy/operator.yaml -n grafana
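Before moving on, it is worth confirming that the operator actually came up. A quick check might look like the following (the deployment name grafana-operator is an assumption based on deploy/operator.yaml in the repo):

```shell
# Confirm the operator deployment and its pod are running in the
# grafana namespace. The deployment name is assumed to be
# grafana-operator, as created by deploy/operator.yaml.
kubectl get deployment grafana-operator -n grafana
kubectl get pods -n grafana
```

The operator pod should reach the Running state before we apply any Grafana custom resources.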

2. Deploy a Grafana Instance

Once the operator is installed, we can apply a Grafana custom resource and the operator will create the Grafana deployment accordingly.

apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: grafana
  namespace: grafana
spec:
  ingress:
    enabled: True
  config:
    log:
      mode: "console"
      level: "debug"
    security:
      admin_user: "root"
      admin_password: "secret"
    auth:
      disable_login_form: False
      disable_signout_menu: True
    auth.anonymous:
      enabled: False
  dashboardLabelSelector:
    - matchExpressions:
        - {key: app, operator: In, values: [grafana]}
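With this selector in place, the operator only picks up dashboards labelled app: grafana. As a hypothetical example (the name and JSON content below are placeholders, not from the original setup), a matching GrafanaDashboard custom resource could look like:

```yaml
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDashboard
metadata:
  name: example-dashboard        # placeholder name
  namespace: grafana
  labels:
    app: grafana                 # must match dashboardLabelSelector above
spec:
  name: example-dashboard.json
  json: |
    {
      "title": "Example",
      "panels": []
    }
```

Dashboards without the app: grafana label are simply ignored by this Grafana instance.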

Now the Grafana Pod will be running. We need to configure the Prometheus data source.

3. Create a Default Prometheus DataSource

Let's create a service account; we will use its token for authentication to the Prometheus sidecar in OCP 4.2:

oc -n grafana create sa prometheus-reader

oc -n grafana adm policy add-cluster-role-to-user view -z prometheus-reader

Instead of creating YAML, we use the oc command line tool to bind the cluster role to the service account.

Similarly, with the oc tool, retrieve the token.

oc -n grafana serviceaccounts get-token prometheus-reader

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9....Skipped...
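Before wiring the token into Grafana, we can sanity-check that it actually grants read access by querying the Prometheus API directly. This sketch assumes it is run from inside the cluster network, where the service hostname resolves; -k skips TLS verification, mirroring the tlsSkipVerify setting we will use in the data source:

```shell
# Verify the service account token can query the OCP Prometheus.
# Must be run where the in-cluster service DNS name is reachable.
TOKEN=$(oc -n grafana serviceaccounts get-token prometheus-reader)
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=up"
```

A JSON response with "status":"success" confirms the token works; a 403 means the cluster role binding did not take effect.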

Now we can use the custom HTTP header feature to authenticate access to the OCP Prometheus. Create the following custom resource:

apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: grafanadatasource-prometheus
  namespace: grafana
spec:
  name: grafanadatasource-prometheus.yaml
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: "https://prometheus-k8s.openshift-monitoring.svc:9091"
      basicAuth: false
      withCredentials: false
      isDefault: true
      version: 1
      editable: true
      jsonData:
        tlsSkipVerify: true
        timeInterval: "5s"
        httpHeaderName1: "Authorization"
      secureJsonData:
        httpHeaderValue1: "Bearer eyJh....Skipped..."

Apply it; once the Grafana pod restarts, we will have a full-blown Grafana running. A sample Grafana status dashboard is shown below.