The OpenShift deployment is similar to Kubernetes, but with some extra security steps required.
The Instana Agent can be installed into OpenShift by following the steps below:
The only value that needs to be changed in the file is BASE64_ENCODED_INSTANA_KEY: the base64-encoded Instana key for the cluster to which the generated data should be sent. For example:
```shell
echo YOUR_INSTANA_AGENT_KEY | base64
```
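Note that a plain `echo` appends a trailing newline, which then ends up inside the encoded value; `echo -n` (or `printf`) avoids that. A quick round-trip check, using a hypothetical key for illustration:

```shell
# Hypothetical key, for illustration only.
KEY="xyzAgentKey123"

# -n keeps the trailing newline out of the encoded value.
ENCODED=$(echo -n "$KEY" | base64)
echo "$ENCODED"

# Decoding should give back exactly the original key.
echo -n "$ENCODED" | base64 -d   # → xyzAgentKey123
```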
Create a new project and service account for the instana-agent DaemonSet and set the policy permissions:
```shell
oc login -u system:admin
oc new-project instana-agent
oc create serviceaccount instana-admin
oc adm policy add-scc-to-user privileged -z instana-admin
```
By default, the Instana Agent DaemonSet will start up on all nodes tagged with type=infra. Tag the nodes you want the Agent to run on:
```shell
oc label node my-node type=infra
```
OpenShift 3.9 needs an additional annotation in order to match node selectors:
```shell
oc annotate namespace instana-agent openshift.io/node-selector=""
```
The instana-agent.yaml file looks like the following:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent-secret
type: Opaque
data:
  key: # echo YOUR_INSTANA_AGENT_KEY | base64
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: instana-configuration
data:
  configuration.yaml: |
    # Manual a-priori configuration. Configuration will be only used when the sensor
    # is actually installed by the agent.
    # The commented out example values represent example configuration and are not
    # necessarily defaults. Defaults are usually 'absent' or mentioned separately.
    # Changes are hot reloaded unless otherwise mentioned.

    # It is possible to create files called 'configuration-abc.yaml' which are
    # merged with this file in file system order. So 'configuration-cde.yaml' comes
    # after 'configuration-abc.yaml'. Only nested structures are merged, values are
    # overwritten by subsequent configurations.

    # Secrets
    # To filter sensitive data from collection by the agent, all sensors respect
    # the following secrets configuration. If a key collected by a sensor matches
    # an entry from the list, the value is redacted.
    #com.instana.secrets:
    #  # One of: 'equals-ignore-case', 'equals', 'contains-ignore-case', 'contains', 'regex'
    #  matcher: 'contains-ignore-case'
    #  list:
    #    - 'key'
    #    - 'password'
    #    - 'secret'

    # Host
    #com.instana.plugin.host:
    #  tags:
    #    - 'dev'
    #    - 'app1'

    # Hardware & Zone
    #com.instana.plugin.generic.hardware:
    #  enabled: true # disabled by default
    #  availability-zone: 'zone'
---
kind: ClusterRole
apiVersion: v1
metadata:
  name: instana-agent-role
rules:
- nonResourceURLs:
  - "/version"
  - "/healthz"
  verbs: ["get"]
- apiGroups: ["batch"]
  resources:
  - "jobs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - "deployments"
  - "replicasets"
  - "ingresses"
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - "deployments"
  - "replicasets"
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps.openshift.io"]
  resources:
  - "deploymentconfigs"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - "namespaces"
  - "events"
  - "services"
  - "endpoints"
  - "nodes"
  - "pods"
  - "replicationcontrollers"
  - "componentstatuses"
  - "resourcequotas"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - "endpoints"
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: v1
metadata:
  name: instana-agent-role-binding
  namespace: instana-agent
subjects:
- kind: ServiceAccount
  name: instana-admin
  namespace: instana-agent
roleRef:
  kind: ClusterRole
  name: instana-agent-role
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: instana-agent
spec:
  selector:
    matchLabels:
      app: instana-agent
  template:
    metadata:
      labels:
        app: instana-agent
    spec:
      nodeSelector:
        type: infra
      serviceAccountName: instana-admin
      hostIPC: true
      hostNetwork: true
      hostPID: true
      containers:
      - name: instana-agent
        image: instana/agent
        imagePullPolicy: Always
        env:
        - name: INSTANA_AGENT_LEADER_ELECTOR_PORT
          value: "42655"
        - name: INSTANA_ZONE
          value: your-k8s-cluster
        - name: INSTANA_AGENT_ENDPOINT
          # Enter the host your agent will connect to.
          # Europe: saas-eu-west-1.instana.io or U.S./Rest of the World: saas-us-west-2.instana.io
          value: ""
        - name: INSTANA_AGENT_ENDPOINT_PORT
          value: "443"
        - name: INSTANA_AGENT_KEY
          valueFrom:
            secretKeyRef:
              name: instana-agent-secret
              key: key
        - name: JAVA_OPTS
          # Approximately 1/3 of container memory limits to allow for direct-buffer memory usage and JVM overhead
          value: "-Xmx170M -XX:+ExitOnOutOfMemoryError"
        - name: INSTANA_AGENT_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        securityContext:
          privileged: true
        volumeMounts:
        - name: dev
          mountPath: /dev
        - name: run
          mountPath: /run
        - name: var-run
          mountPath: /var/run
        - name: sys
          mountPath: /sys
        - name: log
          mountPath: /var/log
        - name: machine-id
          mountPath: /etc/machine-id
        - name: configuration
          subPath: configuration.yaml
          mountPath: /root/configuration.yaml
        livenessProbe:
          httpGet:
            # Agent liveness is published on localhost:42699/status
            path: /status
            port: 42699
          initialDelaySeconds: 75
          periodSeconds: 5
        resources:
          requests:
            memory: "512Mi"
            cpu: "0.5"
          limits:
            memory: "512Mi"
            cpu: "1.5"
        ports:
        - containerPort: 42699
      - name: instana-agent-leader-elector
        image: instana/leader-elector:0.5.4
        env:
        - name: INSTANA_AGENT_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command:
        - "/app/server"
        - "--election=instana"
        - "--http=localhost:42655"
        - "--id=$(INSTANA_AGENT_POD_NAME)"
        resources:
          requests:
            cpu: "0.1"
            memory: "64Mi"
        livenessProbe:
          httpGet:
            # Leader elector liveness is tied to Agent, published on localhost:42699/status
            path: /status
            port: 42699
          initialDelaySeconds: 75
          periodSeconds: 5
        ports:
        - containerPort: 42655
      volumes:
      - name: dev
        hostPath:
          path: /dev
      - name: run
        hostPath:
          path: /run
      - name: var-run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: log
        hostPath:
          path: /var/log
      - name: machine-id
        hostPath:
          path: /etc/machine-id
      - name: configuration
        configMap:
          name: instana-configuration
```
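The JAVA_OPTS value follows the sizing rule stated in its comment: roughly one third of the container memory limit goes to the JVM heap, leaving headroom for direct-buffer memory and JVM overhead. The arithmetic behind -Xmx170M for the 512Mi limit:

```shell
# Container memory limit from the DaemonSet, in MiB.
LIMIT_MIB=512

# Roughly one third of the limit for the JVM heap (integer division).
HEAP_MIB=$((LIMIT_MIB / 3))

echo "-Xmx${HEAP_MIB}M -XX:+ExitOnOutOfMemoryError"   # → -Xmx170M -XX:+ExitOnOutOfMemoryError
```

If you raise the memory limit, adjust -Xmx accordingly using the same ratio.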
The following container environment variables might need to be adjusted:

- INSTANA_AGENT_ENDPOINT – IP address or hostname associated with the installation.
- INSTANA_AGENT_ENDPOINT_PORT – port associated with the installation.
Depending on your deployment type (SaaS or On-Premises) and region, you will need to set the agent endpoints appropriately. For additional details on the agent endpoints, see the Agent Configuration documentation.
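As a sketch, the SaaS hostnames mentioned in the YAML comments can be selected by region with a small shell helper (`instana_endpoint` is a hypothetical name; On-Premises installations would use their own host instead):

```shell
# Map a region to the Instana SaaS endpoint host.
# Hostnames are the ones from the YAML comments; adjust for On-Premises.
instana_endpoint() {
  case "$1" in
    eu) echo "saas-eu-west-1.instana.io" ;;
    *)  echo "saas-us-west-2.instana.io" ;;  # U.S. / rest of the world
  esac
}

instana_endpoint eu   # → saas-eu-west-1.instana.io
instana_endpoint us   # → saas-us-west-2.instana.io
```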
In addition, it is recommended to specify the zone or cluster name for resources monitored by this agent DaemonSet:

- INSTANA_ZONE – used to customize the zone grouping on the infrastructure map (see https://docs.instana.io/quickstart/agentconfiguration/#custom-zones). Also sets the default name of the cluster.
- INSTANA_KUBERNETES_CLUSTER_NAME – customized name of the cluster monitored by this DaemonSet.
Note: For most users, it is only necessary to set the INSTANA_ZONE variable. However, if you would like to group your hosts based on availability zone rather than cluster name, you can specify the cluster name using INSTANA_KUBERNETES_CLUSTER_NAME instead of the INSTANA_ZONE setting. If you omit INSTANA_ZONE, the host zone will be determined automatically from the availability zone information on the host.
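For example, to group hosts by availability zone while still naming the cluster, the env section of the instana-agent container could set the cluster name instead of the zone — a sketch with a hypothetical cluster name:

```yaml
# Add to the instana-agent container's env section instead of INSTANA_ZONE.
- name: INSTANA_KUBERNETES_CLUSTER_NAME
  value: my-openshift-cluster   # hypothetical name
# With INSTANA_ZONE omitted, the host zone is derived from the
# availability zone information on the host.
```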
Depending on your OpenShift environment, you might need to do some customization.
If you can’t pull Docker images from Docker Hub, you need to add two image streams for the images in use. Open the OpenShift Container Registry, go to the instana-agent namespace, and add the following image streams:
Name: instana-agent
Image: instana/agent

Name: leader-elector
Image: gcr.io/google-containers/leader-elector
Use the respective new image streams in the YAML.
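A sketch of how the DaemonSet image references could look after the switch. The internal-registry host is an assumption and depends on your cluster (docker-registry.default.svc:5000 is a common OpenShift 3.x default):

```yaml
# Hypothetical internal-registry references; the registry host depends on your
# cluster (docker-registry.default.svc:5000 is a common OpenShift 3.x default).
containers:
- name: instana-agent
  image: docker-registry.default.svc:5000/instana-agent/instana-agent
- name: instana-agent-leader-elector
  image: docker-registry.default.svc:5000/instana-agent/leader-elector:0.5.4
```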
With the node-selector you can specify where the instana-agent DaemonSet should be deployed. Note that every worker host should have an agent installed. If you configure a node-selector, check for conflicts with the project nodeSelector defined in the namespace (the openshift.io/node-selector annotation shown above).
Using the ConfigMap, you can set up the agent configuration that is necessary for proper monitoring.
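For example, the commented-out host section in the ConfigMap's configuration.yaml can be enabled to tag hosts, using the example values from its comments:

```yaml
# In configuration.yaml: the host plugin example from above, uncommented.
com.instana.plugin.host:
  tags:
    - 'dev'
    - 'app1'
```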
See Kubernetes secrets for more details.