There are different ways of installing the Instana agent onto a Kubernetes cluster. While installing the agent manually on each host will monitor the containers and processes running on that host, it will not collect Kubernetes data. To get the full monitoring experience for a Kubernetes cluster, including Kubernetes data, we recommend deploying the agent through the Helm chart or a YAML file as documented below.
To simplify installation, we have written a Helm chart which packages and pre-configures the needed Kubernetes resources. For instructions on how to install it with Helm, please visit helm/charts/stable/instana-agent.
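As a rough sketch, an installation with Helm 2 could look like the following; the value names (agent.key, agent.endpointHost, agent.endpointPort, zone.name) follow the chart's README and may differ between chart versions, and the zone name is a placeholder:

helm install --name instana-agent --namespace instana-agent \
  --set agent.key=YOUR_INSTANA_AGENT_KEY \
  --set agent.endpointHost=ingress-red-saas.instana.io \
  --set agent.endpointPort=443 \
  --set zone.name=my-k8s-cluster \
  stable/instana-agent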
To install and configure the Instana agent within your Kubernetes cluster as a DaemonSet, you need to define a DaemonSet YAML file; an example YAML file to run the Instana agent is given below. Once the YAML file is customized, you can apply it through kubectl:
kubectl apply -f instana-agent.yaml
As with all deployments in Kubernetes, it’s a really good idea to make use of namespaces to keep things organized. The following YAML creates a namespace instana-agent in which the DaemonSet will be created. This allows you to tag the agents to isolate them, or even stop all of them at once by simply deleting the namespace.
You need to change the BASE64_ENCODED_INSTANA_KEY in the file below. The variable is the base64-encoded Instana key for the cluster to which the generated data is sent.
To get your Instana Agent key base64 encoded simply run:
echo YOUR_INSTANA_AGENT_KEY | base64
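The encoded value then goes into the key field of the instana-agent-secret Secret that the DaemonSet reads (see the full example further below); as a minimal sketch:

apiVersion: v1
kind: Secret
metadata:
  name: instana-agent-secret
  namespace: instana-agent
type: Opaque
data:
  key: # paste the output of: echo YOUR_INSTANA_AGENT_KEY | base64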
The following container environment variables might need to be adjusted:
- INSTANA_AGENT_ENDPOINT - IP address or hostname associated with the installation.
- INSTANA_AGENT_ENDPOINT_PORT - port associated with the installation.
Depending on your deployment type (SaaS or On-Premises) and region, you will need to set the agent endpoints appropriately. For additional details relating to the agent endpoints, please see the Agent Configuration.
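For example, for a SaaS installation the endpoint variables in the DaemonSet below would typically look like this (Europe: ingress-blue-saas.instana.io, U.S./rest of the world: ingress-red-saas.instana.io):

env:
  - name: INSTANA_AGENT_ENDPOINT
    value: ingress-red-saas.instana.io
  - name: INSTANA_AGENT_ENDPOINT_PORT
    value: "443"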
In addition, it is recommended to specify the zone or cluster name for resources monitored by this agent DaemonSet:
- INSTANA_ZONE - used to customize the zone grouping on the infrastructure map (see https://docs.instana.io/quickstart/agentconfiguration/#custom-zones). It also sets the default name of the cluster.
- INSTANA_KUBERNETES_CLUSTER_NAME - customized name of the cluster monitored by this DaemonSet.
Note: For most users, it is only necessary to set the INSTANA_ZONE variable. However, if you would like to be able to group your hosts based on the availability zone rather than the cluster name, you can specify the cluster name using INSTANA_KUBERNETES_CLUSTER_NAME instead of the INSTANA_ZONE setting. If you omit INSTANA_ZONE, the host zone will be determined automatically from the availability zone information on the host.
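For example, to report this DaemonSet under a custom zone and give the cluster an explicit name, the agent container environment could include the following (both values are placeholders):

env:
  - name: INSTANA_ZONE
    value: eu-west-1
  - name: INSTANA_KUBERNETES_CLUSTER_NAME
    value: my-k8s-cluster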
If you make changes to configuration.yaml in the ConfigMap, the instana-agent DaemonSet needs to be recreated. The easiest way to apply the changes is to do the following:
kubectl delete -f instana-agent.yaml
kubectl apply -f instana-agent.yaml
Below is the example instana-agent.yaml file to run the agent as a DaemonSet in Kubernetes:
apiVersion: v1
kind: Namespace
metadata:
  name: instana-agent
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: instana-agent
  namespace: instana-agent
---
apiVersion: v1
kind: Secret
metadata:
  name: instana-agent-secret
  namespace: instana-agent
type: Opaque
data:
  key: # echo YOUR_INSTANA_AGENT_KEY | base64
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: instana-configuration
  namespace: instana-agent
data:
  configuration.yaml: |
    # Example of configuration yaml template

    # Manual a-priori configuration. Configuration will be only used when the sensor
    # is actually installed by the agent.
    # The commented out example values represent example configuration and are not
    # necessarily defaults. Defaults are usually 'absent' or mentioned separately.
    # Changes are hot reloaded unless otherwise mentioned.

    # It is possible to create files called 'configuration-abc.yaml' which are
    # merged with this file in file system order. So 'configuration-cde.yaml' comes
    # after 'configuration-abc.yaml'. Only nested structures are merged, values are
    # overwritten by subsequent configurations.

    # Secrets
    # To filter sensitive data from collection by the agent, all sensors respect
    # the following secrets configuration. If a key collected by a sensor matches
    # an entry from the list, the value is redacted.
    #com.instana.secrets:
    #  # One of: 'equals-ignore-case', 'equals', 'contains-ignore-case', 'contains', 'regex'
    #  matcher: 'contains-ignore-case'
    #  list:
    #    - 'key'
    #    - 'password'
    #    - 'secret'

    # Host
    #com.instana.plugin.host:
    #  tags:
    #    - 'dev'
    #    - 'app1'

    # Hardware & Zone
    #com.instana.plugin.generic.hardware:
    #  enabled: true # disabled by default
    #  availability-zone: 'zone'
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  selector:
    matchLabels:
      app: instana-agent
  template:
    metadata:
      labels:
        app: instana-agent
    spec:
      serviceAccountName: instana-agent
      hostIPC: true
      hostNetwork: true
      hostPID: true
      containers:
        - name: instana-agent
          image: instana/agent
          imagePullPolicy: Always
          env:
            - name: INSTANA_AGENT_LEADER_ELECTOR_PORT
              value: "42655"
            - name: INSTANA_ZONE
              value: k8s-cluster-name
            - name: INSTANA_AGENT_ENDPOINT
              # Enter the host your agent will connect to.
              # Europe: ingress-blue-saas.instana.io or U.S./Rest of the World: ingress-red-saas.instana.io
              value: ingress-red-saas.instana.io
            - name: INSTANA_AGENT_ENDPOINT_PORT
              value: "443"
            - name: INSTANA_AGENT_KEY
              valueFrom:
                secretKeyRef:
                  name: instana-agent-secret
                  key: key
            - name: JAVA_OPTS
              # Approximately 1/3 of container memory limits to allow for direct-buffer memory usage and JVM overhead
              value: "-Xmx170M -XX:+ExitOnOutOfMemoryError"
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          securityContext:
            privileged: true
          volumeMounts:
            - name: dev
              mountPath: /dev
            - name: run
              mountPath: /run
            - name: var-run
              mountPath: /var/run
            - name: sys
              mountPath: /sys
            - name: log
              mountPath: /var/log
            - name: var-lib
              mountPath: /var/lib/containers/storage
            - name: machine-id
              mountPath: /etc/machine-id
            - name: configuration
              subPath: configuration.yaml
              mountPath: /root/configuration.yaml
          livenessProbe:
            httpGet:
              # Agent liveness is published on localhost:42699/status
              path: /status
              port: 42699
            initialDelaySeconds: 300
            timeoutSeconds: 3
          resources:
            requests:
              memory: "512Mi"
              cpu: "0.5"
            limits:
              memory: "512Mi"
              cpu: "1.5"
          ports:
            - containerPort: 42699
        - name: instana-agent-leader-elector
          image: instana/leader-elector:0.5.4
          env:
            - name: INSTANA_AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command:
            - "/app/server"
            - "--election=instana"
            - "--http=localhost:42655"
            - "--id=$(INSTANA_AGENT_POD_NAME)"
          resources:
            requests:
              cpu: "0.1"
              memory: "64Mi"
          livenessProbe:
            httpGet:
              # Leader elector liveness is tied to Agent, published on localhost:42699/status
              path: /status
              port: 42699
            initialDelaySeconds: 300
            timeoutSeconds: 3
          ports:
            - containerPort: 42655
      volumes:
        - name: dev
          hostPath:
            path: /dev
        - name: run
          hostPath:
            path: /run
        - name: var-run
          hostPath:
            path: /var/run
        - name: sys
          hostPath:
            path: /sys
        - name: log
          hostPath:
            path: /var/log
        - name: var-lib
          hostPath:
            path: /var/lib/containers/storage
        - name: machine-id
          hostPath:
            path: /etc/machine-id
        - name: configuration
          configMap:
            name: instana-configuration
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: instana-agent-role
rules:
  - nonResourceURLs:
      - "/version"
      - "/healthz"
    verbs: ["get"]
  - apiGroups: ["batch"]
    resources:
      - "jobs"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources:
      - "deployments"
      - "replicasets"
      - "ingresses"
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - "deployments"
      - "replicasets"
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "events"
      - "services"
      - "endpoints"
      - "nodes"
      - "pods"
      - "replicationcontrollers"
      - "componentstatuses"
      - "resourcequotas"
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - "endpoints"
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: instana-agent-role-binding
  namespace: instana-agent
subjects:
  - kind: ServiceAccount
    name: instana-agent
    namespace: instana-agent
roleRef:
  kind: ClusterRole
  name: instana-agent-role
  apiGroup: rbac.authorization.k8s.io
To be able to deploy on Kubernetes versions prior to 1.8 with RBAC enabled, replace the RBAC API version rbac.authorization.k8s.io/v1 with rbac.authorization.k8s.io/v1beta1.
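For example, the headers of the ClusterRole and ClusterRoleBinding from the file above would then read:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
...
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
...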
You might need to grant your user the ability to create authorization roles. In GKE, for example, you can do this with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin --user $(gcloud config get-value account)
If you don’t have RBAC enabled, you will need to remove the ClusterRoleBinding from the above configuration.
To enable a PodSecurityPolicy for the Instana agent:
- Create a PodSecurityPolicy resource as defined in our Helm chart.
- Authorize that policy in the ClusterRole (see the sketch after this list). Note that RBAC has to be enabled, with the ClusterRoleBinding resources created as defined in the aforementioned configuration.
- Enable the PodSecurityPolicy admission controller on your cluster. For existing clusters, it is recommended that policies are added and authorized before enabling the admission controller.
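As a sketch of the authorization step (assuming the policy is named instana-agent-psp, a placeholder rather than necessarily the name used by the Helm chart), the instana-agent-role ClusterRole could gain an additional rule like this:

# Allow the instana-agent ServiceAccount to use the PodSecurityPolicy.
# Older clusters may expose podsecuritypolicies under the "extensions" API group instead of "policy".
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["instana-agent-psp"]
  verbs: ["use"]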
Some types of applications need to reach out to the agent first. Currently, these are:
- .NET Core
Those applications need to know on which IP the agent is listening. As the agent will listen on the host IP automatically, use the following Downward API snippet to pass it in an environment variable to the application pod:
spec:
  containers:
    env:
      - name: INSTANA_AGENT_HOST
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
By default, we don’t schedule the agent onto Kubernetes master nodes, as we respect the default taint node-role.kubernetes.io/master:NoSchedule that is set on most master nodes. To override this, add the following toleration to the agent DaemonSet:
kind: DaemonSet
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  template:
    ...
    spec:
      tolerations:
        - key: "node-role.kubernetes.io/master"
          effect: "NoSchedule"
          operator: "Exists"
      ...
For guidelines on how to configure the Kubernetes NGINX Ingress and our agent for capturing NGINX metrics, see the Monitoring NGINX page. Tracing of the Kubernetes NGINX Ingress is also possible via the OpenTracing project; see Distributed Tracing for NGINX Ingress for guidelines on how to set that up.
Kubernetes has built-in support for storing and managing sensitive information. However, if you do not use that built-in capability but still need the ability to redact sensitive data in Kubernetes resources, the agent secrets configuration has been extended to support that.
To enable sensitive data redaction for selected Kubernetes resources (specifically annotations and container environment variables), set the INSTANA_KUBERNETES_REDACT_SECRETS environment variable to true as shown in the following agent YAML snippet:
spec:
  containers:
    env:
      - name: INSTANA_KUBERNETES_REDACT_SECRETS
        value: "true"
Then configure the agent with the desired list of secrets to match on as described in the agent secrets configuration.
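For example, the com.instana.secrets section that is shown commented out in the configuration.yaml of the ConfigMap above could be uncommented and adjusted like this:

com.instana.secrets:
  # one of: 'equals-ignore-case', 'equals', 'contains-ignore-case', 'contains', 'regex'
  matcher: 'contains-ignore-case'
  list:
    - 'key'
    - 'password'
    - 'secret'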
It is important to note that enabling this capability may cause a decrease in performance of the Kubernetes sensor.
To enable reporting to multiple backends from a Kubernetes agent, see the Docker agent configuration.