Package Based Installation

Supported OS

  • Ubuntu 16.04
  • Debian 9.x
  • RedHat 7.2+
  • CentOS 7.x

Preparation

  • Set up mount points and external volumes.

    • Data stores (defaults to /mnt/data)
    • Cassandra (defaults to /mnt/data); for production installs it should not share a volume with the data stores
    • Logs (defaults to /mnt/logs)
  • Generate an HTTPS/TLS certificate and key for the UI (it should be signed). A combined sketch follows this list.

    • Place server.crt and server.key in /etc/instana. Example (replace <hostname> with the actual hostname of the machine):
    openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -days 365 -nodes -subj "/CN=<hostname>"
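
A combined sketch of the preparation steps, assuming the default /mnt/data and /mnt/logs locations and a self-signed certificate (the hostname below is the illustrative one from the Configuration section; replace it with your own):

    mkdir -p /mnt/data /mnt/logs /etc/instana
    # Generate a self-signed certificate and key for the UI
    openssl req -x509 -newkey rsa:2048 \
      -keyout /etc/instana/server.key -out /etc/instana/server.crt \
      -days 365 -nodes -subj "/CN=instana-backend.acmecorp.com"
    # Confirm the certificate was issued for the expected hostname
    openssl x509 -noout -subject -in /etc/instana/server.crt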
    

Automated Installation

  1. Run the following command as the root user (or with equivalent permissions), replacing the placeholder at the end with your actual agent key:

    curl -o setup_product.sh https://setup.instana.io/product && chmod 700 ./setup_product.sh && ./setup_product.sh -a <customer_agent_key>
    
  2. Edit /etc/instana/settings.yaml by adding customer-specific values (see Configuration for details).

  3. Run the initialization to set up all the necessary services:

    instana-init

  4. Done - all the services should be up and running.

NOTE: ./setup_product.sh -p -a will set up the pre-release repository (-y will bypass prompts).
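
To sanity-check the result, you can query a few of the backend services by name. This is an illustrative spot check, not an exhaustive one, and it assumes the service names match the account names listed under Users and Groups Required below:

    systemctl status instana-acceptor instana-filler instana-ui-backend --no-pager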

Manual Ubuntu/Debian

  1. Execute the following lines as root or with equivalent permissions:

    echo "deb [arch=amd64] https://_:<customer_agent_key>@packages.instana.io/release/product generic main" > /etc/apt/sources.list.d/instana-product.list
    wget -qO - "https://packages.instana.io/Instana.gpg" | apt-key add -
    apt-get update
    apt-get install instana-common
    
  2. Edit /etc/instana/settings.yaml, adding customer-specific values (see Configuration for details).

  3. Run:

    instana-init

  4. Done - all the services should be up and running.
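
If step 1 fails or the package cannot be found, a quick way to confirm that the repository entry and your agent key work is to ask apt where the package would come from (standard apt tooling, nothing Instana-specific):

    apt-get update
    apt-cache policy instana-common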

Manual RedHat/CentOS

  1. Execute the following lines as root or with equivalent permissions:

    cat >/etc/yum.repos.d/Instana-Product.repo <<EOF
    [instana-product]
    name=Instana-Product
    baseurl=https://_:<customer_agent_key>@packages.instana.io/release/product/rpm/generic/x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=1
    gpgkey=https://packages.instana.io/Instana.gpg
    sslverify=1
    
    #proxy=http://x.x.x.x:8080
    #proxy_username=
    #proxy_password=
    
    EOF
    
    yum makecache -y fast
    yum install -y instana-common
    
  2. Edit /etc/instana/settings.yaml by adding customer-specific values (see Configuration for details).

  3. Run:

    instana-init

  4. Done - all the services should be up and running.
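
If the installation fails, you can confirm that the repository is enabled and the package is visible with standard yum tooling:

    yum repolist enabled | grep -i instana
    yum info instana-common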

Configuration

All backend packages read their environment configuration from /etc/instana/settings.yaml. Settings marked with # Required below must be set.

---

user: # Required
  name: "John Doe" # (string) The name of the first user that's being added to the system with full privileges. For this user, the name is only a label.
  password: "10647cbb3b182dc9f295039c" # (string) The password of the first user to be added to the system. This password will be required to log into the Web-UI.
  email: "john.doe@acmecorp.com" # (string) The email address of the first user to be added to the system. This email address will be required to log into the Web-UI. Note: Until the installed and running backend has email delivery settings in place, this address wont receive any confirmation mails. Note: In combination with LDAP this email address should match a valid LDAP user's email to login with owner access.
config: # Required
  keys: # Required
    agent: "12345678-dead-beef-abcd-12456789abc" # (string) Your assigned agent key. This key will be the identifier under which the acceptor package will accept traffic from sending instana agents.
    sales: "12345678-dead-beef-abcd-12456789abc" # (string) Your assigned sales key. This key identifies you and your Trial-/Customer-Deals so you can run multiple Backends independently.
  tenant: # Required
    name: "acmecorp" # (string) Your assigned tenancy identifier
    unit: "acmecorp" # (string) Your assigned backend identifier. As described above, multiple backends per tenant are possible.
  hostname: "instana-backend.acmecorp.com" # Required # (string) The hostname of the machine the backend is being installed on. Note: In order to have all the features we provide, please make sure the backend machine can resolve this hostname itself.
  retention: # Please note that a value of zero tells the system to not drop rollups of this time span. A zero value for smaller rollups can cause the disks to fill up pretty quickly.
    rollup5: 86400 # (int) Retention in seconds for 5-second rollups.
    rollup60: 2678400 # (int) Retention in seconds for 1-minute rollups.
    rollup300: 8035200 # (int) Retention in seconds for 5-minute rollups.
    rollup3600: 0 # (int) Retention in seconds for 1-hour rollups.
    traces: 604800 # (int) Retention in seconds for traces. (Added with release 125)
  cert: # Required
    crt: "/etc/instana/server.crt" # (string) Either a self-signed or publicly signed certificate's crt-file, issued for the backend instance's hostname.
    key: "/etc/instana/server.key" # (string) Either a self-signed or publicly signed certificate's key-file (see above).
  dir: # Required
    cassandra: "/mnt/data" # (string) The parent directory of where cassandra will create its data directory. We recommend to put this on a fast device.
    data: "/mnt/data" # (string) The parent directory of where our own and third party components will store data.
    logs: "/mnt/logs" # (string) The parent directory of where all the components in our stack will create logfiles.
  heap:
    kafka: "3G" # (string) Virtual memory size assigned for the Kafka service. |
    elastic: "3G" # (string) Virtual memory size assigned for the Elasticsearch service. |
    cassandra: "3G" # (string) Virtual memory size assigned for the Cassandra service. |
    acceptor: "2G" # (string) Virtual memory size assigned for the Acceptor service. |
    cashier: "1G" # (string) Virtual memory size assigned for the Cashier service. |
    filler: "3G" # (string) Virtual memory size assigned for the Filler service. |
    groundskeeper: "1G" # (string) Virtual memory size assigned for the Groundskeeper service. |
    issuetracker: "2G" # (string) Virtual memory size assigned for the Issuetracker service. |
    processor: "2G" # (string) Virtual memory size assigned for the Processor service. |
    uibackend: "1G" # (string) Virtual memory size assigned for the UI-Backend service. |
    butler: "1G" # (string) Virtual memory size assigned for the Butler service. |
  mail:
   from: "ops-notifications@acmecorp.com" # (string) This is the email address which will be used as the sender from our notification emails.
   host: "relay-1.acme.internal" # (string) The address or hostname of the SMTP server, that should send our notification emails.
   port: 25 # (int) The port of the SMTP server that should send our notification emails.
   user: "ops-notifications@acmecorp.com" # (string) The SMTP username of the SMTP server that should send our notification emails.
   password: "yUnoEm41l" # (string) The SMTP user's password of the SMTP server that should send our notification emails.
   usessl: true # (bool) Dictates if whether the SMTP server that should send our notification emails shall be spoken to via SSL.
   starttls: false #  (bool) Dictates if whether the SMTP server that should send our notification emails shall be spoken to via SSL.
   token: "Ju~B_JK]O=U1" # (string) A string of exactly twelve characters, which is being used as a hash to generate links which are being used by our notification emails.
  ses:
    from: "ops-notifications@acmecorp.com" # (string) This is the email address which will be used as the sender from our notification emails.
    aws_access_key_id: "AKIAIOSFODNN7EXAMPLE" # (string) In case you want to send notification emails via the AWS SES service, this is the access key id to use.
    aws_secret_access_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" # (string) The secret key to use (see above).
    aws_region: "us-west-2" # (string) The AWS Region to send the SES emails from.
    return_path: "notifications@acme.corp" # (string) The return path for AWS - see the SES/SNS bounces/complaints documentation.
  proxy:
    host: "10.9.4.13" # (string) If your backend server should speak to the cluster-external internet via a proxy server, this is either the address or hostname.
    port: 8080 # (int) Port for the proxy server to speak to.
    user: "anonymous" # (int) Username of the proxy server.
    port: "P4ssword1234!" # (string) Password of the proxy server.
  auto_create_users: false # (bool) Whether the backend should automatically create default Instana settings for every user with a successful LDAP login in the background. (Added with release 128)
  ldap: # (These settings were added with release 124 and refactored in a backward-compatible way with release 139)
    url: "ldaps://dir-ops.acme.internal:636" # (string) LDAP Server URL (ldap://host:389 or ldaps://host:636)
    base: "(dc=instana,dc=com)" # (string) A base for every query
    group_query: "(ou=Instana)" # (string) The query to list a group or a set of groups of members having acceess to Instana
    group_member_field: "user_dn" # (string) Name of the field containing DNs of users listed through group_query.
    user_dn_mapping: "sAMAccountName" # (string) The field which contains the loginname that is entered in the Instana UI.
    user_field: "uid" # (string) The field which defines group membership if not related via DN
    user_query_template: "(uid=%s)" # (string) Template to query the user
    email_field: "mail" # (string) The name of the field where to find the email address
    ro_user: "instanaconnect" # (string) User for initial LDAP bind. It needs to have sufficient rights to list groups through group_query.
    ro_password: "MGQ5NTVkYjlhNmE" # (string) Password for initial LDAP bind
  o_auth: # (These settings were added with release 133)
    google:
      client_id: '1234567890-1n574n4abcdefghijklmnop.apps.googleusercontent.com' # (string) A Google OAuth credential client ID
      client_secret: 'XNLV-fpf_deadBeEf1234' # (string) A Google OAuth credential client secret
  #artifact_repository: # Configure a Maven mirror for agent downloads; do not uncomment unless a local mirror exists (Added with release 136)
  #  base_url: https://artifact-public.instana.io/artifactory/shared/
  #  username:
  #  password:
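
Condensed from the annotated listing above, a minimal settings.yaml containing only the required sections looks like this (all values are the illustrative ones from above; replace them with your own):

user:
  name: "John Doe"
  password: "10647cbb3b182dc9f295039c"
  email: "john.doe@acmecorp.com"
config:
  keys:
    agent: "12345678-dead-beef-abcd-12456789abc"
    sales: "12345678-dead-beef-abcd-12456789abc"
  tenant:
    name: "acmecorp"
    unit: "acmecorp"
  hostname: "instana-backend.acmecorp.com"
  cert:
    crt: "/etc/instana/server.crt"
    key: "/etc/instana/server.key"
  dir:
    cassandra: "/mnt/data"
    data: "/mnt/data"
    logs: "/mnt/logs"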

Users and Groups Required

If local system users and groups cannot be created for the Instana services, you must add the following users and groups (user:group) to your directory system:

  • cassandra:cassandra
  • elasticsearch:elasticsearch
  • kafka:kafka
  • mongodb:mongodb
  • postgres:postgres
  • redis:redis
  • zookeeper:zookeeper
  • instana-acceptor:instana-acceptor
  • instana-butler:instana-butler
  • instana-cashier:instana-cashier
  • instana-eum-acceptor:instana-eum-acceptor
  • instana-filler:instana-filler
  • instana-groundskeeper:instana-groundskeeper
  • instana-issue-tracker:instana-issue-tracker
  • instana-processor:instana-processor
  • instana-ui-backend:instana-ui-backend
  • instana-ui-client:instana-ui-client
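
Where local creation is permitted and you want to pre-create the accounts yourself, a minimal sketch using standard groupadd/useradd flags; the account list is exactly the one above, and the nologin shell path varies by distribution (/usr/sbin/nologin on Debian/Ubuntu, /sbin/nologin on RedHat/CentOS):

    for u in cassandra elasticsearch kafka mongodb postgres redis zookeeper \
             instana-acceptor instana-butler instana-cashier instana-eum-acceptor \
             instana-filler instana-groundskeeper instana-issue-tracker \
             instana-processor instana-ui-backend instana-ui-client; do
      groupadd --system "$u"                              # group name matches the user name
      useradd --system -g "$u" -s /usr/sbin/nologin "$u"  # system account, no interactive login
    done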

Package Based Release Notes

Release 140 will be the last classic Instana version. Release 143 adds Application Perspectives; releases 141 and 142 are skipped for on-prem.

Note: 138 and beyond

As of build 138, package-based release notes can be found directly on the corresponding release notes page.

137

Debian Users: Debian 9 is now the minimum supported Debian version. To upgrade to release 137 from Debian 8:

  • Upgrade the Instana backend system to Debian 9
  • apt-get purge nginx
  • Run instana-update

136

We have added the ability to configure a local Maven mirror for portal-based agent downloads:

config:
  artifact_repository:
    base_url: https://your.local.maven.uri/artifactory/shared/
    username: # your local maven username
    password: # your local maven password

Do not configure this section unless you have already deployed a local Maven mirror of the Instana agent artifacts.

133

We now support authenticating your on-premises installation via single sign-on against OAuth credentials backed by Google:

config:
  o_auth: # (These settings were added with release 133)
    google:
      client_id: # a google oauth credential client id
      client_secret: # a google oauth credential client secret password

Please note that you need to enable the SSO domain in the UMP.

130

This release contains a rewrite of the End User Monitoring component (previously named instana-eum-tracer), which is now called instana-eum-acceptor. instana-debug and other components have been adjusted to work with it, so you'll find its logs in the same directory as every other component (default /mnt/logs).

  • Package name: instana-eum-acceptor
  • Service name: instana-eum-acceptor
  • System user name: instana-eum-acceptor
  • System group name: instana-eum-acceptor
  • Default home directory: /srv/instana-eum-acceptor

If you're using a directory system that doesn't permit renaming or adding/removing system users, please make sure you rename the instana-eum-tracer group and user accordingly.

129

In release 129 we added two optional settings to the /etc/instana/settings.yaml file in case you're using Amazon's SES service to send notification emails instead of an SMTP server:

config:
  mail:
    aws_region: "us-west-2" # (string) The region to send SES notification emails from
    return_path: "bounces@acme-corp.com" # (string) SES return path for notification emails

128

As part of dividing data types in our Elasticsearch setup, this release includes a migration. When updating from an older version to 128, instana-update will stop with a message asking you to run the migration command:

instana-migrate-128

After this has been run, instana-update will upgrade to the new release.
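
In other words, the upgrade becomes a two-step sequence:

instana-migrate-128
instana-update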

Also, for customers using an LDAP-backed on-prem setup, we have added another option for the /etc/instana/settings.yaml file:

config:
  auto_create_users: false # (bool) Automatically create users on successful LDAP login

This option will, after a successful LDAP authentication, create the authenticated user in Instana's backend database without anyone having to fill out forms in the user administration panel. In the YAML hierarchy, this setting sits next to 'ldap'.

125

Support has been added for configurable trace retention. See the new /etc/instana/settings.yaml.template for the default value.

config:
  retention:
    traces: 604800

124

For customers utilizing LDAP authentication, please ensure the settings.yaml file has been updated to include the customer LDAP settings prior to running instana-update:

config:
  ldap:
    url:
    base:
    group_query:
    user_query_template:
    email_field:
    user_field:
    user_template:
    dn_field:
    ro_user:
    ro_password:

A fix was added to resolve incidents that should have been closed. The issue tracker should fix these itself; however, if a customer is experiencing issues due to a large number of events, a script has been provided to migrate these out of band: instana-migrate-124.

122

After updating to version 122, the events must be migrated to the new naming scheme. This can be done by simply running:

instana-migrate-122

This will stop the issue tracker, migrate the events to the new naming scheme, and start it again. NOTE: This is only required for customers who are upgrading, not for fresh installs.

120-16

Due to a configuration error, the Cassandra cluster name must be updated before running instana-update:

  • Edit /etc/cassandra/cassandra.yaml OR /etc/cassandra/conf/cassandra.yaml by setting cluster_name to "onprem", then run:

    export JAVA_HOME=/opt/instana/jre
    cqlsh -e "UPDATE system.local SET cluster_name = 'onprem' where key='local';"
    
    nodetool flush system
    systemctl restart cassandra
    
  • Ensure Cassandra is running and accepting connections before running instana-update.
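
To verify that the rename took effect and that Cassandra is accepting connections, you can reuse the same cqlsh and nodetool tools from the steps above:

    cqlsh -e "SELECT cluster_name FROM system.local;"
    nodetool status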

Artifacts (packages/images) are incorrectly versioned as 121. This will be resolved in the actual 121 release.