Application & Service Management


Traditional Application Performance Management (APM) solutions are about managing the performance and availability of applications.

To APM tools, an application is a static set of code runtimes (e.g. JVM or CLR) that are monitored using an agent. Normally, the application is defined as a configuration parameter on each agent.

This concept, which was a good model for classical 3-tier applications, no longer works in modern (micro)service applications. A service does not always belong to exactly one application. Think of a credit card payment service that is used in the online store of a company and also in their Point of Sale solution. A solution to this problem could be to define every service as an application, but that would introduce new issues:

  • Too many applications to monitor. Treating every service as an application would result in hundreds or thousands of applications. Monitoring them using dashboards will not work; it is simply too much data for humans to process.
  • Loss of context. As every service is treated separately, it would not be possible to understand dependencies or the role of a service in the context of a problem.

Instana introduces the next generation of APM with its application hierarchy of services, endpoints, and application perspectives across them. Our main goal is to simplify the monitoring of your business’ service quality. From the data we collect from traces and component sensors, we discover your application landscape directly from the services actually being implemented. Based upon that data, we know the health of every individual service and, consequently, the health of the entire application landscape.

Application Perspectives

The term “application” is ambiguous, and different teams may use it to describe distinctly different things. Instead of applications, Instana provides application perspectives, in order to allow teams to capture the type of semantics that are meaningful to them.

A team that runs a multi-tenant offering likely wants to capture their tenants (tenant:one, tenant:two) as applications. Another team that runs an eCommerce site may want to capture their US (zone:us) and German (zone:eu) stores as two separate applications. Yet another team may want to capture their different environments (k8s.env:prod, k8s.env:dev) as applications. And finally, one customer may want to group a set of services that work together functionally into an application. Each of these applications comes with its own semantics and varying use cases.

Different Application Perspectives

An application perspective may overlay entire services, or just a subset of calls to those services. Application perspectives may overlap each other completely, partially, or not at all. Services and endpoints may be included within the definition of more than one application, or none at all.

Working with Application Perspectives

Applications are fundamentally groups of calls, and which calls match depends on tags. To create a new application perspective, simply go to the application perspective inventory and select “+ Create Application Perspective”. In the dialog, select one or more tags which make up your application perspective. Every call that matches these conditions is associated with this application, as well as calls to any database or messaging services invoked within the application.

Application Creation Dialog

Tags, including infrastructure entity tags such as agent.tag, can be applied to both the source and the destination of a call. By default, a tag is applied to the destination, but you can change it to the source in the edit filter dialog. By combining source and destination, you can, for example, define an application which groups calls issued from ‘prod’ towards ‘test’.

Tags can be combined with operators: for each tag, users can choose AND or OR to join it with the following tag. When AND and OR conjunctions are mixed, AND binds more tightly than OR. For example, given the tag definition tag.A AND tag.B OR tag.C AND tag.D, Instana will interpret and process it as (tag.A AND tag.B) OR (tag.C AND tag.D).
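As an illustration of this precedence, here is a minimal sketch (not Instana's implementation) that evaluates such a tag expression against the set of tags present on a call:

```python
# Illustrative only: AND binds more tightly than OR, i.e.
#   tag.A AND tag.B OR tag.C AND tag.D == (tag.A AND tag.B) OR (tag.C AND tag.D)

def matches(call_tags, expression):
    """expression: tag names joined by "AND"/"OR" operator tokens,
    e.g. ["tag.A", "AND", "tag.B", "OR", "tag.C", "AND", "tag.D"]."""
    # Split the expression into OR-separated groups of AND-ed tags.
    groups, current = [], []
    for token in expression:
        if token == "OR":
            groups.append(current)
            current = []
        elif token != "AND":
            current.append(token)
    groups.append(current)
    # The call matches if every tag of at least one group is present.
    return any(all(tag in call_tags for tag in group) for group in groups)

expr = ["tag.A", "AND", "tag.B", "OR", "tag.C", "AND", "tag.D"]
print(matches({"tag.C", "tag.D"}, expr))  # True
print(matches({"tag.A", "tag.D"}, expr))  # False
```

A call carrying only tag.A and tag.D does not match, because neither AND-group is fully satisfied.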

To update or delete an Application Perspective, select the “Configuration” tab. When an Application Perspective is deleted, it’s still listed in the inventory as long as it existed during the selected time window.

In terms of KPIs and alerting, there are two different methods of interpreting an application:

  • Inbound Calls mode: KPIs and the respective alerts are calculated only from calls initiated outside the application whose destination service is part of the selected application perspective. This enables you to track the application perspective the way its consumers perceive it.
  • All Calls mode: KPIs are calculated from every call made to the application, whether at the application boundary or deep inside the application. This mode provides a holistic view of the application.
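The difference between the two modes can be sketched as follows (the call and service names are hypothetical, and this is a simplification of how Instana actually attributes calls):

```python
# Illustrative sketch of the two KPI modes, assuming each call records its
# source and destination service.

def inbound_calls(calls, app_services):
    """Calls crossing the application boundary: destination inside the
    application, source outside it."""
    return [c for c in calls
            if c["dst"] in app_services and c.get("src") not in app_services]

def all_calls(calls, app_services):
    """Every call whose destination is part of the application, whether on
    the boundary or deep inside it."""
    return [c for c in calls if c["dst"] in app_services]

app = {"shop", "catalog"}
calls = [
    {"src": "browser", "dst": "shop"},   # crosses the boundary
    {"src": "shop", "dst": "catalog"},   # internal to the application
    {"src": "shop", "dst": "payments"},  # leaves the application
]
print(len(inbound_calls(calls, app)))  # 1
print(len(all_calls(calls, app)))      # 2
```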

When browsing an Application Perspective, you can switch between Inbound Calls and All Calls at any time, and the information displayed adjusts accordingly (except for the Service Dependency map, at the moment).

Within the Application Perspective’s “Configuration” tab, you can also configure a default mode that applies to detecting issues via built-in events. If no default is specified, built-in events rely on the All Calls KPIs, for reasons of backwards compatibility with Application Perspectives created before the two modes were introduced. Updating the configuration of an existing Application Perspective to specify a default mode automatically adjusts the built-in events to rely on that mode.

Showing Inbound or All Calls within Application Perspectives

Wherever the Inbound or All Calls selector is available on the dashboard, the perspective can be changed from the default. When selecting Services and/or Endpoints within an Application Perspective, the boundary setting is inherited.

Application Boundary Call Selection


Services

A service can be seen as a logical component that provides a public API to the rest of the system, where the API is made up of the service’s endpoints. A monitored service both makes and receives calls; a request to a service results in a single call to a particular endpoint.

Services can be considered in isolation or through the lens of an application perspective. Services will often map to one ‘unit of deployment’, such as a package or container. If multiple instances of such a unit (e.g., multiple containers) operate at the same time, they all map to the same logical service.

Service types are assigned automatically through inheritance from endpoints. For example, if a service has both HTTP and BATCH endpoints, then it is assigned both HTTP and BATCH types. KPIs (Calls, Erroneous Calls, Latency) for services display an aggregate of all calls, regardless of type.
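The inheritance and aggregation described above can be sketched as follows (the data and field names are illustrative, not Instana's internal model):

```python
# Illustrative sketch: a service inherits the union of its endpoints' types,
# and its KPIs aggregate calls across all endpoints regardless of type.

endpoints = [
    {"name": "GET /orders", "type": "HTTP", "calls": 120},
    {"name": "nightly-import", "type": "BATCH", "calls": 3},
]

service_types = sorted({e["type"] for e in endpoints})
total_calls = sum(e["calls"] for e in endpoints)
print(service_types)  # ['BATCH', 'HTTP']
print(total_calls)    # 123
```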

Working with Services

Instana will automatically map services based on default rules. These rules examine call and infrastructure data and create a service as soon as it’s recognized. Additionally, teams can add a custom rule. See Service Mapping below for details.


Endpoints

Endpoints define the API of a service and allow a fine-grained view into the operations the service exposes. Every call to a service happens on a single endpoint, and every endpoint has a single, automatically discovered type: BATCH, SHELL, DATABASE, HTTP, MESSAGING, or RPC. Like services, endpoints can also be viewed through the lens of an application perspective.

The endpoint can be statically declared or based on call tags (in contrast to the service, which is typically determined using infrastructure tags, e.g. the Spring Boot name). For example, a combination of {call.http.method} {call.http.path} would be a typical endpoint of an HTTP service.

Another benefit of defining separate endpoints for a service is that services, especially ‘monoliths’, can include many different frameworks and technologies: HTTP/REST APIs, database access, message bus integration, etc. By creating at least one endpoint per type of API, and possibly breaking them down further (e.g., by protocol details), metrics and statistics can be captured per type of API.

Working with Endpoints

Instana will automatically map endpoints based on default rules. See Endpoint Mapping below for details.

Endpoint Grouping - Path Templates

Instana is smart. For many frameworks, Instana will automatically group endpoints based on their REST fingerprint. For example, endpoints such as GET /api/products/1, GET /api/products/2, and GET /api/products/42 (which are served by the same application code and hence have a shared performance profile) will be grouped and reported together as GET /api/products/{id}. (The concrete paths here are illustrative.)

This is done automatically, and no user steps are required for known frameworks.
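As a rough illustration of this grouping, the heuristic below collapses purely numeric path segments into a template. Instana's actual grouping is derived from the framework's routing information, so treat this as a sketch only:

```python
# Illustrative heuristic: collapse numeric path segments into a "{id}"
# placeholder so calls served by the same handler report under one template.

import re

def path_template(path):
    return "/".join("{id}" if re.fullmatch(r"\d+", seg) else seg
                    for seg in path.split("/"))

for p in ["/api/products/1", "/api/products/2", "/api/products/42"]:
    print(path_template(p))  # /api/products/{id} each time
```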

If you are writing your own tracing code (e.g. with OpenTracing), see this section on how you can achieve the same results.

Synthetic endpoints

Synthetic endpoints (e.g. health checks) represent simulated traffic which does not count towards the KPIs of your applications and services.

Synthetic endpoints

KPIs of individual synthetic endpoints are still available.

Synthetic endpoint

In the Analyze view, search is done by default on non-synthetic calls, but this can be overridden using the call.is_synthetic tag.

Synthetic endpoints in Analyze

See this section on how to configure synthetic endpoints.

The “Unspecified” Endpoint

When there is insufficient information to automatically map a call to an endpoint (e.g., manually instrumented endpoints that do not provide sufficient data), those calls are mapped to the special “Unspecified” endpoint.

The “Others” Endpoint

When too many endpoints are detected on a given service, calls are grouped under the special “Others” endpoint. This safeguard is meant to keep the set of endpoints to a reasonable size.

This usually happens when one of Instana’s predefined rules, or a custom rule, groups calls into a large number of endpoints that each receive only a single call.

For example, one of Instana’s predefined rules maps HTTP calls to endpoints named after the first path segment of the URL. If those segments are built from dynamic values (e.g. “GET /blue-item”, “GET /red-item”…), applying this rule leads to a large number of endpoints, which is not useful.
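The safeguard can be sketched as follows. The endpoint limit used here is made up for illustration; the real threshold is Instana-internal:

```python
# Illustrative sketch: name endpoints after the first URL path segment, then
# fold the long tail into an "Others" bucket once too many distinct endpoints
# appear. MAX_ENDPOINTS is a hypothetical limit, not Instana's actual value.

from collections import Counter

MAX_ENDPOINTS = 3

def first_segment(path):
    segments = [s for s in path.split("/") if s]
    return "/" + segments[0] if segments else "/"

def endpoints_with_others(paths):
    counts = Counter(first_segment(p) for p in paths)
    keep = dict(counts.most_common(MAX_ENDPOINTS))
    others = sum(n for name, n in counts.items() if name not in keep)
    if others:
        keep["Others"] = others
    return keep

paths = ["/blue-item", "/red-item", "/green-item", "/api/x", "/api/y", "/api/z"]
print(endpoints_with_others(paths))
```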

When dealing with HTTP endpoints, the situation may be improved by changing the configuration of endpoint extraction rules.


Service Mapping

Services are an integral part of APM and show a logical view of your system. Services are derived from infrastructure components like hosts, containers, and processes. The act of assigning specific infrastructure components like containers to one or more services is referred to as service mapping.

In Instana, all services are automatically mapped based on an extensive set of predefined rules, so in most cases an Instana user doesn’t need to do anything: services are automatically discovered, and dashboards, metrics, and rules are made available.

Customizing Service Mapping

There are four ways of customizing the default service mapping: via a custom service rule, using the service call tag, specifying the INSTANA_SERVICE_NAME environment variable, or specifying the HTTP header X-Instana-Service. Note that the custom service rule takes priority over all other rules, even when services are configured explicitly using, e.g., the INSTANA_SERVICE_NAME environment variable.
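The precedence can be sketched as a simple lookup chain. The field names below are hypothetical; the source only guarantees that the custom service rule wins, while the relative order of the remaining mechanisms shown here follows the predefined-rules table further down and should be treated as illustrative:

```python
# Illustrative sketch of service-mapping precedence: the first configured
# source wins, falling back to the automatically detected name.

def service_name(call):
    for source in ("custom_rule", "call_tag_service",
                   "env_instana_service_name", "http_x_instana_service"):
        if call.get(source):
            return call[source]
    return call["auto_detected"]  # fall back to the predefined rules

call = {"env_instana_service_name": "payments",
        "custom_rule": "acme-payments",
        "auto_detected": "springboot-app"}
print(service_name(call))  # acme-payments: the custom rule wins
```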

Custom Service Rule

A good example of using a custom service rule is leveraging existing meta information on infrastructure components. For example, a user labels their Docker containers with domain-specific information like com.acme.service-name:myservice. To map services from this label, click “Custom Service Rule” in the service configuration and add the key docker.label plus com.acme.service-name. All calls that pass through a container with a com.acme.service-name label are then associated with a service named by that label’s value, e.g. myservice. There are many other tags available to create custom service mappings.

You can also add multiple keys for service mapping; multiple tags are concatenated and separated with a dash. Note that all keys need to match. For example, if you want to separate your staging services based on the host zone, you could add two keys: the host zone tag plus docker.label:com.acme.service-name. Your services would then be named with the value of the host zone followed by the Docker label, separating out services such as prod-myservice and dev-myservice.
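A minimal sketch of the multi-key behaviour (the tag keys used here are illustrative, not necessarily Instana's exact key names):

```python
# Illustrative sketch of multi-key custom service rules: every key must
# match, and the matched values are joined with a dash to form the name.

def map_service(entity_tags, rule_keys):
    values = [entity_tags.get(key) for key in rule_keys]
    if all(values):  # all keys need to match
        return "-".join(values)
    return None

tags = {"host.zone": "prod",
        "docker.label.com.acme.service-name": "myservice"}
rule = ["host.zone", "docker.label.com.acme.service-name"]
print(map_service(tags, rule))  # prod-myservice
```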

Service Configuration Dialog

By using the special tag service.default_name, a custom service rule can also be used to extend the service default rules with additional tags. E.g., if you want to split the automatically created services by host zones, you can create a custom service rule using the tags service.default_name and the host zone tag.

Using the service call tag

A user can also associate very specific calls by annotating them with the service call tag, which enables very tailored mappings within the user’s code. The service annotation is turned into the call.tag.service tag for searching and analyzing calls.

There are several options for adding call annotations:

For more information regarding usage of the Java Trace SDK and naming of services, also see the respective Conversion and Naming section.

Specifying the INSTANA_SERVICE_NAME environment variable

By setting the INSTANA_SERVICE_NAME environment variable on a process, the value of the environment variable is used as the service name for all calls whose destination is that process.

For more information on environment variables recognized by Instana, have a look at the General Reference - Environment Variables page.

Specifying the X-Instana-Service HTTP header

By setting the X-Instana-Service HTTP header on a call, the destination (service receiving the HTTP request) will be tagged with the value provided in the header.

NOTICE: The X-Instana-Service HTTP header is not automatically collected. For it to work, the Instana agent must be configured accordingly, see “Capturing custom HTTP headers”.

Predefined Rules

The following rules are considered in the order top to bottom. When a rule matches, the respective service is created.

| Rule | Tags |
| --- | --- |
| **User defined:** | |
| Custom service rule | Tags defined by the user (see custom service rule) |
| Use call tag | {} |
| Use HTTP header X-Instana-Service | {call.http.header.X-Instana-Service} |
| Trace SDK service name | {call.tag.service} |
| Jaeger service name | {call.tag.jaeger.service} |
| Zipkin service name | {call.tag.zipkin.service} |
| **Infrastructure based:** | |
| Consul cluster name | Consul@{} |
| Cassandra cluster name | {} |
| Elasticsearch cluster name | {} |
| Couchbase cluster name | {} |
| MongoDB replica set name | {mongo.replicaSetName} |
| Kafka cluster name | {} |
| AWS ECS container name | {} |
| AWS ECS task family | {} |
| Kubernetes container name | {} |
| Cloud Foundry application name | {} |
| Docker Swarm service name | {} |
| Marathon application id | {} |
| Nomad task name | {} |
| Rancher 1 project service name | {} |
| Container image name | {} |
| JBoss / Tomcat deployment name (parsed) | {} |
| Dropwizard name | {} |
| Spring Boot name | {} |
| JBoss name | {} {} |
| WebLogic name | {} |
| WebSphere name | {} |
| Redis port | Redis@{redis.port} on {} |
| Neo4j port | Neo4j@{neo4j.port} on {} |
| Memcached port | Memcached@{memcached.port} on {} |
| Varnish port | Varnish@{varnish.port} on {} |
| ClickHouse port | ClickHouse@{clickhouse.httpPort} on {} |
| MongoDB database name | MongoDB@{mongo.port} on {} |
| Zookeeper port | Zookeeper@{zookeeper.clientPort} on {} |
| Solr version | Solr@{solr.version} on {} |
| Solr | Solr on {} |
| PostgreSQL port | PostgreSQL@{postgresql.port} on {} |
| CockroachDB port | CockroachDB@{cockroachdb.port} on {} |
| MySQL port | MySQL@{mysql.port} on {} |
| OracleDB port | OracleDB@{oracledb.port} on {} |
| MSSQL database sid | MSSQL@{mssql.instance} |
| MariaDB port | MariaDB@{mariadb.port} on {} |
| Kafka version | Kafka@{kafka.version} on {} |
| ActiveMQ broker name | {} |
| RabbitMQ version | RabbitMQ@{rabbitmq.version} on {} |
| JVM name (parsed) | {} |
| JVM name | {} |
| Node.js application with host environment | {} |
| Python snapshot name | {} |
| Ruby name | {} |
| Go name | {} |
| PHP using host header (parsed) when available | {} |
| PHP / PHP-FPM worker pool | PHP |
| CLR name | {} |
| .Net Core name | {} |
| Crystal name | {} |
| Haskell name | {} |
| **Call based:** | |
| Shell Span | Spawned processes |
| Span RPC service using object with WSDL namespaces | {call.rpc.object-1} |
| Span RPC service using object | {call.rpc.object-1} |
| Span HTTP service with host | {} |
| Span HTTP service parsing URL | {call.http.url-1} |
| Span Database FTP service | {call.database.connection} |
| MongoDB database name, using span data | {call.database.schema-1} |
| Elasticsearch database name, always use connection from span data | {call.database.connection-1} |
| Couchbase host name from span data | {call.database.connection-1} |
| Span Database schema | {call.database.schema} |
| Span Database connection, extract schema from URI when schema empty | {call.database.connection-1} |
| Span Database service using host | {call.database.connection-1} |
| Span Database type (when connection and schema empty just show the type) | {call.database.type} |
| Span Messaging service using address | {call.messaging.address-1} |
| Span Messaging service using address | {call.messaging.address} |
| Span Messaging type (when address is empty just show the type) | {call.messaging.type} |
| Span SDK name | SDK |
| Span RMI name for RPC-RMI spans | RMI |

Endpoint Mapping

Endpoints are automatically mapped based on the endpoint type.

For example, HTTP endpoints are automatically mapped based on the path template, if available. When the path template is not available, the endpoints are mapped to the first path segment, or can be configured as desired.

Customizing Endpoint Mapping

For each service, Instana allows one configuration which controls how the service’s endpoints are extracted. To access the configuration, navigate to a service’s dashboard, go to the Endpoints tab, and hit “Configure Endpoints”.


Note that custom endpoint configurations are available for HTTP-based services only.

Custom Endpoint Rules

Instana comes with three default HTTP rules which are always part of the extraction chain:

  • Detected Framework: Extracts endpoints as specified by the detected framework (if available)
  • /*: Extracts endpoints based on the first path segment
  • Unspecified: Calls that do not match a preceding rule are assigned to this endpoint

To specify additional configuration, you can define multiple custom rules. To do so, access the custom endpoint configuration dialog from any HTTP service and hit “Add Custom HTTP Rule”. The dialog that opens lets you define a specific path (the actual rule) and multiple test cases to check whether the defined path works as expected. For the path, you can use static segments like /api or /myShopEndpoint, path parameters like /{productId}, or match any segment with /*.

In the rule tester part of the dialog, you can define multiple test cases to validate rules. For example, given the rule /api/*/{version}, the test case /api/anyName/123 will match, while /otherApi/anyName/123 will not.
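A sketch of this matching logic, assuming each rule segment is either a static segment, a {parameter}, or the wildcard * (this is an illustration, not Instana's matcher):

```python
# Illustrative matcher: static segments must match exactly, while
# "{param}" and "*" segments each match any single path segment.

def rule_matches(rule, path):
    rule_segs = [s for s in rule.split("/") if s]
    path_segs = [s for s in path.split("/") if s]
    if len(rule_segs) != len(path_segs):
        return False
    return all(r == "*" or (r.startswith("{") and r.endswith("}")) or r == p
               for r, p in zip(rule_segs, path_segs))

print(rule_matches("/api/*/{version}", "/api/anyName/123"))       # True
print(rule_matches("/api/*/{version}", "/otherApi/anyName/123"))  # False
```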


Sequential Evaluation

Rules are applied from top to bottom, and calls are assigned to the first matching rule. To change the sequence, simply reorder the rules via drag & drop. Instana’s default rules can be disabled, but not reordered.


Synthetic endpoints

Endpoints which receive only synthetic traffic (e.g. calls to health-check endpoints) can skew the KPIs of your applications and services.

Instana is able to auto-detect these endpoints, and prevent them from affecting application and service KPIs.

Additionally, custom rules can be added to flag further endpoints. Rules apply globally (across all services) and can be accessed by clicking the “Configure Services” button at the top of the services list.

The built-in rule can be disabled, and custom rules can be added, disabled, and removed. To aid in configuration, the affected endpoints and services are displayed.

Synthetic endpoints configuration

Application Dependency Map

The dependency map is available for each application and provides:

  • an overview of the service dependencies within your application.
  • a visual representation of calls between services to understand communication paths and throughput.
  • different layouts to quickly gain an understanding of the application’s architecture.
  • easy access to service views (dashboards, flows, calls and issues).

Application Dependency Map Overview


Infrastructure Issues & Changes

Infrastructure issues and changes related to your applications, services or endpoints are shown on the respective dashboard “Summary” tab, to help you find correlations with interesting application metric changes, e.g., increase of “Erroneous Call Rate” or “Latency”.

Infrastructure Issues & Changes

To learn more about some specific issues or changes, select a desired time range on the chart and click on the “View Events” context menu item, which brings you to the “Events” view.

Error Messages

Error messages are all messages collected from errors happening during code execution of a service. For example, if an exception is thrown during processing and it is not caught and handled by the application code, this call together with the error message will be listed on the “Error Messages” tab. An example would be an unhandled exception in a Servlet’s doGet method that causes the request to be responded to with HTTP 500.

Log Messages

Log Messages are collected from instrumented logging libraries/frameworks (see for example the section “Logging” in the list of supported libraries). When a service writes a log message with severity WARN or higher via a logging library, the message will be displayed on the “Log Messages” tab. Additionally, captured log messages will also be shown in the trace details in the context of their trace. If a log message was written with severity ERROR or higher, it will be marked as an error. Note that log messages with a severity lower than WARN are not tracked.


From the Application Perspective view or Services dashboard it is possible to navigate to the corresponding infrastructure component shown on the Infrastructure Monitoring view.

The “Unmonitored” Infrastructure Component

The list of infrastructure components for an application or service might sometimes include the “Unmonitored” host / container / process.

The “Unmonitored” component indicates that, for some or all calls to this service, we were unable to link the call to a specific infrastructure component. As services are “logical” entities, we can often link them to infrastructure components via the monitored process. This does not hold, for example, for third-party web services, which we don’t monitor but for which we still create services and endpoints based on host name + path. Since no host or process is known, these services result in the “Unmonitored” infrastructure component being shown.