
The SignalFx Developer Hub

Welcome to the SignalFx developer hub. You'll find comprehensive guides and documentation to help you start working with SignalFx as quickly as possible, as well as support if you get stuck. Let's jump right in!


SignalFx API Overview

Getting Started

SignalFx exposes a comprehensive API that allows you to automate any action that can be done using the SignalFx User Interface. Using this HTTP-based API, you can:

  • Manage your organization and users
  • Send monitoring data to SignalFx and manage the metadata associated with it
  • Create, update and delete charts, dashboards and dashboard groups
  • Create, update and delete detectors, subscriptions to alerts and notification messages
  • Process your data through the SignalFx streaming analytics engine
  • Stream both raw and processed data back from the SignalFx service

While sending data to SignalFx is probably the most common use of the API, customers also use the API to automate bulk actions that would take too long to duplicate manually through the user interface - such as a DevOps team setting up the standard dashboards and detectors for a new datacenter or team within an organization. The API also lets users stream charts in real time to non-SignalFx dashboards within the organization. You can also create detectors that identify anomalies or patterns in your data and receive alerts, so that you can programmatically take action in response to them.

You can access the API directly by issuing HTTP requests to the appropriate endpoint or via our client libraries. All API calls, with the exception of the call to establish a session, are authenticated. When making an API call, set the X-SF-Token header to a valid access token (see Authentication Overview) to authenticate the request.
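
For example, a simple authenticated call (here, a request to list the detectors in your organization - the endpoint is just an illustration; every API call carries the token the same way) looks like this:

# List the detectors in your organization (illustrates an authenticated call)
curl --request GET \
  --header "X-SF-TOKEN: YOUR_ACCESS_TOKEN" \
  https://api.signalfx.com/v2/detector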

SignalFx API calls can be made against 3 endpoints:

  • Calls that send current data must be directed to https://ingest.signalfx.com
  • Calls that send historical data must be directed to https://backfill.signalfx.com/v1/backfill
  • All other calls are made to https://api.signalfx.com

SignalFx Metric Data Model

Metric data are numerical measurements from your systems that are collected continuously to give an up-to-date view of the performance and health of your environment. These metrics fall into three basic categories:

  • Infrastructure metrics describe the machinery and resources that applications rely on, such as CPU, memory, disk usage and network utilization statistics.

  • Application metrics relate to third-party applications and services that run in the data center, such as databases, web servers and message queues. Some organizations consider these applications and services part of their infrastructure.

    This category also includes metrics from applications written by engineers within your organization, used to profile those applications - such as the amount of time an internal function call takes to return a value, or the number of times a method is called.

  • Business metrics describe high-level business activity, such as the number of sales made, the number of active users in the system, or the number of calls made to an external service.

The underlying data model for each of the types of metrics above is the same. The only difference is what the data represents and consequently the types of trends, anomalies and correlations that you may wish to look for in the data.

SignalFx provides out-of-the-box functionality to capture common infrastructure metrics, along with APIs and libraries to send custom application and business metrics. Our data model is designed to help you gain insight into the behavior of individual metrics and to correlate across them, so you can compute key performance indicators and get an overall picture of the health of your business.

Anatomy of a datapoint

Every datapoint comprises the following:

  • a metric name
  • the metric type - gauge, counter or cumulative_counter
  • an arbitrary number of dimensions (name/value pairs) that help identify the datapoint
  • the value of the metric
  • an optional timestamp at which the value was recorded (if no timestamp is provided, the time at which the SignalFx server receives the datapoint is used by default)

Here's an example of a metric datapoint request to SignalFx over HTTP, which you can run from your command line:

# Requires an org token as the access token
# Send a datapoint via HTTP POST request using cURL
curl --request POST \
  --header "Content-Type: application/json" \
   --header "X-SF-TOKEN: YOUR_ORG_ACCESS_TOKEN" \
  --data \
  '{ "gauge": [{
       "metric": "memory.free",
       "dimensions": { "host": "server1" },
       "value": 42
  }]}' \
  https://ingest.signalfx.com/v2/datapoint

The above request sends the amount of free memory on server1 - the data is sent as a gauge, the metric name is memory.free and the value is 42. The name of the server from which the data was collected is sent as a dimension called host. Additional information such as the IP address of the server, the fully qualified domain name of the server, or the environment (production, staging or test) could also have been sent as dimensions as part of this datapoint. The purpose of the dimensions is to provide enough information about the datapoint to enable classification, aggregation and filtering of the data. Dimensions are also used to help correlate different but related datapoints, which is critical in identifying patterns.

For more information, see Metric Data Overview.

The combination of the metric name and the unique dimensions (name and value) creates a unique metric time series for the data. For example, without the host dimension above, it would be impossible to distinguish the free memory of two servers. However, if each server sent its name as part of the host dimension, SignalFx would create a unique time series for the free memory of each server, e.g. server1, server2, etc.

Each time a metric is sent with a dimension value that has never been seen before, a new time series is automatically created.
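
To make this concrete, the following request (a sketch based on the datapoint example above, with a second hypothetical server) sends the same metric from two hosts; because the host dimension differs, SignalFx maintains a separate time series for each:

# Send the same metric for two hosts in one request;
# each distinct value of the "host" dimension yields its own time series
curl --request POST \
  --header "Content-Type: application/json" \
  --header "X-SF-TOKEN: YOUR_ORG_ACCESS_TOKEN" \
  --data \
  '{ "gauge": [
       { "metric": "memory.free", "dimensions": { "host": "server1" }, "value": 42 },
       { "metric": "memory.free", "dimensions": { "host": "server2" }, "value": 17 }
  ]}' \
  https://ingest.signalfx.com/v2/datapoint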

Dimensions are sent along with the data. The data can later be extended by adding custom properties, which can also be used to filter or group the data. Properties, however, do not maintain history in the way that dimensions do: when a dimension value changes, a new metric time series is created and the previous time series continues to live in the system, whereas a property retains only its most recent value. See Metrics Metadata Overview for more details.
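
As a sketch of what adding a property might look like, the request below attaches an environment property to the host dimension value server1; it assumes the /v2/dimension/{key}/{value} endpoint and a customProperties field, so consult Metrics Metadata Overview for the authoritative call:

# Sketch: attach a custom property to an existing dimension value
# (assumes the /v2/dimension/{key}/{value} endpoint and a customProperties field)
curl --request PUT \
  --header "Content-Type: application/json" \
  --header "X-SF-TOKEN: YOUR_ACCESS_TOKEN" \
  --data \
  '{
    "key": "host",
    "value": "server1",
    "customProperties": { "environment": "production" }
  }' \
  https://api.signalfx.com/v2/dimension/host/server1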

Automating tasks in SignalFx

Here is an example that shows how to create a detector using the API.

curl \
  --request POST \
  --header "X-SF-TOKEN: YOUR_ACCESS_TOKEN" \
  --header "Content-Type: application/json" \
  --data \
  '{
    "name": "CPU load too high",
    "programText": "detect(data(\'jvm.cpu.load\') > 50).publish(\'Load above 50%\')",
    "rules": [{ 
      "severity": "Critical", 
      "detectLabel": "Load above 50%", 
      "notifications": [{ "type": "Email", "email": "person@example.com" }]
    }]
  }' \
  https://api.signalfx.com/v2/detector

In the above example, a detector named “CPU load too high” is created. It monitors for the condition in which the metric jvm.cpu.load is greater than 50; when that condition is met, an alert of Critical severity is triggered and person@example.com is notified via email. Note that the call is directed to api.signalfx.com.

The above examples show how the SignalFx API is invoked. More details on each of the supported API calls are available in their respective sections. Details about the concepts behind SignalFx and the functionality available in the application can be found here.

Client Libraries Overview

SignalFx offers client libraries for many languages including Java, Python, Node.js, Ruby, and Go. These libraries allow you to interact with the SignalFx API directly from your application code or scripts. Information on each library is available in its GitHub repository; see links below.
