Honeycomb Streaming Output and Integration

George Alpizar

Overview

The Honeycomb output streams analytics and insights to a Honeycomb environment.

Before you begin

Before you can create an output, you must have a Honeycomb API key available.


Review Parameters

Review the following parameters that you can configure in the Edge Delta App:

YAML Parameter | Description
name

Enter a descriptive name for the output or integration.

For outputs, this name will be used to map this destination to a workflow.

This parameter is required. 

Review the following example:

name: honeycomb
integration_name

This parameter only appears when you create an individual output.

This parameter refers to the organization-level integration created in the Integrations page. If you enter this name, then the rest of the fields will be automatically populated.

If you need to add multiple instances of the same integration into the config, then you can add a custom name to each instance via the name field. In this situation, the name should be used to refer to the specific instance of the destination in the workflows.

This parameter is optional. 

Review the following example:

integration_name: orgs-honeycomb
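If you add multiple instances of the same organization-level integration, the name field distinguishes each instance, as described above. The following sketch is illustrative; the instance names are hypothetical:

```yaml
# Hypothetical sketch: two outputs backed by the same organization-level
# integration, distinguished by their name fields. Instance names below
# are illustrative only.
- name: honeycomb-prod
  integration_name: orgs-honeycomb
- name: honeycomb-staging
  integration_name: orgs-honeycomb
# Workflows then refer to honeycomb-prod or honeycomb-staging.
```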
type

Enter honeycomb.

This parameter is required. 

Review the following example:

type: honeycomb
host

Enter the hostname of the Honeycomb endpoint that will receive the data. Use this parameter if you have a custom Honeycomb installation.

The default value is api.honeycomb.io.

This parameter is optional. 

Review the following example:

host: api.honeycomb.io
api_key

Enter the Honeycomb API key.

This parameter is required. 

Review the following example:

api_key: '{{ Env "TEST_HC_APIKEY" }}'
dataset_name

Enter the name of the Honeycomb dataset that will receive the data.

In Honeycomb, datasets are high-level buckets for your data.

This parameter is required. 

Review the following example:

dataset_name: "<ADD-DATASET_NAME>"
unpacking

Enter true or false to unpack and flatten nested JSON objects into unique columns.

If you do not want to flatten nested fields, then enter false.

This parameter is optional. 

Review the following example:

unpacking: false
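Conceptually, unpacking turns each nested JSON field into its own top-level, dot-delimited column. The following Python sketch illustrates the idea only; it is not the agent's implementation, and the exact column naming may differ:

```python
def flatten(obj, prefix=""):
    """Recursively flatten nested dicts into dot-delimited keys."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the key prefix.
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

# Illustrative event with nested fields.
event = {"http": {"status": 200, "request": {"method": "GET"}}, "host": "web-1"}
print(flatten(event))
# → {'http.status': 200, 'http.request.method': 'GET', 'host': 'web-1'}
```

With unpacking enabled, nested fields become unique columns such as http.status and http.request.method; with it disabled, the nested object is kept as a single field.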
custom_tags

This parameter defines key-value pairs that are streamed with every request.

This parameter is optional. 

Review the following example:

custom_tags:
  "app": "starbucks_pos_transaction_manager"
  "region": "us-west-2"
  "File Path": "{{.FileGlobPath}}"
  "K8s PodName": "{{.K8sPodName}}"
  "K8s Namespace": "{{.K8sNamespace}}"
  "K8s ControllerKind": "{{.K8sControllerKind}}"
  "K8s ContainerName": "{{.K8sContainerName}}"
  "K8s ContainerImage": "{{.K8sContainerImage}}"
  "K8s ControllerLogicalName": "{{.K8sControllerLogicalName}}"
  "ECSCluster": "{{.ECSCluster}}"
  "ECSContainerName": "{{.ECSContainerName}}"
  "ECSTaskVersion": "{{.ECSTaskVersion}}"
  "ECSTaskFamily": "{{.ECSTaskFamily}}"
  "DockerContainerName": "{{.DockerContainerName}}"
  "ConfigID": "{{.ConfigID}}"
  "Host": "{{.Host}}"
  "Source": "{{.Source}}"
  "SourceType": "{{.SourceType}}"
  "Tag": "{{.Tag}}"
features

This parameter defines which data types to stream to the destination.

If you do not provide a value, then all data types will be streamed.

To learn more, review the Review Feature Types section in Stream Outputs and Integrations Overview.

This parameter is optional. 

Review the following example:

features: metric,log,edac
buffer_ttl

Enter a length of time to retry failed streaming data.

After this length of time is reached, the failed streaming data will no longer be retried.

This parameter is optional. 

Review the following example: 

buffer_ttl: 2h
buffer_path

Enter a folder path to temporarily store failed streaming data.

The failed streaming data will be retried until the data reaches its destinations or until the Buffer TTL value is reached.

If you enter a path that does not exist, then the agent will create directories, as needed.

This parameter is optional. 

Review the following example: 

buffer_path: /var/log/edgedelta/pushbuffer/
buffer_max_bytesize

Enter the maximum size of failed streaming data that you want to retry.

If the failed streaming data is larger than this size, then the failed streaming data will not be retried.

This parameter is optional. 

Review the following example: 

buffer_max_bytesize: 100MB
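Taken together, the three buffer parameters control how failed streaming data is stored and retried. The following sketch combines them; the values are illustrative:

```yaml
# Illustrative combination of the buffer settings described above.
buffer_ttl: 2h                                # retry failed data for up to 2 hours
buffer_path: /var/log/edgedelta/pushbuffer/   # temporary storage for failed data
buffer_max_bytesize: 100MB                    # do not retry payloads larger than this
```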

Review Sample Configuration

The following sample configuration displays an output without the name of the organization-level integration:

    - name: honeycomb
      type: honeycomb
      host: "<ADD HONEYCOMB HOST>" 
      api_key: '{{ Env "TEST_HC_APIKEY" }}'
      dataset_name: "<ADD-DATASET_NAME>"
      unpacking: false
      features: metric,log,edac
      custom_tags:
        "app": "starbucks_pos_transaction_manager"
        "region": "us-west-2"
        "File Path": "{{.FileGlobPath}}"
        "K8s PodName": "{{.K8sPodName}}"
        "K8s Namespace": "{{.K8sNamespace}}"
        "K8s ControllerKind": "{{.K8sControllerKind}}"
        "K8s ContainerName": "{{.K8sContainerName}}"
        "K8s ContainerImage": "{{.K8sContainerImage}}"
        "K8s ControllerLogicalName": "{{.K8sControllerLogicalName}}"
        "ECSCluster": "{{.ECSCluster}}"
        "ECSContainerName": "{{.ECSContainerName}}"
        "ECSTaskVersion": "{{.ECSTaskVersion}}"
        "ECSTaskFamily": "{{.ECSTaskFamily}}"
        "DockerContainerName": "{{.DockerContainerName}}"
        "ConfigID": "{{.ConfigID}}"
        "Host": "{{.Host}}"
        "Source": "{{.Source}}"
        "SourceType": "{{.SourceType}}"
        "Tag": "{{.Tag}}"
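For comparison, an output that references an organization-level integration can be much shorter, since the remaining fields are populated automatically from the integration. The following sketch reuses the integration name from the earlier example:

```yaml
# Illustrative output that references an organization-level integration.
# The remaining fields are populated from the integration's settings.
    - name: honeycomb
      integration_name: orgs-honeycomb
```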
