Install Kubernetes Agent (via kubectl)


You can use this document to learn how to install the Edge Delta agent as a DaemonSet on your Kubernetes cluster.

The agent is a daemon that analyzes logs and container metrics from a Kubernetes cluster, and then streams analytics to configured streaming destinations.

Edge Delta uses a Kubernetes-recommended, node-level logging architecture, also known as a DaemonSet architecture. The DaemonSet runs the agent pod on each node. Each agent pod analyzes logs from all other pods running on the same node.
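After installation, you can confirm this one-agent-per-node layout by listing the agent pods with their node assignments. This sketch assumes the default edgedelta namespace used later in this document:

```shell
# List agent pods along with the node each one runs on;
# expect one agent pod per schedulable node.
kubectl get pods -n edgedelta -o wide

# Compare the agent pod count against the node count.
kubectl get pods -n edgedelta --no-headers | wc -l
kubectl get nodes --no-headers | wc -l
```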


Before you deploy the agent, we recommend that you review the Review Agent Requirements document. 


If you want to install the agent on your Kubernetes cluster via Helm, then see Install the Agent for Kubernetes with Helm.

Step 1: Create a Configuration and Install the Agent

  1. In the Edge Delta App, on the left-side navigation, click Data Pipeline, and then click Agent Settings.
  2. Click Create Configuration.
  3. Select Kubernetes.
  4. Click Save.
  5. In the table, locate the newly created configuration, click the corresponding vertical green ellipsis, and then click Deploy Instructions.
  6. Click Kubernetes.
  7. In the window that appears, follow the on-screen instructions.
    • This window also displays your API key.
    • For advanced users, there are additional installation steps that you can consider.

Step 2: Review Advanced Installation Instructions

For advanced users, review the following options to customize the installation process:

  1. Review the available agent manifests. Each manifest has a corresponding URL that you use in the kubectl apply command below:
    • Default: The default agent DaemonSet.
    • Persisting cursor: The agent DaemonSet with mounted host volumes to track file cursor positions persistently.
    • Metric exporter: The agent DaemonSet that exposes port 6062 (metrics endpoint) in Prometheus format.
    • On premise: The agent DaemonSet for locally managed or offline deployments.
  2. Based on the desired manifest, create the DaemonSet with the corresponding manifest URL:

kubectl apply -f <MANIFEST URL>
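For example, with a placeholder URL (substitute the manifest URL from the table above), the command and a quick verification might look like:

```shell
# Hypothetical URL shown for illustration; use the manifest URL you chose.
kubectl apply -f https://edgedelta.example.com/edgedelta-agent.yml

# Verify the DaemonSet and its pods were created in the edgedelta namespace.
kubectl get daemonset,pods -n edgedelta
```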


For additional environment variables, you can download and edit the manifest before you apply it.

To learn more, review the Review Environment Variables for Agent Installation document, specifically the Examples - Kubernetes (yml configuration) section.
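As an illustration, environment variables are set on the agent container in the downloaded manifest. The sketch below assumes the variable name ED_API_KEY and a Secret named edgedelta-api-key, both of which may differ in your deployment:

```yaml
# Hypothetical fragment of the DaemonSet container spec; names are illustrative.
env:
  - name: ED_API_KEY
    valueFrom:
      secretKeyRef:
        name: edgedelta-api-key   # assumed Secret name
        key: api-key              # assumed key within the Secret
```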


For SELinux and Openshift users, see Special Considerations for SELinux and Openshift Users.


For custom Kubernetes deployments, you may need to update the mountPath to match the actual path of the container log folder.

For some Kubernetes distributions, /docker/containers is used instead of the standard /var/lib/docker/containers. In these cases, you must update the mountPath in the manifest file (edgedelta-agent.yml) to match the actual path of the container log folder.

  • To help you identify this configuration, the agent manifest will contain a brief comment next to mountPath.
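For reference, the relevant manifest fragment typically pairs a hostPath volume with a matching volumeMount. This sketch assumes the standard Docker path and uses illustrative names:

```yaml
# Hypothetical fragment of the DaemonSet pod spec; update both paths
# if your distribution uses /docker/containers instead.
volumes:
  - name: containerlogs
    hostPath:
      path: /var/lib/docker/containers
containers:
  - name: edgedelta-agent          # illustrative container name
    volumeMounts:
      - name: containerlogs
        mountPath: /var/lib/docker/containers
        readOnly: true
```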


Run the Agent on Specific Nodes

To run the agent on specific nodes in your cluster, add a nodeSelector or nodeAffinity section to your pod configuration. For example, if the desired nodes are labeled logging=edgedelta, then adding the following nodeSelector restricts the agent pods to nodes that have that label:

  nodeSelector:
    logging: edgedelta
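To apply the logging=edgedelta label from the example above to a node, you could run the following (the node name is a placeholder):

```shell
# Label a node so the agent DaemonSet schedules onto it.
kubectl label nodes <node-name> logging=edgedelta

# Confirm the label was applied.
kubectl get nodes -l logging=edgedelta
```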


To learn more about node selectors and affinity, please review this article from Kubernetes.

Special Considerations for SELinux and Openshift Users

If you are running an SELinux-enforced Kubernetes cluster, then you need to add the following securityContext configuration to the DaemonSet spec in the edgedelta-agent.yml manifest. This update runs agent pods in privileged mode to allow the collection of logs from other pods.

     securityContext:
       runAsUser: 0
       privileged: true
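For placement, the securityContext belongs on the agent container inside the DaemonSet pod template. A sketch with an illustrative container name:

```yaml
# Hypothetical fragment of edgedelta-agent.yml; surrounding fields abbreviated.
spec:
  template:
    spec:
      containers:
        - name: edgedelta-agent      # illustrative name; match your manifest
          securityContext:
            runAsUser: 0             # run as root
            privileged: true         # required to read other pods' logs under SELinux
```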

In an OpenShift cluster, you need to also run the following commands to allow agent pods to run in privileged mode:

oc adm policy add-scc-to-user privileged system:serviceaccount:edgedelta:edgedelta
oc patch namespace edgedelta -p \
'{"metadata": {"annotations": {"": ""}}}'

Output to Cluster Services in Other Namespaces

Edge Delta pods run in a dedicated edgedelta namespace.

If you want to configure an output destination within your Kubernetes cluster, then you must set a resolvable service endpoint in your agent configuration.

For example, if you have an elasticsearch-master Elasticsearch service in the elasticsearch namespace with port 9200 in your cluster-domain.example cluster, then you need to specify the Elasticsearch output address in the agent configuration using the form <service>.<namespace>.svc.<cluster-domain>:<port>:

       - http://elasticsearch-master.elasticsearch.svc.cluster-domain.example:9200
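To check that such an endpoint is resolvable before pointing the agent at it, one option is a throwaway pod. The busybox image and the example hostname below are assumptions:

```shell
# Resolve the service DNS name from inside the cluster; the pod is removed afterward.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup elasticsearch-master.elasticsearch.svc.cluster-domain.example
```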

To learn more, please review this article from Kubernetes.

Review Sample Configuration

The following sample configuration displays a default configuration that can be deployed. 

You can comment (or uncomment) parameters as needed, as well as populate appropriate values to create your desired configuration.

#Configuration File Version (currently v1 and v2 supported)
version: v2

#Global settings to apply to the agent
agent_settings:
  tag: kubernetes_onboarding
  log:
    level: info
  anomaly_capture_size: 1000
  anomaly_confidence_period: 30m

#Inputs define which datasets to monitor (files, containers, syslog ports, windows events, etc.)
inputs:
  #Kubernetes input type allows you to specify pods and namespaces that you want to monitor, as well as exclude subsets
  kubernetes:
    - labels: "kubernetes_logs"
      include:
        - "namespace=.*"
      exclude:
        - "namespace=kube-system"
        - "namespace=kube-public"
        - "namespace=kube-node-lease"
        - "pod=edgedelta"
      auto_detect_line_pattern: true
#  files:
#    - labels: "system_logs, auth"
#      path: "/var/log/auth.log"
#  ports:
#    - labels: "syslog_ports"
#      protocol: tcp
#      port: 1514

#Outputs define destinations to send both streaming data and trigger data (alerts/automation/ticketing)
outputs:
  #Streams define destinations to send "streaming data" such as statistics, anomaly captures, etc. (Splunk, Sumo Logic, New Relic, Datadog, InfluxDB, etc.)
  streams:
    ##Sumo Logic Example
    #- name: sumo-logic-integration
    #  type: sumologic
    #  endpoint: "<ADD SUMO LOGIC HTTPS ENDPOINT>"

    #Splunk Example
    #- name: splunk-integration
    #  type: splunk
    #  endpoint: "<ADD SPLUNK HEC ENDPOINT>"
    #  token: "<ADD SPLUNK TOKEN>"

    ##Datadog Example
    #- name: datadog-integration
    #  type: datadog
    #  api_key: "<ADD DATADOG API KEY>"

    ##New Relic Example
    #- name: new-relic-integration
    #  type: newrelic
    #  api_key: "<ADD NEW RELIC API KEY>"

    ##Influxdb Example
    #- name: influxdb-integration
    #  type: influxdb
    #  endpoint: "<ADD INFLUXDB ENDPOINT>"
    #  port: <ADD PORT>
    #  features: all
    #  tls:
    #    disable_verify: true
    #  token: "<ADD JWT TOKEN>"
    #  db: "<ADD INFLUX DATABASE>"

  ##Triggers define destinations for alerts/automation (Slack, PagerDuty, ServiceNow, etc.)
  triggers:
    ##Slack Example
    #- name: slack-integration
    #  type: slack

#Processors define analytics and statistics to apply to specific datasets
processors:
  cluster:
    name: clustering
    num_of_clusters: 50          # keep track of only top 50 and bottom 50 clusters
    samples_per_cluster: 2       # keep last 2 messages of each cluster
    reporting_frequency: 30s     # report cluster samples every 30 seconds

  #Regexes define specific keywords and patterns for matching, aggregation, statistics, etc.
  regexes:
    - name: "error_level"
      pattern: "ERROR|error|Error|Err|ERR"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "exception_check"
      pattern: "Exception|exception|EXCEPTION"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "fail_level"
      pattern: "FAIL|Fail|fail"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "info_level"
      pattern: "INFO|info|Info"

    - name: "warn_level"
      pattern: "WARN|warn|Warn"

    - name: "debug_level"
      pattern: "DEBUG|debug|Debug"

    - name: "success_check"
      pattern: "Success|SUCCESS|success|Succeeded|succeeded|SUCCEEDED"

#Workflows define the mapping between input sources, which processors to apply, and which destinations to send the streams/triggers to
workflows:
  kubernetes_workflow:
    input_labels:
      - kubernetes_logs
    processors:
      - clustering
      - error_level
      - info_level
      - warn_level
      - debug_level
      - fail_level
      - exception_check
      - success_check
    destinations:
#      - streaming_destination_a    #Replace with configured streaming destination
#      - streaming_destination_b    #Replace with configured streaming destination
#      - trigger_destination_a      #Replace with configured trigger destination
#      - trigger_destination_b      #Replace with configured trigger destination


View Your Agent Version 

  1. In the Edge Delta App, on the left-side navigation, click Data Pipeline, and then click Pipeline Status.
  2. Navigate to the Active Agents table.
  3. Review the Agent Version column for your corresponding agent. 

Upgrade the Agent

If your agent's image is set to agent:latest, then new agent instances will run on the latest agent version.

  • The original agent instance will run on the agent version that was available during agent deployment.

If your agent's image is set to a specified version, then new agent instances will run on the specified version, not the latest agent version. 

  • To upgrade all agents to the latest version, replace the previously configured version number with the latest agent version, such as agent:v0.1.15.
  • The latest version of the agent is listed in the Edge Delta Releases page.
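As a sketch, assuming the DaemonSet and its container are both named edgedelta-agent in the edgedelta namespace, and that the image repository below is illustrative (check your manifest for the real names), an in-place upgrade could look like:

```shell
# Point the DaemonSet at the new image tag; pods roll automatically.
# DaemonSet, container, and image names are assumptions; match your manifest.
kubectl set image daemonset/edgedelta-agent edgedelta-agent=edgedelta/agent:v0.1.15 -n edgedelta

# Watch the rollout until every node runs the new version.
kubectl rollout status daemonset/edgedelta-agent -n edgedelta
```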

Uninstall Edge Delta Agent

To remove all agent-related resources, run the following command with the same manifest URL that you used during installation:

kubectl delete -f <MANIFEST URL>
