Azure Blob Storage Archiving Output and Integration


Azure Blob Storage

This output type sends logs to an Azure Blob Storage endpoint.

Note

In the Edge Delta App, similar parameters display when you create an integration or an individual output. As a result, this document applies to both outputs and integrations.

Before you begin

Before you can create an output, you must have an Azure storage account name and account key.


Review Sample Configuration

The following sample configuration displays an output that does not reference an organization-level integration by name:

    - name: my-blob
      type: blob
      account_name: '{{ Env "BLOB_ACCOUNT_NAME" }}'
      account_key: '{{ Env "BLOB_ACCOUNT_KEY" }}'
      container: testcontainer
      auto_create_container: false
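
The same output can also include the optional archiving parameters described later in this document. The following example is only a sketch; the container name, environment variable names, and parameter values are placeholders:

    - name: my-blob
      type: blob
      account_name: '{{ Env "BLOB_ACCOUNT_NAME" }}'
      account_key: '{{ Env "BLOB_ACCOUNT_KEY" }}'
      container: testcontainer
      auto_create_container: false
      compress: gzip
      encoding: json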

Review Parameters

Review the following parameters that you can configure in the Edge Delta App.


name

Required

Enter a descriptive name for the output or integration.

For outputs, this name will be used to map this destination to a workflow.

Review the following example: 

name: my-blob

integration_name

Optional

This parameter refers to the organization-level integration created in the Integrations page. 

If you need to add multiple instances of the same integration to the config, then you can give each instance a custom name via the name parameter. In this situation, use the name to refer to the specific instance of the destination in workflows (see the sketch after the example below).

Review the following example: 

integration_name: blob_acct
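
For example, the following sketch shows two instances of the same organization-level integration, each with a custom name that workflows can reference. The integration name blob_acct and the container names are illustrative, and the exact set of parameters inherited from the integration may vary with your configuration:

    - name: blob-us
      type: blob
      integration_name: blob_acct
      container: uscontainer
    - name: blob-eu
      type: blob
      integration_name: blob_acct
      container: eucontainer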

type

Required

Enter blob.

Review the following example: 

type: blob

account_name

Required

Enter the account name of the Azure storage account.

Review the following example: 

account_name: '{{ Env "BLOB_ACCOUNT_NAME" }}'

container

Required

Enter the name of the container to upload to.

Review the following example: 

container: testcontainer

auto_create_container

Optional

Enter true or false to automatically create the container on the service, with no metadata and no public access.

Review the following example: 

auto_create_container: false

compress

Optional

Enter a compression type for archiving purposes. 

You can enter gzip, zstd, snappy, or uncompressed.

Review the following example: 

compress: gzip

encoding

Optional

Enter an encoding type for archiving purposes. 

You can enter json or parquet.

Review the following example: 

encoding: parquet 

use_native_compression

Optional

Enter true or false to compress parquet-encoded data.

This option will not compress metadata. 

This option can be useful with big data cloud applications, such as AWS Athena and Google BigQuery.

Note

To use this parameter, you must set the encoding parameter to parquet.

Review the following example: 

use_native_compression: true
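
When combined with the encoding and compress parameters, a parquet-encoded archive that uses native compression might look like the following sketch (the compression type is illustrative):

      encoding: parquet
      compress: zstd
      use_native_compression: true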

buffer_ttl

Optional

Enter a length of time to retry failed streaming data.

After this length of time is reached, the failed streaming data will no longer be retried.

Review the following example: 

buffer_ttl: 2h

buffer_path

Optional

Enter a folder path to temporarily store failed streaming data.

The failed streaming data will be retried until the data reaches its destination or until the buffer_ttl value is reached.

If you enter a path that does not exist, then the agent will create directories, as needed.

Review the following example:

buffer_path: /var/log/edgedelta/pushbuffer/

buffer_max_bytesize

Optional

Enter the maximum size of failed streaming data that you want to retry.

If the failed streaming data is larger than this size, then the failed streaming data will not be retried.

Review the following example:

buffer_max_bytesize: 100MB
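
The three buffer parameters are typically used together. The following sketch uses illustrative values for the TTL, path, and maximum size:

      buffer_ttl: 2h
      buffer_path: /var/log/edgedelta/pushbuffer/
      buffer_max_bytesize: 100MB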
