Overview
This output type sends logs to a Zenko CloudServer endpoint.
Note
In the Edge Delta App, when you create an integration or an individual output, similar parameters are displayed. As a result, this document applies to both outputs and integrations.
Review Parameters
Review the following parameters that you can configure in the Edge Delta App:
| Visual Editor | YAML | Description |
| --- | --- | --- |
| Name | name | Enter a descriptive name for the output or integration. For outputs, this name will be used to map this destination to a workflow. This parameter is required. Review the following example: name: my-zenko-cloudserver |
| Integration | integration_name | This parameter only appears when you create an individual output. This parameter refers to the organization-level integration created in the Integrations page. If you enter this name, then the rest of the fields will be automatically populated. If you need to add multiple instances of the same integration into the config, then you can add a custom name to each instance via the name field. In this situation, the name should be used to refer to the specific instance of the destination in the workflows. This parameter is optional. Review the following example: integration_name: orgs-zenkcs |
| Not applicable | type | Enter zenko. This parameter is required. Review the following example: type: zenko |
| Endpoint | endpoint | Enter the Zenko endpoint. This parameter is required. Review the following example: endpoint: https://XXXXXXXXXX.sandbox.zenko.io |
| Bucket | bucket | Enter the Zenko bucket to send the archived logs to. This parameter is required. Review the following example: bucket: ed-test-bucket-zenko |
| Access Key | access_key | Enter the access key that has permissions to upload files to the specified bucket. This parameter is required. Review the following example: access_key: my_access_key_123 |
| Secret Key | secret_key | Enter the secret key associated with the specified access key. This parameter is required. Review the following example: secret_key: my_secret_key_123 |
| Compression | compress | Enter a compression type for archiving purposes. You can enter gzip, zstd, snappy, or uncompressed. This parameter is optional. Review the following example: compress: gzip. A sketch that combines the optional archive and buffer parameters follows this table. |
| Encoding | encoding | Enter an encoding type for archiving purposes. You can enter json or parquet. This parameter is optional. Review the following example: encoding: parquet |
| Use Native Compression | use_native_compression | Enter true or false to compress parquet-encoded data. This option will not compress metadata. This option can be useful with big data cloud applications, such as AWS Athena and Google BigQuery. This parameter is optional. Review the following example: use_native_compression: true |
| Buffer TTL | buffer_ttl | Enter a length of time to retry failed streaming data. After this length of time is reached, the failed streaming data will no longer be retried. This parameter is optional. Review the following example: buffer_ttl: 2h |
| Buffer Path | buffer_path | Enter a folder path to temporarily store failed streaming data. The failed streaming data will be retried until the data reaches its destination or until the Buffer TTL value is reached. If you enter a path that does not exist, then the agent will create directories, as needed. This parameter is optional. Review the following example: buffer_path: /var/log/edgedelta/pushbuffer/ |
| Buffer Max Size | buffer_max_bytesize | Enter the maximum size of failed streaming data that you want to retry. If the failed streaming data is larger than this size, then the failed streaming data will not be retried. This parameter is optional. Review the following example: buffer_max_bytesize: 100MB |
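The optional archive and buffer parameters described above can all be set on the same output. The following sketch combines them using the illustrative values from the table; the endpoint, bucket, and keys are placeholders to replace with your own:
- name: my-zenko-cloudserver
  type: zenko
  endpoint: https://XXXXXXXXXX.sandbox.zenko.io
  bucket: ed-test-bucket-zenko
  access_key: my_access_key_123
  secret_key: my_secret_key_123
  # optional archive settings
  compress: gzip
  encoding: parquet
  use_native_compression: true
  # optional retry buffer for failed streaming data
  buffer_ttl: 2h
  buffer_path: /var/log/edgedelta/pushbuffer/
  buffer_max_bytesize: 100MB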
Review Sample Configuration
The following sample configuration displays an output without the name of the organization-level integration:
- name: my-zenko-cloudserver
  type: zenko
  endpoint: https://XXXXXXXXXX.sandbox.zenko.io
  bucket: ed-test-bucket-zenko
  access_key: my_access_key_123
  secret_key: my_secret_key_123
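Alternatively, if an organization-level integration for Zenko already exists, an individual output can reference it so that the remaining fields are populated automatically. The following sketch assumes an integration named orgs-zenkcs, the illustrative name used in the table above:
- name: my-zenko-cloudserver
  type: zenko
  integration_name: orgs-zenkcs  # illustrative; use the name of your organization-level integration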