What is a Data Destination?

A Data Destination allows users to specify a location to output data from the One Model application. A Data Destination can include several files, and can output to:

  • SFTP
  • Amazon S3
  • One AI - used exclusively for the One AI platform, and cannot be accessed externally

The files that are put into a Data Destination can be created from:

  • A List query, either from Explore or a Dashboard
  • A Pipeline Script

Configure an SFTP Data Destination

To configure an SFTP Data Destination in One Model, navigate to Data -> Destinations. Then select Add Data Destination, and select SFTP.

From there, configure the required settings for your data destination:

Display Name - The name of the Destination as it will appear in One Model.

Bundle Compression - Select whether the files should be compressed before they are sent. The Zip format is currently supported.

Replace Existing Files - Whether files of the same name that already exist on the server should be deleted when the Data Destination runs, or should cause the run to fail. Because files sent from a Data Destination include a timestamp in their names, name collisions shouldn't normally occur.

Upload File Path - The path on the SFTP server that the files will be uploaded to. The SFTP user will need write access to this path. This can be a value like "/" for the base directory that was configured for the SFTP user, or "/SubDirectoryName/" to navigate to a sub-directory that the user has access to.

Validate Host Connection - If this is checked, One Model will validate the identity of the SFTP server against the Host Public Key below before sending files.

Host Public Key - The key used to verify the identity of the SFTP server when Validate Host Connection is enabled (the sketch after this list shows one way to retrieve it).

Host URL - The address of the SFTP server.

Port - The port number to be used for SFTP. The standard port is 22, but firewall rules may prohibit the use of this port.

Authentication Method - Authentication on the SFTP server via either a Password or a generated Private Key is supported.

Username - The account on the SFTP server that One Model will use to gain access.

Password - The Password associated with the SFTP username. Only required for the Password Authentication Method.

Private Key - The Private Key associated with the SFTP username. Only required for the Private Key Authentication Method.
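
Before saving the destination, it can be useful to confirm these connection details outside One Model. Below is a minimal sketch using Python's paramiko library that retrieves the server's host public key and then tests a login and a write to the upload path. It is illustrative only: the hostname sftp.example.com, the username onemodel, and the credential values are placeholders, not values supplied by One Model.

```python
# Minimal SFTP connectivity check, assuming the hypothetical server
# "sftp.example.com" and user "onemodel" -- substitute your own values.
import paramiko

HOST, PORT = "sftp.example.com", 22       # Host URL and Port fields
USERNAME = "onemodel"                     # Username field
UPLOAD_PATH = "/SubDirectoryName/"        # Upload File Path field

# 1. Retrieve the server's host public key -- this is the value to paste
#    into the Host Public Key field when Validate Host Connection is on.
transport = paramiko.Transport((HOST, PORT))
transport.start_client()                  # SSH handshake only, no login
host_key = transport.get_remote_server_key()
print(host_key.get_name(), host_key.get_base64())
transport.close()

# 2. Test the credentials and write access to the upload path.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # quick test only
client.connect(HOST, port=PORT, username=USERNAME, password="secret")
# ...or, for the Private Key authentication method:
# client.connect(HOST, port=PORT, username=USERNAME, key_filename="id_rsa")

sftp = client.open_sftp()
sftp.chdir(UPLOAD_PATH)                   # fails if the path is inaccessible
with sftp.file("connectivity_check.txt", "w") as f:
    f.write(b"ok")                        # confirms write permission
sftp.remove("connectivity_check.txt")     # clean up the test file
client.close()
```

If the script completes without errors, the same values should work in the Data Destination settings; step 1 can be skipped if Validate Host Connection is left unchecked.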

Configure an Amazon S3 Data Destination

To configure an Amazon S3 Data Destination in One Model, navigate to Data -> Destinations. Then select Add Data Destination, and select Amazon S3.

From there, configure the required settings for your data destination:

Display Name - The name of the Destination as it will appear in One Model.

AWS Region - The region that the S3 bucket is in.

Bucket Name - The name of the S3 bucket that the files will be uploaded into.

Upload File Path - The path in the S3 bucket that the files will be uploaded to. This can be a value like "/" for the root directory in the bucket, or "/SubDirectoryName/" to navigate to a sub-directory.

AWS Access ID - The Access Key ID for the AWS IAM (Identity and Access Management) user that has access to the bucket.

AWS Secret Key - The Secret Access Key for the Amazon IAM user.

Note: One Model does not support Two-Factor Authentication for Amazon S3.
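
As with SFTP, it can help to verify the IAM user's access before saving the destination. Below is a minimal sketch using the boto3 library; the bucket name example-bucket, the region, the object key, and the credential placeholders are all hypothetical. At minimum, the IAM user needs the s3:PutObject permission on the bucket or prefix being written to (the clean-up step in the sketch also uses s3:DeleteObject).

```python
# Minimal S3 write check, assuming a hypothetical bucket "example-bucket"
# in us-east-1 -- substitute your own values and credentials.
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-1",              # AWS Region field
    aws_access_key_id="AKIA...",          # AWS Access ID field
    aws_secret_access_key="...",          # AWS Secret Key field
)

# Writing a small object under the Upload File Path confirms that the
# IAM user can put files where One Model will send them.
key = "SubDirectoryName/connectivity_check.txt"
s3.put_object(Bucket="example-bucket", Key=key, Body=b"ok")
s3.delete_object(Bucket="example-bucket", Key=key)  # clean up the test file
```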

Configure a One AI Data Destination

Details for how to configure a One AI Data Destination, and how to use the data within it can be found in our One AI - Data Destination Files help document.

Adding files to a Data Destination

Once the Data Destination has been configured, files can be added to the destination from either an Explore Query source or a Pipeline Processing Script.

Adding a file from a Query Source will open Explore, where a List report can be configured.

It's also possible to add a new file to the Data Destination by using "Add to Data Destination" directly from within Explore, instead of adding the file on the Data Destinations page. 

As an alternative to Explore, it's also possible to use a Processing Script, which allows the destination to use our Pipeline Script language. This allows data to be selected from any of the tables in the data model, and supports joins between tables and many other SQL transformations.

Running a Data Destination

Once files have been added to a Data Destination, it can be run. This can be done manually using the Run button to immediately kick off the process.

It's also possible to send files on a regular basis automatically using the Schedule button.

This will then allow files to be set up on a range of different schedules, at whatever times are required.

View Data Destination History

It's also possible to view the history of when a Data Destination has run. To do this, on the Data Destinations page, select the History button.

This will take you to the History page, which shows the list of times that the Destination has run.

From there, it's possible to see further information about each run. This can be used to trace errors and see a list of the files that were sent.
