Configure a Data Destination

How to configure a Data Destination - including SFTP.

What is a Data Destination?

A data destination allows users to specify an externally hosted location to output data from the One Model application. A data destination can include several files, and can output to:

  • SFTP

  • Amazon (S3 and Redshift)

  • Azure (Blob Storage and Data Lake Storage Gen 2)

  • OneAI - used exclusively for the OneAI platform, and cannot be accessed externally

The files that are put into a Data Destination can be created from:

  • A List query, either from Explore or a Storyboard

  • Pipeline Script

Configure an SFTP Data Destination

To configure an SFTP Data Destination in One Model, navigate to Data -> Destinations, select Add Data Destination, then choose SFTP.

From there, configure the required settings for your data destination:
Display Name - The name of the Destination as it will appear in One Model.

Bundle Compression - Select whether the files should be compressed before they are sent. Currently, only the Zip format is supported.

Replace Existing Files - Whether files with the same name that already exist on the server should be deleted when the Data Destination runs, or should cause the run to fail. Files sent from a data destination include a timestamp in the name, so name collisions shouldn't normally occur.

Upload File Path - The path on the SFTP server that the files will be uploaded to. The SFTP user will need write access to this path. This can be a value like "/" for the base directory configured for the SFTP user, or "/SubDirectoryName/" for a sub-directory the user has access to.

Validate Host Connection - If this is checked, the Host Public Key is used to verify the identity of the SFTP server before files are sent (should not be checked if using a private key).

Host Public Key - The key used to verify the SFTP server when Validate Host Connection is enabled.

Host URL - This is the address of the SFTP server.

Port - The port number used for SFTP. The standard port is 22, but firewall rules may prohibit its use.

Authentication Method - Authentication on the SFTP server is supported via either a Password or a generated Private Key.

Username - The account on the SFTP server that One Model will use to gain access.

Password - The Password associated with the SFTP username. Only required for the Password Authentication Method.

Private Key - The Private Key associated with the SFTP username. Only required for the Private Key Authentication Method.
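To see how a few of these settings fit together, the sketch below composes a timestamped remote path and a Zip bundle the way a destination run might. This is an illustration only: the `{name}_{timestamp}.zip` naming scheme and the helper functions are assumptions for the example, not One Model's documented behavior.

```python
import io
import posixpath
import zipfile
from datetime import datetime, timezone


def timestamped_name(base_name: str, ext: str, when: datetime) -> str:
    # Hypothetical naming scheme: sent files include a timestamp,
    # which is why name collisions on the server are unlikely.
    return f"{base_name}_{when:%Y%m%d%H%M%S}.{ext}"


def build_bundle(upload_path: str, files: dict[str, bytes],
                 when: datetime) -> tuple[str, bytes]:
    """Zip the given files (Bundle Compression) and return
    the remote path plus the Zip bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    bundle = timestamped_name("bundle", "zip", when)
    # Upload File Path acts as the remote directory,
    # e.g. "/" or "/SubDirectoryName/".
    return posixpath.join(upload_path, bundle), buf.getvalue()


when = datetime(2024, 1, 2, 3, 4, 5, tzinfo=timezone.utc)
remote, data = build_bundle("/exports/", {"headcount.csv": b"id,dept\n1,HR\n"}, when)
print(remote)  # /exports/bundle_20240102030405.zip
```

In a real run the Zip bytes would then be written to `remote` over an SFTP connection using the Host URL, Port, Username, and Password or Private Key configured above.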


Configure a One AI Data Destination

Details for how to configure a One AI Data Destination, and how to use the data within it can be found in our One AI - Data Destination Files help document.

Adding files to a Data Destination

Once the Data Destination has been configured, files can be added to it from an Explore Query source, a Pipeline Processing Script, or Database Tables.


Adding a file from a Query Source will open Explore, where a List report can be configured.


It's also possible to add a new file to the Data Destination by using "Add to Data Destination" directly from within Explore, instead of adding the file on the Data Destinations page. 


As an alternative to Explore, it's also possible to use a Processing Script, which allows the destination to use our Pipeline Script language. This allows selection of data from any of the tables in the data model, and supports joins between tables and many other SQL-style transformations.
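Pipeline Script's own syntax is specific to One Model, but the kind of transformation it enables (selecting from model tables and joining them) can be illustrated with generic SQL. The table and column names below are made up for the example, and the actual Pipeline Script syntax differs.

```python
import sqlite3

# Two toy tables standing in for model tables; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER, dept_id INTEGER, name TEXT);
    CREATE TABLE department (id INTEGER, label TEXT);
    INSERT INTO employee VALUES (1, 10, 'Avery'), (2, 20, 'Blake');
    INSERT INTO department VALUES (10, 'HR'), (20, 'Finance');
""")

# A join plus a column transformation, of the sort a Processing
# Script file could select for a destination.
rows = conn.execute("""
    SELECT e.name, UPPER(d.label) AS dept
    FROM employee e
    JOIN department d ON d.id = e.dept_id
    ORDER BY e.name
""").fetchall()
print(rows)  # [('Avery', 'HR'), ('Blake', 'FINANCE')]
```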

Finally, it's also possible to add tables straight from the database using the Database Tables Destination File type. This allows for selection of both Data Source tables, as well as tables that are included in the Model (these are usually in the "one" schema).

This is the simplest way to bulk export a large number of tables that you may want to use externally, such as in Tableau or Power BI.

Running a Data Destination

Once files have been added to a Data Destination, it can be run. This can be done manually using the Run button to immediately kick off the process.


It's also possible to send files on a regular basis automatically using the Schedule button.


This allows the files to be set up on a range of different schedules, at whatever times are required.
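The scheduler's internals aren't exposed, but the core idea of a recurring send can be sketched as computing the next run time after "now". The daily-at-a-fixed-time schedule shape below is an assumption chosen for illustration.

```python
from datetime import datetime, time, timedelta


def next_daily_run(now: datetime, at: time) -> datetime:
    """Next occurrence of a daily schedule at the given time of day."""
    candidate = datetime.combine(now.date(), at)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot has already passed
    return candidate


now = datetime(2024, 1, 2, 9, 30)
print(next_daily_run(now, time(6, 0)))   # 2024-01-03 06:00:00
print(next_daily_run(now, time(18, 0)))  # 2024-01-02 18:00:00
```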


View Data Destination History

It's also possible to view the history of when a Data Destination has run. To do this, on the Data Destinations page, go to the History button.


This will take you to the History page, which shows the list of times that the Destination has run.


From there, it's possible to see further information about each run. This can be used to trace errors and to see a list of the files that were sent. Users with the CanAccessRawData permission can also download a copy of the files for up to thirty days after they were sent.

