You’ll be presented with an editor that looks like this. It has a set of fields to fill out that define the file you are loading and how it is structured:

For our example, we are going to write a load definition for an HRP1000 file. First we will specify the fileNamePattern so that the data source expects full loads of the file. If we wanted to load this data source incrementally instead, we would also need to specify incrementalFileNamePattern, plus isKey and keyOrder on every column that is part of the incremental key. For now we are going to ignore the incremental settings and specify:

fileNamePattern: hrp1000%.dat

The % wildcard allows other text to appear in the name after hrp1000 but before the extension. We set the pattern up this way because of how the files will be loaded to the site: when we use the Upload button on the data source, it appends a timestamp to each file name.
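If we did want to load this file incrementally, a sketch of the extra settings might look like the following. The patterns and key columns here are hypothetical, shown only to illustrate the incrementalFileNamePattern, isKey, and keyOrder fields mentioned above:

```yaml
fileNamePattern: hrp1000_full%.dat
incrementalFileNamePattern: hrp1000_delta%.dat
columns:
- name: objid
  type: String
  size: 20
  isKey: true
  keyOrder: 1
- name: begda
  type: Timestamp
  dateFormat:
  - yyyyMMdd
  isKey: true
  keyOrder: 2
```

Every column that forms part of the incremental key carries isKey: true and a keyOrder giving its position in the key.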

Next we want to set up the remaining structural items for this data source:

schema: sap
table: hrp1000
delimiter: '|'
encoding: Utf8
hasHeaderRows: True
headerRowsCount: 1
quoteCharacter: '"'

The schema is important: it defines where the files will go in the database. We usually recommend keeping files in separate schemas based on which system they come from, as this makes them easier to track down later. We can process files that contain new lines within a row, as long as the field is wrapped in the appropriate quotes in standard CSV fashion.
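For example, a row whose text field spans two lines can still be loaded, provided the field is quoted. This sample data is hypothetical, assuming the '|' delimiter and '"' quote character configured above:

```
00000123|20200101|99991231|"A description that
continues on a second line"
```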

Next we need to set up the columns that the file is expected to contain.

For string fields we need to set these as a minimum:

- name: objid
  type: String
  size: 20

For integer fields we need to set these as a minimum:

- name: divgv
  type: Integer

For numeric fields we need to set these as a minimum:

- name: divgv
  type: Decimal
  precision: 18
  scale: 4

For date or timestamp fields we need to set these as a minimum:

- name: begda
  type: Timestamp
  dateFormat:
  - yyyyMMdd
  - yyyy-MM-dd

For details on setting the correct date format or timestamp format, please see the help article on it here.
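As the begda example above shows, dateFormat accepts a list of patterns, so a column receiving values in more than one layout can list each of them. A sketch, assuming a hypothetical column named aedtm and the yyyyMMdd-style format tokens used above (the exact tokens are covered in the linked article):

```yaml
- name: aedtm
  type: Timestamp
  dateFormat:
  - yyyyMMdd
  - yyyy-MM-dd HH:mm:ss
```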

Once you've finished setting up all of the columns, the data source is ready to save. If something isn't configured correctly, you'll be told when you try to save. After saving, if you reload the script later it will show a lot of extra configuration options that are empty; this is normal. A finished script for the hrp1000 table, saved and loaded later, would look like this:

fileNamePattern: hrp1000%.dat
incrementalFileNamePattern:
schema: sap
table: hrp1000
delimiter: '|'
encoding: Utf8
fileNameColumn:
fileNameColumnIsKey: false
hasHeaderRows: True
headerRowsCount: 1
quoteCharacter: '"'
columns:
- name: objid
  type: String
  dateFormat: []
  decimalSeparator: .
  defaultValue:
  nullEquivalent:
  groupSeparator:
  isKey: false
  keyOrder:
  size: 20
  precision:
  scale:
- name: begda
  type: Timestamp
  dateFormat:
  - yyyyMMdd
  decimalSeparator: .
  defaultValue:
  nullEquivalent:
  groupSeparator:
  isKey: false
  keyOrder:
  size:
  precision:
  scale:
- name: endda
  type: Timestamp
  dateFormat:
  - yyyyMMdd
  decimalSeparator: .
  defaultValue:
  nullEquivalent:
  groupSeparator:
  isKey: false
  keyOrder:
  size:
  precision:
  scale:
- name: plvar
  type: String
  dateFormat: []
  decimalSeparator: .
  defaultValue:
  nullEquivalent:
  groupSeparator:
  isKey: false
  keyOrder:
  size: 5
  precision:
  scale:
- name: otype
  type: String
  dateFormat: []
  decimalSeparator: .
  defaultValue:
  nullEquivalent:
  groupSeparator:
  isKey: false
  keyOrder:
  size: 5
  precision:
  scale:
- name: langu
  type: String
  dateFormat: []
  decimalSeparator: .
  defaultValue:
  nullEquivalent:
  groupSeparator:
  isKey: false
  keyOrder:
  size: 5
  precision:
  scale:
- name: mc_short
  type: String
  dateFormat: []
  decimalSeparator: .
  defaultValue:
  nullEquivalent:
  groupSeparator:
  isKey: false
  keyOrder:
  size: 100
  precision:
  scale:
- name: mc_stext
  type: String
  dateFormat: []
  decimalSeparator: .
  defaultValue:
  nullEquivalent:
  groupSeparator:
  isKey: false
  keyOrder:
  size: 256
  precision:
  scale:
concepts:
- name:
  primaryKey:
  effectiveDate:
  endDate:
  condition:
linkedConcepts:
- fromConcept:
  fromColumn:
  toConcept:
  alias: 