Release Notes - 2019.08.15

Welcome to the 2019.08.15 product release. This article provides an overview of the product innovations and improvements delivered on 15 August 2019. You'll notice a minor change in the notes below: each item now includes a reference number that ties back to our One Model development system.
The article is structured as follows:

New Features & Innovations

  • Improvements to One Model’s API Connector for SuccessFactors OData API

  • Improvements to One Model’s API Connector for MaritzCX

  • One AI Innovations

Bugs, Performance & Platform Improvements

  • Improvements to Data Pipeline Processing

  • Improvements to Data Ingestion Framework

  • Performance & Stability

  • Bugs Fixed and Minor Improvements

New Features & Innovations

Improvements to One Model’s API Connector for SuccessFactors OData API

  • Users can now configure which Linked Tables are downloaded from the SuccessFactors API, instead of downloading all of them. Linked tables are used for additional related data, such as Currency Conversion or Labels, but many of these may not be required for Analytics purposes. (ref 2258)

  • Users can now select which non-mandatory columns are included in tables brought through by the SuccessFactors connector. Reducing the number of columns returned reduces the size of the dataset and therefore the time taken to download. (ref 2406)
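
Column selection like this typically maps onto the standard OData `$select` query option, which asks the server to return only the named fields. A minimal sketch of how such a request could be built (the endpoint and entity names below are hypothetical, for illustration only, not One Model's actual implementation):

```python
from urllib.parse import urlencode

# Hypothetical OData endpoint and entity name -- for illustration only.
base_url = "https://api.example.com/odata/v2/PerPerson"

# Requesting only the columns needed for analytics shrinks the payload
# and therefore the download time.
params = {
    "$select": "personIdExternal,dateOfBirth,countryOfBirth",
    "$format": "json",
}

query_url = f"{base_url}?{urlencode(params)}"
print(query_url)
```

The `$select` list is the OData analogue of choosing non-mandatory columns in the connector UI: anything not listed is simply never transferred.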

Improvements to One Model’s API Connector for MaritzCX

  • This update allows customers to select which of their available surveys to include in the One Model data load from the MaritzCX API. (ref 2151)

One AI Innovations

  • Users can now select specific transformations and processing on a per-column basis. For example, if you want to fill null values with the mean in one column and the mode in another, you can now do this via PCI settings in the augmentation GUI. (ref 2336)

  • CV folds are now available through the advanced configuration section on Augmentations. Users can choose between a train/test split strategy and a CV-fold strategy; if using CV folds, they can also configure how many folds One AI should use. (ref 498)

  • Per-column type coercion has been added to the YAML configuration, allowing users to force One AI to treat a column as a specific type. (ref 404)

  • Added support for several new types of regression algorithms: RandomForest, Ridge, SGDRegressor, SVR, GaussianProcessRegressor, and HuberRegressor. (ref 482)
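
The imputation, CV-fold, and regressor items above correspond to standard techniques from scikit-learn, whose estimator names the last item echoes. As an illustration of the underlying approach (a sketch on toy data, not One Model's internal implementation), per-column imputation can be combined with CV-fold evaluation of one of the newly supported regressors like this:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import Pipeline

# Toy data with missing values in both columns.
X = np.array([[1.0, 2.0], [np.nan, 2.0], [3.0, np.nan], [5.0, 4.0],
              [2.0, 2.0], [4.0, 4.0], [6.0, 2.0], [7.0, 4.0]])
y = np.array([1.0, 2.0, 3.0, 5.0, 2.0, 4.0, 6.0, 7.0])

# Per-column strategies: mean-fill for column 0, mode-fill for column 1,
# analogous to configuring different treatments per column.
impute = ColumnTransformer([
    ("mean_col", SimpleImputer(strategy="mean"), [0]),
    ("mode_col", SimpleImputer(strategy="most_frequent"), [1]),
])

model = Pipeline([("impute", impute), ("regress", HuberRegressor())])

# CV folds instead of a single train/test split: one score per fold.
scores = cross_val_score(model, X, y, cv=KFold(n_splits=4))
print(scores.shape)
```

Each fold trains on the remaining data and scores on the held-out slice, which gives a more stable picture of model quality than one fixed split.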

Bugs, Performance & Platform Improvements

Improvements to Data Pipeline Processing

  • A framework level change was implemented to prevent users from clearing and reloading cache for an instance while new data is being processed. (ref 2360)

  • Added support for CASE statement in the JOIN clause when joining queries together in Pipeline Script. (ref 2113)

  • Added pipeline processing validation for non-obvious type coercions, for example, comparing a number to a string without casting the number. (ref 2243)

  • Validation of the Pipeline Script will now correctly capture cases where a column referenced from an earlier transformation table doesn’t exist because it has been commented out. (ref 2519)

  • Pipeline processing - allow the usage of Alias columns in Group By. (ref 2460)

Improvements to Data Ingestion Framework

  • Allow users to start a new data load by cancelling any currently running data loads. This allows for faster turnaround when making changes to the Pipeline Script. (ref 1914)

  • Allow users to cancel data loads in the Data Loads screen that are in the Cache Warming, Custom SQL, or Model Processing steps. (ref 2400)

  • Allow users to Retry an API run from the task that has failed without reloading all tasks that have already succeeded. (ref 2405)

  • Implemented a platform improvement to the performance of data loading, especially for large batches of files. This can be configured in the Data Source page for Flat Files. (ref 2173)

  • Improved the ability to capture errors when loading data from the ADP API Connector. (ref 2297)

  • Added an improvement to the Greenhouse API Connector in One Model that allows configuration of the Candidate Activity Feed to support deselecting data that is not relevant to analytics (such as Candidate Notes and Candidate Emails). (ref 2471)

  • Added an option for the Greenhouse API Connector's scheduled runs that lets users override the last-run UTC date used for incremental extraction, replacing it with a rolling value so the extract starts from a date x months ago. (ref 2274)
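
A rolling "x months ago" start date of this kind can be derived from the current UTC time with plain standard-library date arithmetic. A sketch (the 3-month window and fixed anchor date are just examples, not the connector's actual configuration):

```python
import calendar
from datetime import datetime, timezone

def months_ago(anchor: datetime, months: int) -> datetime:
    """Roll `anchor` back by `months`, clamping the day to the target
    month's length (e.g. 31 March minus 1 month -> 28/29 February)."""
    total = anchor.year * 12 + (anchor.month - 1) - months
    year, month = divmod(total, 12)
    month += 1
    day = min(anchor.day, calendar.monthrange(year, month)[1])
    return anchor.replace(year=year, month=month, day=day)

# Instead of resuming from the last-run date, an incremental extract can
# start from a rolling window, e.g. 3 months before "now" (fixed here
# for illustration).
now = datetime(2019, 8, 15, tzinfo=timezone.utc)
print(months_ago(now, 3).isoformat())  # 2019-05-15T00:00:00+00:00
```

Clamping the day avoids invalid dates such as 31 February when rolling back from month-end anchors.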

Performance & Stability

  • Continued stability improvements with system caching.

  • Prevent users from building and running queries with references to multiple time dimensions. (ref 2494)

  • Optimizations to support loading thousands of files in a single data load. (ref 2509 & 2568)

Bugs Fixed and Minor Improvements

  • Added a column for user creation date to the user role export admin feature. (ref 2253)

  • Improved error messages for certain situations of incorrect joins for time filters. (ref 2181)

  • Fixed a bug where, in certain circumstances, a misconfigured data source would get stuck at the validation phase of an upload; escape-character delimiters in CSV files are now validated correctly. (ref 2338)

  • Added better error handling and a message when a user tries to add a file to a data destination when the file name already exists to avoid creating duplicates. (ref 2427)

  • Improved the stability of the Oracle API connector when processing large datasets. (ref 2409)

  • Added validation to the user interface for configuring an SFTP Data Destination: the file path must begin with a forward slash and may not contain backslashes. (ref 876)

  • Support removing a column from an incremental delimited data source. (ref 2525)

  • Fixed an issue where re-running a Data Destination from the Augmentation screen could prematurely show that the Data Destination had completed while it was still running. (ref 2549)
