Welcome to the latest One Model product release update. This article provides an overview of the product innovations and improvements to be delivered on 15 April 2026.
User Experience Improvements
SQL Explorer
- We are rolling out the ability to query information_schema.tables and information_schema.columns to retrieve table and column metadata. This capability will be released progressively over the next two months (ref 27033).
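For example, once this is available in your environment, you could list the columns of a table with a standard information_schema query (the table name here is illustrative):

```sql
-- List the columns of a hypothetical "employee" table, in column order
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_name = 'employee'
ORDER BY ordinal_position;
```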
AI Insights in Storyboard Tiles
- Following the successful launch of Insights within the KPI Tile, we have extended these capabilities to Basic Charts. Users can now add context, narrative, and automated analysis directly within their chart visualizations.
Configure insights for Basic Charts via the Describe tab in Tile Settings, where you can choose between Static Text, Auto Insights, or Custom Insights (Analyze) modes. This update maintains a consistent experience with the KPI Tile, offering the same styling controls and permission-based access across your Storyboards (ref 26363).
KPI Tile Change Indicator
- We have introduced a new 'No Change' visual indicator for both List Tables and KPI Tiles to more clearly identify stable metrics. Additionally, a new 'No Change Indicator Color' setting has been added to the Design tab, allowing users to customize the color of this icon to match their dashboard's aesthetic, similar to existing increase and decrease indicators (ref 25245, ref 27552).
Tile Style Settings
- Tile Style Settings are now always accessible in Tile Settings, even when the tile's query returns no data. This allows users to pre-configure styling before data is available or while the query is still being defined (ref 27043).
Data Ingestion
Improved Reliability for API Data Loads
“Stuck” Data Loads
- In some situations, data loads triggered by API integrations (such as Workday, SuccessFactors, and Greenhouse) could become stuck, preventing workforce data from being refreshed as expected. This could occur when a connection issue interrupted the handoff between the data extraction and loading steps, or when a high volume of files was being processed.
- We have made a series of improvements to how One Model manages API data loads to address these scenarios. The system now better handles interruptions during the loading process. These changes reduce the likelihood of API runs getting stuck and improve the overall reliability of keeping your workforce data current (ref 21465).
Improved Reliability and Performance for Large API Data Loads using Batch Loading
One Model has overhauled the ingestion pipeline for API data extractions, delivering significant improvements in reliability for high-volume data sources.
What changed
- Previously, loading data from API integrations (e.g. Workday, Greenhouse) relied on a single, long-running database operation to ingest all extracted files at once.
- For large datasets — potentially hundreds of gigabytes or more — this could run for over 12 hours. A timeout or failure at any point meant the entire load had to restart from scratch, wasting both time and compute.
- The ingestion pipeline has been redesigned so that each full/destructive load is now broken into a series of smaller, sequentially processed batches.
- Each batch is independently tracked and retriable, making the overall pipeline far more resilient.
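The batch-and-retry approach described above can be sketched as follows. This is a minimal illustration of the general pattern, not One Model's actual implementation; the function names, batch size, and retry count are all invented for the example:

```python
# Sketch of batched ingestion with per-batch retries.
# All names and parameters here are illustrative, not One Model internals.

def split_into_batches(files, batch_size):
    """Break the extracted file list into smaller batches."""
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]

def ingest_batches(files, load_batch, batch_size=3, max_retries=2):
    """Ingest each batch independently.

    A transient failure retries only the failing batch; batches that
    already loaded are kept, so the extraction never restarts from scratch.
    """
    results = []
    for batch in split_into_batches(files, batch_size):
        for attempt in range(max_retries + 1):
            try:
                results.append(load_batch(batch))
                break  # this batch succeeded; move to the next one
            except Exception:
                if attempt == max_retries:
                    raise  # batch exhausted its retries; earlier work survives
    return results
```

The contrast with the previous design is that a single failure no longer discards the whole load: only the current batch is retried, while completed batches remain ingested.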
What this means for you
- Resilient retries — if a batch fails, only that batch needs to be retried. There is no longer any need to retrigger the full extraction and start over from the beginning.
- Cleaner Data Loads view — individual data load files are no longer created for each file being ingested during an API extraction, reducing clutter in the Data Loads page.
- Cleaner data load lifecycle — a data load record is only created once extraction and ingestion are fully complete, meaning data is ready for processing when it appears (ref 22523).
Data Connector Improvements
- In the Universal Connector, we completed front-end support for configuring the same query parameter as both destructive and incremental. This finalizes the two-part implementation by removing all references to the legacy column and adding UI support for multiple parameters sharing the same name (ref 27508).
- Migrated the Taleo data source configuration UI from Knockout.js to React, bringing it in line with the modern front-end stack. The migration preserves full feature parity with the previous implementation, including all Taleo-specific fields, validation, and clone functionality (ref 27615).
Minor Improvements & Bugs Squashed
- We have removed "Beta" labels from KPI tiles in the Storyboard builder and Query Designer now that the feature is generally available (ref 27754).
- We resolved a minor navigation glitch in the Query Designer where the UI could occasionally switch from the Describe tab to the Discover tab if clicked before the page fully loaded (ref 27738).
- We have updated the color picker to use smart positioning, ensuring Hex and RGB input fields remain fully visible and accessible at all browser zoom levels. The picker now automatically detects available screen space and renders either above or below the selection swatch to prevent manual input fields from being cut off or hidden off-screen (ref 25065).
- Users will now see a success message after completing actions such as Storyboard edits and saves, Homepage edits and saves, and pinning a Storyboard tile to the homepage (ref 24102).
- We’ve resolved an issue where Site Validation continued to display 'node(s) from dimension' errors even after a query was corrected. The fix ensures that once a dimension node is deselected and no longer in use, it is fully removed from the widget definition rather than being stored with false flags. This prevents obsolete metadata from triggering persistent validation errors and eliminates the need to manually delete and re-add dimensions to clear the message (ref 23458).
- We resolved an issue where storyboards containing annotations saved with a null or invalid value caused an internal server error, preventing the storyboard from loading for any user. Storyboards with this condition now load correctly (ref 27806).
- For non-API data sources (File/Redshift/SFTP), we fixed a bug in the Data Consolidation Service where file count-based batching could cause a consolidated file to expire and be sent for processing before all files had finished consolidating into it, potentially resulting in incomplete data being processed downstream (ref 27639).
- In the Universal Connector we fixed a missing validation rule in Date Range variables that allowed the "From Date" to be set later than the "To Date" without any warning or error. The configuration will now display an inline validation message and prevent saving until the dates are corrected (ref 27298).
- Fixed an issue where an SFTP run would fail silently when no files were configured on the data source, giving the user no indication of what had gone wrong. The SFTP Runs page will now display a clear message indicating that no file inputs are configured (ref 27254).