Creating Attrition Risk Model Metrics & Storyboards

Description: This guide helps you create the metrics and storyboard used to interpret a Voluntary Attrition (Flight Risk) classification model in One Model. Building these assets typically requires coordination with your customer success team and data engineer to add the necessary code to your processing script so the model output tables and dimensions exist for metric and storyboard creation.

In most cases, if you are deploying another classification model using the same general output shape, you can reuse existing tables and dimensions and may not need additional data engineering work.

Module Type: Functional

Level: Intermediate-Advanced

Audience: Model & storyboard creators

Prerequisites: Access to, and experience creating, metrics & storyboards in One Model; completion of the "One AI Recipes", "Voluntary Attrition Risk Model", and "Model Deployment" modules.

Installation Instructions

To get started building attrition risk metrics and storyboards, complete these recommended steps in order:

  1. Deploy a Voluntary Attrition classification model with SHAP enabled in the global settings.

  2. Submit a ticket to have your data engineer add the required code to your processing script.

  3. Allow your One Model site to reprocess.

  4. Meet with your CS advisor to review the storyboard template structure and decide what you want to build and publish.
    Note: Only create by-name and employee-level results if you have internal approval to do so.

  5. Using the metric guide, create the necessary metrics.

  6. Using the storyboard guide, work with the CS team to create the sections of interest.

  7. Ensure storyboard viewers have the appropriate data access to review.

The One Model team is here if you get stuck!

Voluntary Attrition Metric Guide

Use the Metrics for Voluntary Attrition Risk Models Google Sheet as the source of truth for metric definitions and calculations, and build from it the metrics you will use to visualize your model results. You are welcome to modify metrics to suit your organization's needs. Grant metric and storyboard permissions only to appropriate data access roles.

The standard Flight Risk template usually requires metrics in a few categories:

  • Model scope & volume: who is scored and how many predictions exist for the latest run.

  • Model performance: how well the model identifies employees likely to voluntarily leave vs. stay (typically reported by label).

  • Driver summaries: SHAP-based summaries that support “what’s influencing risk” views at the overall level and, optionally, for a selected individual.

  • Risk distributions: risk bucket metrics (Low / Medium / High) and breakouts by key dimensions to show where risk concentrates.

Note: Many metrics are filtered to a specific augmentation. In your build, make sure the augmentation filter matches your Voluntary Attrition augmentation name.

Additionally, many metrics are filtered by label value (e.g., Termination, No Termination). Model creators set these labels in the One AI Recipe Screen during the "Give your prediction target meaningful labels" step. You can customize these names to suit your organization's needs, but we recommend keeping label names consistent across models of the same recipe type to avoid needing new metrics for each model.

Attrition Risk Storyboard Template Descriptions

This guide describes each section of the standard Voluntary Attrition (Flight Risk) storyboard and the value it provides. Your CS Advisor can show you this storyboard live upon request.

1. Model Information & Performance

What it is
A quick snapshot of what the model predicts, who it covers, and how well it performs.

What you will typically see
  • A plain-language “About the model” statement describing the prediction frame (employees in headcount and the timeframe).

  • Model context such as method, population size, how many employees were flagged at risk, prediction rate, features selected, and deployment/run details.

  • A performance view showing how well the model distinguishes Termination vs. No Termination using F1, precision, and recall.
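One Model computes these performance metrics for you, but it can help to see how per-label precision, recall, and F1 relate. The sketch below is illustrative only, not One Model's implementation; the labels and predictions are made up (1 = Termination, 0 = No Termination).

```python
# Hypothetical true outcomes and model predictions
# (1 = Termination, 0 = No Termination; all values are made up).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

def label_metrics(y_true, y_pred, positive):
    """Precision, recall, and F1 for one label treated as the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Report per label, since performance often differs between the two classes.
term_precision, term_recall, term_f1 = label_metrics(y_true, y_pred, positive=1)
stay_precision, stay_recall, stay_f1 = label_metrics(y_true, y_pred, positive=0)
```

Reporting per label, rather than one blended score, is what makes the "attrition is usually the harder class" pattern visible in the storyboard.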

Why it matters
This sets expectations before anyone interprets the drivers or lists. It’s the fastest check for “is this model reliable enough for exploration and discussion?”

Important interpretation reminder
Performance often differs by label. Attrition is usually the harder class to predict well—interpret the rest of the storyboard with that context.

2. Drivers and Directionality

What it is
A transparent view of what the model relies on most, and whether those factors generally push predictions toward retention or toward attrition.

What you will typically see
Ranked driver summaries for Retention and for Attrition, plus a directionality view showing which features tend to increase attrition risk vs. reduce it in aggregate. This is powered by SHAP so viewers can see both impact and direction.
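One Model builds these SHAP views for you, but the underlying idea is simple enough to sketch: average the absolute SHAP values per feature for impact, and the signed SHAP values for direction. The feature names and SHAP values below are entirely made up for illustration; they are not drawn from a real model.

```python
# Hypothetical per-employee SHAP values toward the Termination label
# (positive = pushes risk up, negative = pushes risk down; values are made up).
shap_values = {
    "Tenure": [-0.20, -0.15, 0.05],
    "Commute Distance": [0.10, 0.25, 0.18],
    "Recent Promotion": [-0.08, 0.02, -0.12],
}

def driver_summary(shap_values):
    """Rank features by mean |SHAP| (impact); mean signed SHAP gives direction."""
    summary = []
    for feature, vals in shap_values.items():
        impact = sum(abs(v) for v in vals) / len(vals)
        direction = sum(vals) / len(vals)
        summary.append((feature, round(impact, 3), round(direction, 3)))
    return sorted(summary, key=lambda row: row[1], reverse=True)

ranked = driver_summary(shap_values)
```

In this toy data, "Commute Distance" ranks first on impact and pushes risk up, while "Tenure" has a negative average direction, i.e. it generally pushes predictions toward retention.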

Why it matters
Stakeholders get clear answers to “what signals are associated with higher risk?” Analysts can sanity-check whether those signals match expectations and investigate surprising drivers.

Important interpretation reminder
Drivers explain model behavior based on historical patterns. They do not prove causation or prescribe interventions.

3. Where Does Risk Sit

What it is
A distribution view that shows how attrition risk is spread across the population and across key segments.

What you will typically see
Employees grouped into Low / Medium / High risk buckets (based on defined thresholds), plus risk distributions by selected dimensions (for example, org unit, tenure bands, managerial status, generation).
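Conceptually, bucketing is just applying thresholds to each employee's risk score. The sketch below uses made-up scores and arbitrary cutoffs of 0.3 and 0.6; your actual thresholds should be set with your CS team to match your organization's risk tolerance.

```python
from collections import Counter

def risk_bucket(score, medium=0.3, high=0.6):
    """Map a risk score (0-1) to a bucket. Thresholds here are illustrative only."""
    if score >= high:
        return "High"
    if score >= medium:
        return "Medium"
    return "Low"

# Hypothetical risk scores for five employees.
scores = [0.05, 0.35, 0.72, 0.28, 0.61]
buckets = [risk_bucket(s) for s in scores]
distribution = Counter(buckets)  # e.g. how many employees per bucket
```

The same bucketing, broken out by a dimension such as org unit or tenure band, is what produces the "where does risk concentrate" views.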

Why it matters
It turns individual risk into an organizational story: where risk concentrates, which groups are overrepresented in higher risk, and where follow-up cuts might be useful.

Important interpretation reminder
Buckets are a communication tool. Thresholds should be set intentionally based on your organization’s risk tolerance and use case.

4. Forecasts

What it is
A forward-looking view of attrition trends to support planning conversations.

What you will typically see
A voluntary separation trend over time extended into future periods. Some templates also include supporting context trends (for example, operational metrics used to frame the story) depending on what your organization chooses to publish.
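One Model's forecasting is more sophisticated than this, but a minimal linear-trend sketch shows the basic idea of extending a historical series into future periods. The separation counts below are made up, and real forecasts should account for seasonality and organizational change.

```python
def linear_forecast(history, periods_ahead):
    """Fit a least-squares line to a series and project it forward (toy example)."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + i) for i in range(periods_ahead)]

# Hypothetical voluntary separations per quarter, projected two quarters ahead.
history = [10, 12, 14, 16]
forecast = linear_forecast(history, periods_ahead=2)
```

A steadily rising toy series like this projects to 18 and then 20; the value for leaders is the directionality (up/down/stable), not the exact numbers.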

Why it matters
Forecasts help leaders anticipate directionality (up/down/stable) and decide where deeper analysis is needed. They’re especially useful when paired with segmentation views from the risk distribution section.

Important interpretation reminder
Forecasts are projections from historical patterns and are sensitive to data coverage, seasonality, and organizational change.

5. By-name List(s)

What it is
A practical list view that lets approved audiences review who falls into higher risk groups and add business context.

What you will typically see
A by-name list (often focused on high risk) with employee identifiers and relevant context fields (job family, pay grade, manager flag, demographic fields if included, and other review-friendly context). It is typically designed to be filterable so leaders can narrow to their org or subgroup.

Why it matters
This supports structured talent conversations: “who appears to be at risk,” “where are the clusters,” and “what should we validate further?” without losing the model context.

6. Employee-level Analysis

What it is
A drill-down explainer for one employee at a time, intended for careful review with the right stakeholders—not self-service decisioning.

What you will typically see
Nothing appears until the viewer selects a single person from a Person (Predictions) filter. Once selected, the storyboard shows the employee’s predicted risk and explanation views: which features pushed the prediction toward Termination vs. No Termination, and how the employee’s feature values compare to the model population average. Many templates also include a readable explanation table that describes each feature’s directional contribution in plain language.
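The readable explanation table boils down to comparing each feature value against the population average and pairing it with the feature's directional SHAP contribution. The sketch below uses invented feature names and values to show the shape of that output; it is not One Model's implementation.

```python
# Hypothetical single-employee values, population averages, and SHAP contributions
# toward the Termination label (all numbers are made up).
employee = {"Tenure": 1.5, "Commute Distance": 40}
population_means = {"Tenure": 4.2, "Commute Distance": 18}
shap_row = {"Tenure": 0.12, "Commute Distance": 0.08}

def explain_employee(employee, population_means, shap_row):
    """Plain-language rows: value vs. population average, plus push direction."""
    rows = []
    for feature, value in employee.items():
        avg = population_means[feature]
        compare = "above" if value > avg else "below"
        push = "Termination" if shap_row[feature] > 0 else "No Termination"
        rows.append(
            f"{feature}: {value} ({compare} the population average of {avg}); "
            f"pushed the prediction toward {push}"
        )
    return rows

explanation = explain_employee(employee, population_means, shap_row)
```

Each row reads like one line of the explanation table: a short, reviewable statement rather than a raw SHAP number.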

Why it matters
This is the “why did the model score this person this way?” section. It makes risk explainable and supports thoughtful discussion when individual review is appropriate.

Important interpretation reminder
Person-level explanations describe model reasoning for a single prediction; they should be used as inputs to discussion, not as automated decisions or guarantees.
