Introduction
One AI Assistant is a powerful generative AI tool that transforms free text questions or prompts into detailed charts and tables from your organization’s people data in One Model. Its user-friendly interface allows users to easily generate queries, interact with data, and refine selections. Since the assistant relies on a large language model (LLM) and a vector database to interpret prompts and match them with your data, clear prompts and deliberate configuration are essential for accurate results.
One AI Assistant processes user prompts by sending them to the LLM, which recognizes important components like metrics, dimensions, and time selections. The LLM doesn’t actually store your data; instead, it works as a bridge, identifying what you are asking for and sending those details back to One Model. From there, the vector database matches the recognized elements to the actual data stored in One Model. The vector database helps ensure that the query pulls the right metrics and dimensions based on the context provided in the prompt.
Thoughtful configuration reduces conflicts in how data is interpreted and matched. Keeping metrics and dimensions unambiguous increases the likelihood of relevant, precise outputs and lets you take full advantage of One AI Assistant’s capabilities.
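To picture why ambiguity matters, the matching step can be thought of as a similarity search over the metric and dimension names you have configured. The sketch below is a simplified, hypothetical stand-in: it uses plain text similarity instead of real embeddings, and the metric names are made up, so it illustrates the idea rather than One Model’s actual implementation.

```python
# Illustrative only: a toy stand-in for the embedding-based matching that a
# vector database performs. The metric names below are hypothetical examples.
from difflib import SequenceMatcher


configured_metrics = [
    "Headcount (EOP)",
    "Headcount (EOP) - Women",
    "Headcount (EOP) - Managers",
    "Termination Rate - Voluntary",
]


def similarity(prompt_term: str, candidate: str) -> float:
    """Crude text similarity used here as a proxy for vector similarity."""
    return SequenceMatcher(None, prompt_term.lower(), candidate.lower()).ratio()


prompt_term = "headcount"
ranked = sorted(
    ((similarity(prompt_term, m), m) for m in configured_metrics),
    reverse=True,
)
for score, metric in ranked:
    print(f"{score:.2f}  {metric}")
```

All of the “Headcount” variants score well against the same prompt term, so the assistant has little signal to pick between them; trimming near-duplicate metrics from the configuration removes that ambiguity.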
Admin & Configuration Best Practices
- Be deliberate in configuration: Site administrators should carefully select which metrics and dimensions One AI Assistant can access. Striking a balance is key: include enough data to support meaningful and interesting queries, but avoid overwhelming the assistant with similarly named metrics, dimensions, and dimension nodes (for example, multiple "headcount" or "termination" metrics). If a metric or dimension is rarely used or only relevant to a small group, it may not need to be included. Focusing on widely relevant, useful metrics and dimensions helps the assistant efficiently find and return the right data.
- Exclude "Previous" dimensions: "Previous" dimensions (e.g., previous location, previous performance rating) are typically used for internal movement metrics, like transfers or promotions, and don’t apply to most other types of queries. If this applies to your One Model instance, consider excluding these dimensions from your One AI Assistant configuration since these dimensions contain the same nodes as their “Current” counterparts. This helps avoid confusion and irrelevant outputs, especially for casual users who may not fully understand the specific data requirements needed for these dimensions.
- Avoid including multiple dimensions with similar nodes: Be mindful when configuring dimensions that have the same or very similar nodes (e.g., "work location" and "home location" both containing "Chicago"). If multiple dimensions share identical nodes, One AI Assistant may struggle to determine which one to use when you query data (e.g., "show me headcount over the last 12 months in Chicago"). To prevent confusion and inaccurate results, only include one relevant dimension, unless you are prepared to make minor edits in the query builder.
- Exclude metrics and dimensions built for One AI predictive model storyboards: Avoid including metrics and dimensions that were built specifically for One AI predictive model storyboards. Since there’s currently no way to track the machine learning model's performance, when it was last run, or how these metrics interact with other data on the site, including them may result in inaccurate or misleading outputs.
- Exclude metrics that can be reached with dimension filters: If a metric can be reproduced by applying a dimension filter, don’t include it as a separate metric. This reduces the number of similarly named metrics and helps One AI Assistant return accurate results. For instance, instead of including a separate "Headcount - Female" metric, use the standard "Headcount (EOP)" metric and apply a gender filter for women (see the sketch after this list). Filtered metrics like these typically exist only as inputs to calculated metrics such as “Headcount % - Female”; the calculated metrics themselves are fine to include.
- Provide clear metric definitions for user clarity: Ensure that metric definitions are clear and easy to understand. They can be edited in the metric editor and matter because the definition is displayed when users drill through a data point. This helps more casual users verify that One AI Assistant is using the correct metric for their query, giving them confidence in the results. Additionally, admins can download all metric definitions from the ‘Metrics’ admin report to provide a "metric dictionary" for users who don’t have drill-through access, ensuring everyone has a reference for the metrics used.
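To make the filter-instead-of-metric idea concrete, here is a minimal sketch assuming a hypothetical employee snapshot (invented column names, not One Model’s data model or query builder): a gender filter on the standard headcount reproduces the gender-specific count, while the calculated percentage remains a separate metric worth exposing.

```python
# A minimal sketch over a hypothetical employee snapshot; column names are
# invented for illustration only.
import pandas as pd

snapshot = pd.DataFrame(
    {
        "employee_id": [1, 2, 3, 4, 5],
        "gender": ["Female", "Male", "Female", "Female", "Male"],
    }
)

# Standard metric: Headcount (EOP) over the whole population.
headcount_eop = len(snapshot)

# Instead of a separate "Headcount - Female" metric, apply a gender filter
# to the standard metric.
headcount_female = len(snapshot[snapshot["gender"] == "Female"])

# Filtered counts remain useful as inputs to calculated metrics such as
# "Headcount % - Female", which is still worth exposing to the assistant.
headcount_pct_female = headcount_female / headcount_eop
print(headcount_eop, headcount_female, f"{headcount_pct_female:.0%}")  # 5 3 60%
```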
Prompting Best Practices
- Be specific, yet concise: Clearly state the metrics, dimensions, and time ranges you want to analyze. Avoid vague requests like "show performance" and instead ask for "show headcount for the last 12 months by department." While it's important to be specific, avoid overloading your prompt with unnecessary details. A clear, concise prompt is easier for the LLM to interpret correctly.
- Dimension levels: If you know the specific dimension level you want to see the metric by, include it in your prompt (e.g., “show terminations for the last 5 years by sup org L3”). Otherwise, the dimension will default to level 1, and you’ll need to modify the query in the query panel to select the appropriate level.
- Use familiar terms: While the vector database can understand different ways of referring to common metrics and dimensions, using terms that are exact or as close as possible to those in your data will improve accuracy (e.g., if your data uses "location," prompt with that instead of "office site").
- Avoid ambiguity: If there are similarly named metrics and dimensions (e.g., "work location" and "home location"), make sure to specify which one you mean. Avoid terms that could apply to multiple fields when possible.
- The One AI team is also developing a configuration feature for site administrators to prioritize specific metrics and dimensions. For example, if your instance has multiple Headcount (EOP) metrics (e.g., Headcount (EOP), Headcount (EOP) - Women, Headcount (EOP) - Managers), admins can prioritize which version of "Headcount" is returned when users simply prompt "headcount."
- Include context: When needed, add additional information to narrow down the results. For example, specify time periods ("last quarter" or "past year") and organizational units (e.g., "finance team" or "sales department") for more precise outputs.
- Refine as needed: If the initial output isn’t exactly what you want, rephrase the prompt or add more details. For instance, if the result is too broad, narrow it down with additional filters. If you are unsure whether the assistant selected the correct metric and you have drill-through access, drill through a data point in the query to view the metric definition and validate it.
Examples of Prompt Patterns
Substitute your own metric names, dimension names, and time selections for the bracketed text in these examples. The patterns can also be combined into more complex prompts (e.g., "headcount by department for the last 12 months and include a forecast").
- [metric name(s)]
- [metric name] and [metric name]
- [metric name(s)] for [time selection(s)]
- [metric name(s)] by/for [time trend]
- [metric name(s)] by [dimension name(s)]
- [metric name] for [dimension selection(s)]
- top/bottom/highest/lowest [dimension name] for [metric name]
- top/bottom/highest/lowest [number] [dimension name] for [metric name]
- forecast [metric name(s)]
- [metric name(s)] by/for [time trend] and include a forecast
Conclusion
Following these best practices for configuring and using One AI Assistant ensures that users can generate accurate and meaningful insights from their data. By being deliberate in configuration, avoiding redundant or ambiguous metrics, and crafting clear prompts, you can maximize the effectiveness of the assistant while minimizing confusion. Thoughtful setup and prompt design will empower all users—whether power users or casual users—to confidently explore and interact with their data.