One AI Assistant Generative AI Usage

General Information

One AI Assistant uses OpenAI large language models (LLMs) through One Model’s enterprise-grade OpenAI account. When you submit a prompt, the information sent to OpenAI depends on which assistant capability is being used. This article outlines what that data is and when it is sent. Information sent to OpenAI endpoints is retained for 30 days for abuse monitoring purposes; learn more about OpenAI's data retention controls here. None of this information or data is used to train OpenAI's models. Any fine-tuning performed by One Model uses only synthetic examples generated by One Model; no customer data is used for model training.

Usage of Generative AI in One AI Assistant

Generative AI is used for the following tasks in One AI Assistant:

Embeddings

One AI Assistant uses OpenAI’s embedding service to generate embeddings for user-defined metrics, dimensions, dimension nodes, Storyboards, and Answers. Embeddings are numeric representations of words or phrases that help computers understand meaning and relationships between concepts. Once generated, these embeddings are stored in a vector database, hosted securely using pgvector within the One Model AWS environment. One AI Assistant uses them to match terminology that refers to the same concept, such as matching a request for “Exits” to a “Terminations” metric.
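For illustration, here is a minimal sketch of what generating and storing an embedding can look like, assuming the OpenAI Python SDK, a pgvector-enabled PostgreSQL database, and hypothetical model, table, and column names (these are not One Model's actual implementation details):

```python
import psycopg2
from openai import OpenAI

client = OpenAI()

# Generate an embedding for a permissioned metric name.
metric_name = "Terminations"
embedding = client.embeddings.create(
    model="text-embedding-3-small",  # assumed embedding model
    input=metric_name,
).data[0].embedding  # a list of floats

# Store the embedding in a pgvector-enabled PostgreSQL table.
conn = psycopg2.connect("dbname=oneai")  # hypothetical connection string
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO metric_embeddings (name, embedding) VALUES (%s, %s)",
        (metric_name, str(embedding)),  # pgvector accepts the '[x, y, ...]' text form
    )
```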

What's sent to OpenAI? All metric names, dimension names, dimension node (category) names, Storyboard names, and Answers that have been permissioned for One AI Assistant.

Orchestration

When a prompt is submitted by a user in One AI Assistant, the first task the assistant performs is to determine which capability to use to fulfill the user's request. This step only occurs when “AI Mode” is selected from the mode selector. For each capability (mode) that the user has permission for, an assessment is run to determine whether that service is a good fit for the user’s question. Some of these assessments use LLM tasks and some use embeddings searches. Once the assistant has determined which service to use, it passes the prompt to that downstream service, which takes over and provides the response to the user in the assistant.
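As a hedged illustration, one of these LLM-based assessments could resemble the sketch below. The capability list, model name, and prompt wording are assumptions for illustration only:

```python
from openai import OpenAI

client = OpenAI()

CAPABILITIES = ["Visualizations", "Answers", "Analyze", "Chat"]  # illustrative list

def assess_capability(user_prompt: str) -> str:
    """Ask the LLM which permissioned capability best fits the user's prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {
                "role": "system",
                "content": "You route people-analytics requests. Reply with exactly one of: "
                + ", ".join(CAPABILITIES),
            },
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content.strip()

print(assess_capability("Show me headcount by cost center for the last 12 months"))
```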

What's sent to OpenAI? The user submitted prompt.

Answers

Matching of a user prompt to a configured Answer leverages embeddings. See the embeddings section above for more detail.

The Answers natural language response displayed between the question and the chart is generated by an LLM. Both the question being answered and the data displayed in the table or chart that answers it are sent to the LLM, along with instructions in the form of a system prompt. The LLM generates a natural language answer of at most three sentences, which is displayed below the question and above the chart or table.
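For illustration, a sketch of this step might look like the following, assuming a placeholder model, system prompt wording, and sample data (none of which are One Model's actual values):

```python
import json
from openai import OpenAI

client = OpenAI()

question = "What is our headcount by region?"  # the configured Answer's question
chart_data = [                                  # sample data for illustration
    {"Region": "APAC", "Headcount": 412},
    {"Region": "EMEA", "Headcount": 655},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {
            "role": "system",
            "content": "Answer the user's question using only the supplied data. "
            "Respond in at most three sentences.",
        },
        {"role": "user", "content": f"Question: {question}\nData: {json.dumps(chart_data)}"},
    ],
)
print(response.choices[0].message.content)  # the natural language answer
```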

Please note that this feature can be individually permissioned. Users can be given access to Answers without the natural language response if desired.

What's sent to OpenAI? The question that’s being answered as well as the data included in the chart or table. This is only the case if the Answers natural language response permission is enabled.

Visualizations

When a user submits a prompt, such as “Show me headcount by cost center for the last 12 months,” a number of fine-tuned LLMs are leveraged to identify essential elements, such as metrics, dimensions, dimension nodes, and time selections within the prompt. These LLMs are as follows:

  • Time Selection: This model extracts time-related information from the user’s request to ensure the correct time periods are analyzed. This includes forecasted time.
  • Time Model Detection: This model extracts time model-related information from the user’s request to apply relative time constraints to the query.
  • Chart Detection: This model determines which type of chart (such as a bar chart or line graph) best fits the data the user requested.
  • Series Detection: This model determines how each series in the chart should be visually represented.
  • Entity Classification: This model identifies metrics, dimensions (names of dimensions), and dimension nodes (categories in dimensions) in the user’s request.
  • Dimension Node Inclusion/Exclusion: This model helps decide whether to include or exclude specific nodes of the dimensions in the user’s request.
  • Dimension Level Specification: This model determines whether the user is asking for nodes from a specific level of a dimension and if so, which level.
  • Dimension Pivoting: This model detects keywords in the user’s request indicating that the data should be pivoted.
  • Dimension Filter Count: This model determines if the user is asking for filtering and if so, what number of values to filter to.

These recognized elements are then returned to One AI Assistant, which searches your vector database to find the closest matches in your data.
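To illustrate, the sketch below shows how extracted elements might be matched against the vector database. The extraction output shape, table name, and embedding model are assumptions, not One Model's actual implementation:

```python
import psycopg2
from openai import OpenAI

client = OpenAI()

# Hypothetical output of the entity classification step for the prompt
# "Show me headcount by cost center for the last 12 months".
extracted = {"metrics": ["headcount"], "dimensions": ["cost center"]}

conn = psycopg2.connect("dbname=oneai")  # hypothetical connection string
with conn, conn.cursor() as cur:
    for term in extracted["metrics"]:
        # Embed the extracted term and find the nearest permissioned metric.
        vector = client.embeddings.create(
            model="text-embedding-3-small",  # assumed embedding model
            input=term,
        ).data[0].embedding
        cur.execute(
            "SELECT name FROM metric_embeddings ORDER BY embedding <=> %s LIMIT 1",
            (str(vector),),  # <=> is pgvector's cosine distance operator
        )
        print(term, "->", cur.fetchone()[0])
```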

Only data configured and permissioned for the assistant’s use during setup can be accessed, matched, and used in analysis. If a metric or dimension name was not embedded, it won’t be included in the assistant’s analysis, nor will it be shared with OpenAI.

What's sent to OpenAI? The user submitted prompt.

Insights

While Insights are not generated by an LLM, they are ranked by one. After a chart or table is generated, if a user clicks “Show Insights,” One AI Assistant may display up to five insight statements. These statements are ranked by an LLM based on their relevance and interest. Only the text of each insight is sent to OpenAI for ranking; the underlying data powering the chart or table is not shared at any point in this process.
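As a hedged sketch, the ranking step might resemble the following; the model name, prompt wording, and sample statements are illustrative only:

```python
from openai import OpenAI

client = OpenAI()

insights = [  # sample insight statements for illustration
    "Headcount in Engineering grew fastest over the last quarter.",
    "Voluntary terminations are concentrated in the Sales cost center.",
    "Average tenure has declined for three consecutive months.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {
            "role": "system",
            "content": "Rank the numbered insight statements from most to least "
            "relevant and interesting. Return only the numbers, in order.",
        },
        {"role": "user", "content": "\n".join(f"{i + 1}. {s}" for i, s in enumerate(insights))},
    ],
)
print(response.choices[0].message.content)  # e.g. "2, 1, 3"
```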

The insight statements are sent to a separate LLM that generates a business recommendation for each one. Only the insight statement is sent to OpenAI. Please note that this feature can be individually permissioned. Users can be given access to Insights without the business recommendation if desired.

What's sent to OpenAI? The insight statements generated by One Model. The data included in the chart or table is not sent.

Analyze

Analyze uses OpenAI’s latest GPT-5 model to enable the user to analyze charts or tables in One AI Assistant. Available for both Visualizations and Answers if permissioned, Analyze sends the data the chart is representing to the LLM along with the user’s request. The richly formatted LLM response is then displayed in the assistant.
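For illustration, an Analyze-style request might look like the sketch below; the data format, system prompt, and sample values are assumptions:

```python
import json
from openai import OpenAI

client = OpenAI()

chart_data = [  # sample data behind the chart, for illustration
    {"Cost Center": "Operations", "Headcount": 240},
    {"Cost Center": "Engineering", "Headcount": 185},
]
user_request = "What stands out in this data, and what should we look at next?"

response = client.chat.completions.create(
    model="gpt-5",  # the article names OpenAI's latest GPT-5 model
    messages=[
        {"role": "system", "content": "You analyze workforce data. Respond in Markdown."},
        {"role": "user", "content": f"{user_request}\n\nData: {json.dumps(chart_data)}"},
    ],
)
print(response.choices[0].message.content)  # richly formatted (Markdown) response
```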

This feature can be individually permissioned by Application Access Role. Users can be given access to Visualizations and Answers without being granted access to Analyze.

Optional web search capability is also available for Analyze. Web search supplements Analyze responses with up-to-date information from the web, with sources cited as hyperlinks. This works by granting the LLM access to the web search tool in its configuration options on the OpenAI side.

What's sent to OpenAI? Analyze sends the data the chart is representing as well as the user’s request.

Chat

Chat is similar to ChatGPT, but within One Model: it’s a framework for asking questions or making requests to an LLM (OpenAI’s latest GPT-5 model). One Model currently does not allow file uploads to Chat. The LLM generates the entire response.

Optional web search capability is also available for Chat. As with Analyze, web search supplements Chat responses with up-to-date information from the web, with sources cited as hyperlinks. This works by granting the LLM access to the web search tool in its configuration options on the OpenAI side. Depending on the permission granted, the LLM can be given access to the entire web or only the https://help.onemodel.co/ domain.
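As a hedged illustration, enabling web search with a domain restriction might look like the sketch below. It uses OpenAI's Responses API web search tool; the tool configuration shown (including the allowed_domains filter) is an assumption based on OpenAI's published options, not a confirmed description of One Model's setup:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",  # the article names OpenAI's latest GPT-5 model
    tools=[
        {
            "type": "web_search",
            # Restrict searches to the One Model help site; omit this filter
            # to allow searches across the entire web.
            "filters": {"allowed_domains": ["help.onemodel.co"]},
        }
    ],
    input="How do I configure One AI Assistant permissions?",
)
print(response.output_text)  # response text with cited sources as hyperlinks
```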

What's sent to OpenAI? The user’s prompt or question is all that is sent to OpenAI for Chat.

 
