General & Strategic
Q: What are the advantages of using an Enterprise AI tool (like ChatGPT, Claude or Microsoft Copilot)?
The goal is to support data-driven decision making by putting data in front of decision makers in the flow of work. We want to bring high-quality, governed people data directly into the tools managers use every day, rather than forcing them to leave their workspace to get answers to questions about their people.
You don’t choose between an Enterprise AI tool and One Model; One Model is the intelligence engine that makes Enterprise AI possible. It handles the complex data wrangling, security, and standardization so that when a manager uses an Enterprise AI as their ‘daily driver’, the insights are accurate, safe, and speak your company’s specific language. The benefits of this approach:
- Stop 'Context Switching': Research suggests it takes roughly 23 minutes to get back into a state of 'flow' after switching tasks. Asking Copilot for a turnover rate while drafting a budget in Excel keeps the manager focused and productive.
- The 'Flow of Conversation': Decisions happen in meetings. Pulling governed data directly into a Teams chat allows for real-time, evidence-based leadership without pausing the momentum.
- Governed Insights, Everywhere: Connecting One Model to your Enterprise AI ensures that 'generic' tools are grounded in a Single Source of Truth. You get the reach of ChatGPT with the security of One Model.
- Lowering the Barrier: Many managers are 'data-shy' with specialized platforms. Asking a familiar tool a simple question in plain English drives much higher data adoption across the leadership team.
Q: Will the numbers be different between my Enterprise AI and One Model?
No. The underlying data remains identical.
The AI does not guess your metrics. While the Enterprise AI (like Copilot or ChatGPT) provides the interface, the actual numbers are fetched directly from One Model via the MCP Server. You are accessing the same verified, governed results that live within your One Model platform.
It is important to note that because the final output is composed by a Large Language Model (LLM), there is always a small risk of translation errors or hallucinations during the final delivery. Every response includes a standardized disclaimer: "AI-generated content may be present. Check for accuracy." We always recommend cross-referencing critical figures with your One Model platform.
Q: How does the One AI Assistant fit into this picture?
One Model’s One AI Assistant is at the heart of the Enterprise AI connection with One Model. When you ask One Model a question from your Enterprise AI, that question is routed to the One AI Assistant, which in turn assembles the response from the One Model platform.
Q: Are the insights generated the same as those in the One Model UI?
They won’t be identical, and this gets to the heart of how Generative AI works. It is important to distinguish between the Data and the Interpretation (Insights):
- The Data: This comes directly from One Model. It is verified, governed, and acts as the Single Source of Truth.
- The Insights: These are generated by the Enterprise AI (e.g. ChatGPT). Because the AI is interpreting the data on the fly, the insights are conversational and can be blended with other company context. These interpretations are unique to the AI's context and will differ from the deep-dive analytics found within the One Model app. This does provide you with a great opportunity to wrap One Model data in your own Agents and company context.
Security and Permissions
Q: What is MCP and how can we test it?
MCP (Model Context Protocol) is the standardized bridge that allows an AI (the Host) to use One Model’s features (the Tools). Think of it like a USB port for AI: we build one MCP Server, and any compatible tool (ChatGPT, Claude, etc.) can plug into it to access your data securely.
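To make the "USB port" analogy concrete, here is a minimal, purely illustrative sketch of the idea behind an MCP-style server: it advertises a catalog of tools and dispatches structured requests from an AI host to them. All names here (`get_metric`, `TOOLS`, `handle_request`) are hypothetical, not One Model's actual API, and the real protocol uses JSON-RPC rather than this simplified format.

```python
import json

# Hypothetical sketch of an MCP-style tool server. Names are illustrative
# only; they are not One Model's implementation or the real MCP wire format.

def get_metric(metric: str, period: str) -> dict:
    """Pretend tool: return a governed metric value from the platform."""
    sample_data = {("turnover_rate", "2024-Q1"): 0.042}  # fabricated sample
    return {"metric": metric, "period": period,
            "value": sample_data.get((metric, period))}

# The server advertises a catalog of tools; any compatible AI host
# (ChatGPT, Claude, Copilot, ...) can discover and call them.
TOOLS = {"get_metric": get_metric}

def handle_request(raw: str) -> str:
    """Dispatch a JSON request {"tool": ..., "args": {...}} to a tool."""
    req = json.loads(raw)
    tool = TOOLS[req["tool"]]
    return json.dumps(tool(**req["args"]))

print(handle_request(
    '{"tool": "get_metric", '
    '"args": {"metric": "turnover_rate", "period": "2024-Q1"}}'))
```

Because the server, not the AI, owns the tool implementations, the same catalog can serve any compatible host without per-tool integration work.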
Q: How does One Model ensure data security and permissions are respected within the Enterprise AI tool?
Security is enforced through a Multi-Layered Governance approach. This ensures that the AI only sees what the Admin allows, and the User only sees what their role permits.
- Role-Based Security (The User’s Access): One Model’s core Role-Based Security (RBS) is the first and most important line of defense.
- Application Access: The user must have explicit permission to access One Model data via an external tool (MCP).
- Data Access: The system enforces your existing Data Access Roles (Metric, Dimension, and Column permissions) and Security Rules (Row-level/Population access). If a manager can't see a specific department's salaries in the One Model UI, they can't see them in the AI either.
- Vector Configuration (The AI’s Scope): This is a second layer of security specific to the AI Assistant. The MCP server only knows about the metrics and dimensions your Admins have enabled in the One AI Configuration.
- If a field is excluded here, whether to protect sensitive data or to avoid overloading the AI with irrelevant fields (a common cause of hallucination), it remains completely invisible to the Enterprise AI tool.
The Result: If a user asks for data they aren't permitted to see, or data that hasn't been configured for the AI, the MCP server simply will not return it.
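The layered model above can be sketched as a simple intersection: a field is only returned if every governance layer permits it. The field names and sets below are fabricated for illustration and do not reflect One Model's internal implementation.

```python
# Illustrative sketch of multi-layered governance: a field must pass BOTH
# the admin-enabled AI scope (vector configuration) and the user's
# data-access role (RBS) to be visible. All names are hypothetical.

AI_ENABLED_FIELDS = {"headcount", "turnover_rate", "salary"}   # admin's AI scope
USER_PERMITTED_FIELDS = {"headcount", "turnover_rate"}         # this user's RBS

def visible_fields(requested: set) -> set:
    """A requested field is visible only if every layer allows it."""
    return requested & AI_ENABLED_FIELDS & USER_PERMITTED_FIELDS

# 'salary' is AI-enabled but blocked by the user's role; 'shoe_size' is
# unknown to both layers. Only 'headcount' survives.
print(visible_fields({"salary", "headcount", "shoe_size"}))
```

The key property is that the layers compose: tightening either the AI scope or the user's role can only shrink, never expand, what the Enterprise AI can return.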
Q: How do we audit user activity and track what is being asked (Prompts)?
One Model provides two distinct layers of oversight: ‘System Audit Logs’ for compliance and ‘Usage Statistics’ for behavioral analysis.
- System & Security Audit Logs
These logs track administrative and security-related changes to the platform's configuration. Admins with the necessary permissions can export these as daily JSON log batches to monitor the integrity of the AI connection.
- Integrations: Logs when a user adds, edits or deletes an Integration Rule for the AI.
- User Consent: Logs every instance of a user hitting the consent page, including whether they Accepted or Rejected the connection between AI Tool and One Model.
- Access Governance: Tracks any changes made to the roles and permissions that authorize MCP or Enterprise AI.
- Refer to the Audit Log Guide for a full list of tracked security events.
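Since the logs export as daily JSON batches, they are straightforward to process programmatically. The snippet below is a sketch only; the event and field names are invented for illustration, so check the Audit Log Guide for the actual schema.

```python
import json

# Hypothetical daily JSON log batch; event and field names are illustrative
# and do not reflect One Model's actual audit log schema.
batch = '''[
 {"event": "user_consent", "user": "a.chen", "action": "Accepted"},
 {"event": "integration_rule", "user": "admin1", "action": "edited"},
 {"event": "user_consent", "user": "b.diaz", "action": "Rejected"}
]'''

events = json.loads(batch)

# Example compliance check: surface every rejected consent in the batch.
rejections = [e for e in events
              if e["event"] == "user_consent" and e["action"] == "Rejected"]
print(len(rejections))
```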
- Usage Statistics
Usage Statistics capture system-triggered events in near real-time (updating approximately every 10 minutes). These metrics allow you to monitor adoption and analyze user intent to understand exactly how the AI is being utilized across your organization.
- Refer to the Usage Statistics Article for a comprehensive list of all event triggers.
- Enterprise AI usage events are currently in development and will be automatically integrated into your reporting suite as they become available.
Q: Will our data be used by One Model to train public models (e.g. ChatGPT)?
No. Just like the use of the One AI Assistant, when you connect via MCP, your data is used by One Model as context to answer your specific question. One Model uses enterprise-grade protocols that ensure your data is not ingested into public Large Language Model (LLM) training sets. Your organizational data remains your own and is never used to improve the public AI for other companies.
Operational & Implementation
Q: How does the workflow actually work? (From Chat to Data)
When you ask a question in an Enterprise AI tool (like Copilot), the MCP Server acts as a secure bridge to One Model. It uses an Orchestration process to determine the most accurate way to retrieve your data.
- Path A: Preconfigured Answers
If your question matches a specific Answer or Storyboard (in development) that your Admins have already built:
- The Match: The system uses 'embeddings' (mathematical representations of meaning) to recognize that your question closely matches a verified Storyboard Tile.
- The Result: It pulls the data directly from that pre-set Storyboard.
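Path A's matching step can be sketched with cosine similarity between embedding vectors. In production the vectors come from a language model and have hundreds of dimensions; the 3-dimensional vectors, tile names, and threshold below are fabricated purely for illustration.

```python
import math

# Toy sketch of embedding-based matching. Real embeddings come from a
# language model; these tiny vectors and tile names are fabricated.
TILE_EMBEDDINGS = {
    "Quarterly turnover tile": [0.9, 0.1, 0.0],
    "Headcount by region tile": [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction (same meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def best_match(question_vec, threshold=0.8):
    """Return the closest preconfigured tile, or None to fall to Path B."""
    name, score = max(((n, cosine(question_vec, v))
                       for n, v in TILE_EMBEDDINGS.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

print(best_match([0.85, 0.15, 0.05]))  # close to the turnover tile
```

When no tile clears the similarity threshold, the orchestrator falls through to Path B and generates a fresh query instead.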
- Path B: Generative Querying
If you ask a unique or complex question that doesn't have a preconfigured answer:
- Translation: The system translates your 'human' words into technical metrics and filters (e.g. recognizing that 'people who left' means the Terminations metric).
- Query Generation: It writes a precise, one-time query to pull that specific data from the database.
- Execution: The query runs against the One Model database, ensuring the data is governed and secure.
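The translation step in Path B can be illustrated with a deliberately simplified phrase-to-metric lookup. A real system resolves phrasing semantically rather than with a literal table, and the phrase and metric names below (other than Terminations, mentioned above) are assumptions for the sketch.

```python
# Hypothetical phrase-to-metric mapping for Path B's translation step.
# Production systems use semantic matching, not a literal lookup table.
SYNONYMS = {
    "people who left": "Terminations",
    "new hires": "Hires",          # illustrative metric name
    "headcount": "Headcount",      # illustrative metric name
}

def translate(question: str) -> list:
    """Return the governed metrics referenced in a plain-English question."""
    q = question.lower()
    return [metric for phrase, metric in SYNONYMS.items() if phrase in q]

print(translate("How many people who left last quarter?"))
```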
- The Final Step: The Conversational Response
Once the raw data is retrieved through the MCP bridge, your Enterprise AI tool formats it into a clear, conversational response. It presents the verified numbers and adds business context or relevant insights to help you interpret the results.
Q: Are there usage limits or extra costs for querying via an Enterprise AI tool?
One Model does not currently charge a per-query fee for MCP access; however, use of the tool is subject to your overall contract terms. Additionally, you must have the appropriate enterprise licensing for your AI tool (e.g., ChatGPT Enterprise).