Every AI vendor’s marketing site has some version of: “We take your data security seriously.” It is so universal that buyers skim past it. The phrase has become unfalsifiable. Vendors who train on customer data and vendors who do not both use it.
Underneath the marketing, practices vary enormously. Some vendors train on customer data by default. Some train by default but let customers opt out. Some train only with explicit opt-in. Some make commitments they could reverse at any time. Some make commitments that are structurally durable. The differences are huge, but they are invisible to most buyers because everyone uses the same language.
This article is about how to actually evaluate AI vendor trust posture, beyond the marketing copy.
The four real questions
When evaluating an AI vendor’s trust posture, four specific questions reveal more than any marketing page.
Question 1: Do you train on customer data?
The acceptable answers are:
- “Never, structurally. The training pipeline does not exist.”
- “Only with explicit opt-in, on a per-customer basis.”
Unacceptable answers, all common:
- “We may use customer data to improve our services.” Translation: yes, we train.
- “We take privacy seriously.” Translation: this is not an answer to your question.
- “Only with appropriate safeguards.” Translation: yes, we train, but we phrased it carefully.
The vendor’s answer to this question, in the actual contract language they would sign, is the only reliable signal.
Question 2: Does your system surface individual productivity metrics?
Some AI tools build “team analytics” or “productivity insights” features that measure individual user behavior. “Sarah sent 47 messages this week.” “Mike’s response time has slowed.” These features sound innocuous but they enable surveillance use cases that destroy team trust the moment users discover them.
The acceptable answer: “No, we do not surface individual productivity data. By design.”
The unacceptable answer: “We provide team analytics that managers can use to...” Whatever follows is surveillance dressed in productivity language.
Microsoft’s 2020 Productivity Score controversy still follows that brand. Vendors building similar capabilities today are setting up the same kind of scandal. Buyers who want this kind of feature should buy from those vendors. Buyers who do not should ask the question and listen to the answer.
Question 3: How does permission inheritance work?
When a user asks the AI a question, what determines what content the AI can show them? The right answer: the user’s existing permissions in source systems. If they can see a Slack channel in Slack, the AI can use that content. If they cannot, the AI cannot.
The wrong answer involves words like “smart permissions” or “intelligent access” or “expanded visibility.” These phrases describe permission expansion: the AI tool shows users content they do not have access to in source systems, on the theory that the AI knows better than the source system about what is relevant. This is a security gap waiting to be discovered.
Permission expansion sounds helpful and it produces better looking demos. It also produces the worst kind of customer escalation: a user finds out the AI showed them content they should not have seen, files an incident, and the security team gets involved.
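The inheritance model described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the `Document` shape, the ACL structure, and all names are hypothetical. The point it demonstrates: the retrieval layer consults the source system's access list and nothing else, so the AI can never show a user content the source system would hide from them.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    source_channel: str  # where the content lives, e.g. a Slack channel
    text: str

def user_can_access(user_id: str, channel: str, acl: dict[str, set[str]]) -> bool:
    """Defer entirely to the source system's ACL; the AI layer adds no judgment."""
    return user_id in acl.get(channel, set())

def filter_retrieval(
    user_id: str, candidates: list[Document], acl: dict[str, set[str]]
) -> list[Document]:
    # Permission inheritance: the AI may only use documents the user could
    # already open in the source system. No "smart" expansion, ever.
    return [d for d in candidates if user_can_access(user_id, d.source_channel, acl)]
```

Note what is absent: there is no relevance override, no admin bypass, no "the model decided this was safe to show." If the ACL says no, the document never reaches the model's context.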
Question 4: What does your audit log capture?
Every action an AI tool takes on your team’s behalf should be logged. Every retrieval, every Skill invocation, every agent action, every connector sync. The audit log is how you detect when something has gone wrong.
The right answer: “Every action is logged with full traceability. Logs are retained for X years.”
The wrong answer: “We log key events.” This means most things are not logged, and the things that are not logged are exactly where problems hide.
For enterprise buyers, audit logging is non-negotiable. For mid-market and SMB buyers, it should be treated as non-negotiable even if your security team is not formally requiring it. The cost of weak audit logging shows up later, when you need to investigate something that did go wrong.
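"Every action is logged" has a concrete shape. The sketch below is one hypothetical way to structure it, assuming an append-only JSON Lines log; the field names and helpers are illustrative, not any specific product's schema. What matters is that every event, not just "key" ones, gets a record with enough fields to reconstruct who did what to which resource.

```python
import json
import time
import uuid

def audit_event(actor: str, action: str, resource: str, outcome: str) -> dict:
    """One record per action: retrieval, skill invocation, agent action, connector sync."""
    return {
        "event_id": str(uuid.uuid4()),   # unique, so records can be cross-referenced
        "timestamp": time.time(),
        "actor": actor,                  # the user or agent the action ran as
        "action": action,                # e.g. "retrieval", "skill_invocation"
        "resource": resource,            # what was touched
        "outcome": outcome,              # "allowed", "denied", "error"
    }

def append_log(path: str, event: dict) -> None:
    # Append-only JSON Lines; retention and tamper-evidence are enforced downstream.
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")
```

A useful follow-up question for vendors: is the "denied" path logged too? A log that only records successful actions cannot show you attempted permission expansion.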
- Question 1: Do you train on customer data? Good answer: no, structurally, not as policy. Deflection: “aggregated,” “anonymized,” or opt-out.
- Question 2: Do you track individual productivity? Good answer: no, we report team-level data only. Deflection: “for admin insights” or “coaching.”
- Question 3: How are permissions inherited? Good answer: the source system is the source of truth, with no admin override. Deflection: workspace-managed, override possible.
- Question 4: What does the audit log capture? Good answer: every query, retrieval, and export. Deflection: selected events, or on request.
What structural commitment looks like
The deeper question, beyond these four, is whether the vendor’s commitments are structural or merely policy. We covered this distinction in detail in the no-training cornerstone. The short version: a policy commitment can be reversed by updating terms of service. A structural commitment would require rebuilding the product.
Ask any vendor: “What would have to happen at your company for you to start training on customer data tomorrow?”
If the answer is “we would update our terms of service,” the commitment is policy.
If the answer is “we would have to build infrastructure that does not currently exist,” the commitment is structural.
Structural commitments are worth more because they are durable across changes in leadership, business pressure, and competitive dynamics. Policy commitments last only as long as the current policy.
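The policy-versus-structural distinction can be made concrete with a deliberately simplified sketch. Everything here is hypothetical, named only for illustration: in the policy version, a training pipeline exists and a single flag guards it; in the structural version, the inference path has no write route into any training store at all.

```python
# Policy commitment: the pipeline exists; only a flag (i.e. current policy)
# stands between customer data and the training store. A ToS update away.
TRAIN_ON_CUSTOMER_DATA = False

def ingest_for_training(record: str, training_store: list[str]) -> None:
    if TRAIN_ON_CUSTOMER_DATA:
        training_store.append(record)

# Structural commitment: the request handler simply has no path that
# persists customer data for training. Reversing this is not a config
# change; it means building infrastructure that does not currently exist.
def handle_request(prompt: str, model) -> str:
    return model(prompt)  # respond and return; nothing is retained for training
```

The asymmetry is the point: flipping `TRAIN_ON_CUSTOMER_DATA` is a one-line change that no customer would notice, while the structural version has nothing to flip.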
The strategic point
The AI tool category is going to be defined over the next five years by which vendors are trustworthy with team data. The first major scandals will probably break in 2026 or 2027, and they will involve vendors who trained on customer data, surfaced productivity metrics, or expanded permissions in ways customers found out about.
The vendors who built around structural commitments from the beginning will be the long term winners. The vendors who built around marketing claims will keep operating until they encounter the first scandal, at which point trust collapses.
If you are evaluating AI tools today, the trust posture matters more than most buyers realize. Ask the four questions above. Listen carefully to the answers. Avoid the vendors whose answers are deflections.
We have built Pulse around structural commitments specifically because we expect this distinction to matter increasingly over time. The detailed manifesto is linked from /manifesto, and the technical architecture is designed to make these commitments durable rather than promotional.