Transparency in AI Use: Trust, Expertise, and the Hidden Costs

Renée Brecht-Mangiafico, PMP · April 2026
Part 2 of 3 in the series AI, Teams & Decision-Making.

In my PL-300 training, transparency came up as a core tenet of ethical AI design. That stuck with me—because if transparency matters when we build AI systems, it likely matters just as much in how we use them day to day.

Transparency Isn't Just Technical

Most public discussion focuses on large-scale risks:

  • Bots flooding social media with misinformation
  • AI-generated content shaping narratives at scale
  • Bias in automated systems
  • Data privacy concerns
  • The potential for political or global disruption

Those are real concerns. But there is a quieter issue emerging inside everyday workplaces:

AI can blur the line between support and self-representation.

When Output Does Not Equal Understanding

If someone uses AI to draft content, structure ideas, generate insights, or produce a polished response, at what point does that become their work? And what happens if that process is not visible?

This is not always about wrongdoing. Sometimes it is about ambiguity. Sometimes it is about norms that have not caught up to the technology. But ambiguity still has consequences.

Research on trust in organizations, including work by Roy Lewicki and Barbara Bunker, has explored how trust develops, shifts, and can be damaged over time. One practical takeaway is that trust is not only based on outcomes. It is also shaped by perceptions of honesty, reliability, competence, and intent.

That matters here. When someone presents heavily AI-assisted work as entirely their own thinking, the reaction is not always dramatic. It is often subtle. You may begin to wonder:

  • Can they explain the reasoning behind it?
  • Can they adapt it when the situation changes?
  • Can they defend it under pressure?
  • Does this output reflect their actual judgment?

Over time, that uncertainty becomes something more concrete: a shift in trust. A quiet loss of confidence. Not necessarily because AI was used, but because it was not acknowledged.

The SME Problem

In project environments, we rely heavily on SMEs—subject matter experts. We depend on them to guide decisions, flag risks, provide context, and bring depth where the rest of the team may not have it.

AI introduces a subtle risk: the appearance of expertise without the depth of expertise.

AI can help someone produce work that is well-structured, articulate, and persuasive. But that does not always mean the person has deep understanding, practical judgment, or the ability to adapt when something goes off-script. Eventually, that gap shows up—when decisions need to be made under pressure, when nuance matters, when someone asks a follow-up question that requires more than a polished answer.

And when that happens, it affects decision quality, team confidence, risk assessment, and project outcomes—along with the credibility of the person who appeared to be the expert.

This does not mean AI cannot support SMEs. It absolutely can. AI can help experts move faster, organize knowledge, test assumptions, and communicate more clearly. But AI can also make non-expertise look like expertise if there is no transparency around how the work was created.

The Learning Gap

AI can accelerate output while shortcutting learning.

AI is actually an excellent teacher. It can explain, reframe, simplify, compare, and generate examples. It can help someone learn faster than they might have on their own. But only if they engage with it that way. If the workflow becomes copy → paste → move on, we skip the part where real understanding is built.

We skip the sitting with the reasoning. The questioning. The connecting. The moment where something finally clicks because we wrestled with it.

Research on automation and cognition has long warned about this pattern: when people rely too heavily on automated systems, they may lose the underlying skills and judgment needed to evaluate what those systems produce. The same concern applies here. If AI does too much of the heavy lifting without reflection, the work may look strong on the surface, but the learning underneath may not have happened. That is not just a performance issue. It is a development issue.

Trade-Offs We Are Not Naming

There are real benefits to AI-assisted work. It produces faster output, cleaner structure, and a higher baseline quality. It supports people who struggle with writing or organization, and it improves access to examples and explanations. Those benefits are genuine and worth naming.

But there are also real risks. Authorship becomes blurry. Transparency erodes. Perceptions of expertise can become misaligned with actual capability. Trust can quietly shift. And capability gaps may only become visible much later, when the stakes are higher.

We do not yet have shared norms for this. Is AI like spellcheck? Like a research assistant? Like a collaborator? The answer may depend on the setting. But in team environments, especially in project work, context matters. Trust matters. Expertise matters. Accountability matters.

The Real Question

This is not about avoiding AI. It is about using it well. The questions worth sitting with are about ownership, expertise, acknowledgment, depth, and whether AI is supporting learning or replacing it.

Because trust is not just about results.

Trust is also about understanding how those results were created. And as AI becomes more embedded in our work, transparency may become one of the most important foundations of ethical use.

Inspired by: "How to use AI to strengthen teams instead of destroying them," Aytekin Tank, Fast Company, April 2026.