
Are AI Helpers Actually Helping or Misleading Us?

Hans-Peter Plag

Published in Library on Jun. 19, 2025.

AI helpers are increasingly built into tools like WhatsApp. They make serious mistakes and try to defend them when caught. We are on a slippery slope toward people's perceptions being increasingly biased by AI errors and misinformation.


Generative AI tools are increasingly used by people to get answers to questions or to generate text, graphics, and source code. Tools like WhatsApp, Acrobat Reader, and Zoom use AI-based helpers to support their users. However, these “helpers” make mistakes, and not all of those mistakes are easily discovered by users.

In an article published in The Guardian, Robert Booth reports on a case in which WhatsApp's AI helper revealed the private number of another WhatsApp user in response to a request for the phone number of a U.K. railway service. When the user pointed out the mistake, the AI helper, over a long dialog, offered many misleading excuses and made several additional mistakes.

There is an increasing number of similar examples. This raises the question of the extent to which people's perceptions will be shaped by mistakes made by, or intentional misinformation provided by, AI helpers.

The Transformation Community recently organized a workshop, “Can AI Build Trust and Hold Space?” (a recording is available here). The AI tool considered was Harmonica. Highlights were provided in an email:

  • Surprise hit: Harmonica responded in Spanish on the spot—its real-time flexibility caught participants’ attention.
  • Trust gap: When asked if AI can build trust, the tool offered a summary rather than analysis—reminding us that emotional depth is still a human strength.
  • Ethical tensions: Participants raised clear concerns about AI reinforcing dominant narratives or making invisible ethical decisions.
  • Insight > answers: AI was most useful as a sparring partner—not a definitive source, but a tool to think with.
  • Stay in the driver’s seat: Participants saw that the potential for tools like Harmonica lies in how we shape it—customizing prompts, questioning assumptions, and co-owning the process.

The sounding board commented:
Silva Ferretti reminded us that AI isn’t neutral. She challenged us to center process over outcomes and use AI as a lens for dialogue, not answers. Scott Chaplowe urged us to measure impact through values — are we deepening relationships, broadening inclusion, and surfacing real dialogue?

The above indicates that many transformation experts are concerned that uncritical use of AI could easily amplify biases and biased perceptions.