AI Has a Trust Problem
Trust is fundamental to everything humans do together, and one might argue it is the most basic and critical fabric that powers and holds businesses together. Agreements, reports, deals, intellectual property, communications, verbal and digital engagement, and research findings are all built on trust: trust in sources, trust in people, trust in legal documentation, trust in data, processes, language, standards, and protocols. In each of these areas, trust is paramount to understanding risk, opportunity, and execution.
Enter AI. The technology stands at a crossroads, facing a growing trust problem in how it produces and delivers business value. Concerns are mounting about the reliability of AI-generated information and about how other IT systems, applications, and users deploy that derived knowledge. Business leaders now have more reasons for uncertainty and a greater need for mechanisms to assess the risk of AI-powered solutions.
One of the most alarming issues is AI-produced hallucinations: the tendency of some models to fabricate research, misattribute sources, or blend unrelated content. These errors not only disrupt workflows but also undermine confidence in AI’s ability to support credible decision-making. Without trust, AI cannot be a reliable partner in research or strategic analysis.
This trust deficit is especially urgent for enterprise businesses. Large language models (LLMs) are ubiquitous, yet to deliver credible domain- and role-specific value, companies must also integrate proprietary data securely and retain control of that integration. This mix of public and private information compounds the uncertainty and the growing concerns around AI solutions. As AI reshapes how clients perceive and interact with data, businesses must unlock its potential while safeguarding integrity and trust.
When AI Trust Issues Become Reality
Real-world failures highlight the urgency of the trust problem and can impose real costs on enterprise businesses and their bottom line. Trust issues show up as damage to brand value, degraded corporate reputation, legal liability, process inefficiency, reliance on false or misleading AI-generated results, and more.
Examples of trust issues are multiplying. In October 2025, several outlets reported on Deloitte Australia’s apparent misuse of AI in producing a contracted consulting report; the firm agreed to partially refund the government contract, worth roughly AU$440,000, after AI hallucinations resulted in fabricated references, false quotations, and other errors.
Courtrooms are feeling the impact of misinformation as well: a growing number of courts have fined and reprimanded lawyers for filings built on generative AI and on hallucinated sources and citations. In one case, a judge fined a law firm more than $30,000 for submitting a document that cited non-existent articles and had been drafted with AI assistance using false information.
In the medical field, trust concerns span many facets, including bias in diagnostic recommendations, potential misuse of patient data, conclusions reached without adequate clinical context, and unverified AI-generated interpretations of data.
AI-powered search services are also reported to introduce bias into their results. LLMs are trained on enormous amounts of text, and biases can form and have formed. Earlier in 2025, xAI apologized for a “malfunction” in Grok after it posted “wildly racist and antisemitic messages,” as reported by NPR’s All Things Considered.
Recent announcements by federal agencies have cited sources or references that were reportedly hallucinated or misinterpreted AI-generated summaries, eroding confidence and transparency in government and calling the validity of those reports into question.
Each of these incidents underscores the real-world consequences of trusting AI without safeguards. Trust, at its core, is rebuilt by fixing what is broken and re-establishing credibility.
Rebuilding Trust
There are many core issues AI must address to rebuild trust, including:
- Transparency Commitments: AI-supported and AI-enabled solutions should provide clarity about the processes used and identify bias in both information and outcomes.
- Accountability and Accuracy: Clearly representing the accuracy and precision of any AI-produced documents, summaries, findings, research, and communications.
- Liability Understanding: Defining and understanding key liability areas such as copyright infringement, discriminatory findings, bias in AI models, and the impact of incorrect or defamatory AI-based conclusions.
- Standards for Trust Verification: Building a balanced mix of processes, guidelines, data, information, and knowledge-management policies, together with a healthy dose of human-in-the-loop (HITL) oversight (a minimal illustration of such a verification gate follows this list).
- Reliability Measures: Specific technical definitions and services, including deterministic checks, for establishing, maintaining, and auditing user-facing elements, system-level measurements, and AI-based outcomes.
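To make the verification and HITL ideas above concrete, the following is a minimal, hypothetical Python sketch of a citation-verification gate: every claim in an AI-generated draft must resolve to a source in a vetted internal registry, and anything that does not is escalated to a human reviewer instead of being published automatically. The registry contents, identifiers, and data structures here are illustrative assumptions, not a description of any particular product or method.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Citation:
    """One claim in an AI-generated draft and the source it attributes."""
    claim: str       # statement the draft makes
    source_id: str   # identifier the model attached to the statement


# Hypothetical registry of sources the organization has actually ingested and
# vetted. In practice this would be a document store or knowledge base, not a
# hard-coded dictionary.
TRUSTED_SOURCES = {
    "report-2024-q3": "Quarterly revenue report",
    "policy-hr-017": "Remote work policy",
}


def verify_citations(citations: list[Citation]) -> tuple[list[Citation], list[Citation]]:
    """Split citations into those that resolve to a trusted source and those
    that must be escalated to a human reviewer."""
    verified: list[Citation] = []
    needs_review: list[Citation] = []
    for c in citations:
        if c.source_id in TRUSTED_SOURCES:
            verified.append(c)
        else:
            needs_review.append(c)
    return verified, needs_review


if __name__ == "__main__":
    draft = [
        Citation("Q3 revenue grew 12%", "report-2024-q3"),
        Citation("The 1998 Smith study proves X", "smith-1998"),  # unresolved, possibly hallucinated
    ]
    verified, flagged = verify_citations(draft)
    for c in verified:
        print(f"VERIFIED: {c.claim} [{c.source_id}]")
    for c in flagged:
        print(f"ESCALATE TO HUMAN REVIEW: {c.claim} [{c.source_id}]")
```

In a real deployment, the registry would be backed by the organization’s own document or knowledge store, the flagged items would feed a review queue with audit logging, and the same gate could sit in front of reports, briefs, or customer-facing answers.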
Building an AI That People Can Believe In
AI must overcome its challenges by actively building trust, and those challenges appear consistently in our research. Addressing them, and overcoming a growing lack of confidence in AI, is what trust is all about. By confronting hallucinations, improving source attribution, and enhancing model reliability, AI can become more reliable, credible, and trustworthy. When we apply methodologies and tools for understanding and managing trust, we can deliver more certainty in a time of uncertainty. Building AI that people can believe in isn’t just a technical challenge; it is a moral imperative.
About The Author Of This Article
Jim King is CEO and Founder at IndagoAI