Trust is quickly becoming a valuable currency in the age of artificial intelligence (AI). AI is reshaping how everyone, from the largest multinational to the average citizen, works. The question is no longer whether people will use these technologies, but whether they will trust them enough to let them scale. New global data shows why that question matters more than ever.
According to the 2025 KPMG Trust, Attitudes and Use of AI report, only 39 percent of people in advanced economies say they trust AI systems, compared with 57 percent in emerging economies. Sixty-five percent believe AI can deliver technically accurate outputs, but only 52 percent believe these systems are safe, fair, or ethically sound. In other words, people trust the competence of AI, but trust in its safety and integrity lags far behind. In Finland, for example, half of respondents acknowledge that AI provides useful services, but only a third believe these systems are safe to use. That is a significant gap. Research shows trust is strongly correlated with acceptance of AI and with people’s willingness to rely on these systems. If trust falters, adoption stalls, no matter how good the technology becomes.
There are obvious reasons why people are hesitant to trust AI. These systems remain largely opaque, leaving users unsure how they work, how they make decisions, or why they behave unpredictably. A chatbot that hallucinates a false answer, or an advertising algorithm that subtly excludes an entire demographic, does more than cause an error: it erodes confidence in the entire ecosystem.
Bias is another problem. Algorithmic discrimination in hiring, lending, insurance, or even medical triage has shown what happens when models are trained on poor or incomplete data. Security also plays a role. The public began to internalize the risks of AI-generated fraud, manipulation, and impersonation after 2024 became known as “The Year of the Deepfake”. Deloitte projects that AI-enabled fraud could reach 40 billion dollars in the United States by 2027. As AI and autonomous agents proliferate across the workplace, supply chains, and digital commerce, the fear of devastating disruption is growing.
In this environment, transparency, fairness, and accountability are no longer optional. They are the conditions that determine whether AI can scale safely and sustainably.
Why trust must be engineered
AI is everywhere: in employee tools, in cloud platforms, and in fully autonomous agents that operate without oversight. Even if a company does not deliberately deploy AI, it is embedded in the software it buys and the services it relies on. The result is a buildup of AI “tech debt”: unvetted models, opaque algorithms, and siloed experiments accumulating across systems. This makes accountability the real differentiator. McKinsey’s State of AI report shows that companies that invest in responsible AI practices, such as data governance, fairness reviews, model documentation, and risk management, see measurable benefits, from stronger trust to fewer negative incidents.
To scale AI responsibly, businesses must adopt what can be called a trust stack: a layered approach that builds trust into the system from the ground up. The foundation is strong governance: clearly defined policies outlining what AI can and cannot do, paired with legal, compliance, and cross-functional oversight. On top of that sits transparency, including documentation that explains how models behave, when customers are interacting with AI, and how decisions are made. Fairness and ethics form another layer, ensuring systems align with social values and do not reinforce existing inequalities. Monitoring tools complete the structure by detecting model drift, flagging anomalies, and tracking unintended consequences. None of this slows innovation. In fact, it accelerates it by preventing costly failures and reducing the risk of public backlash. Building trust also means building the right culture: even the best-engineered system can be undermined by poor data habits, unvetted tools, or a false sense of security.
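To make that monitoring layer concrete, the sketch below shows one way a team might check for model drift by comparing production data against the data a model was validated on. It is a minimal, illustrative example in Python using the population stability index, a common drift metric; the function name, the synthetic data, and the 0.2 alert threshold are assumptions for illustration, not part of any framework or vendor tool cited in this article.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Population Stability Index (PSI) between a baseline sample and a
    current sample of a model input or score. Higher values suggest drift."""
    # Bin edges are taken from the baseline distribution
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full range of new data

    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)

    # Floor the fractions to avoid division by zero / log of zero
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)

    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # data the model was validated on
    current = rng.normal(0.4, 1.2, 10_000)   # shifted production data
    psi = population_stability_index(baseline, current)
    # A common rule of thumb: PSI above 0.2 warrants investigation
    print(f"PSI = {psi:.3f}" + ("  -> drift alert" if psi > 0.2 else ""))
```

In practice, a check like this would run on live features or model scores on a schedule, with alerts routed to the teams accountable for the model, which is where the governance and transparency layers come back into play.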
Around the world, a growing number of companies are already proving that trust accelerates adoption. TELUS built a human-centric governance system and became the first Canadian company to adopt the Hiroshima AI Process reporting framework. Sage introduced an AI Trust Label, explaining how models work and what safeguards protect small business customers. IBM created AI FactSheets, documenting every model’s purpose, data sources, and risks so the company can stand behind its outputs. These examples show that responsible innovation is not a drag on growth. It is a driver of loyalty, adoption, and long-term value.
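To show what model documentation of this kind can look like in practice, here is a minimal sketch of a fact-sheet-style record for a single model. The field names and example values are assumptions for illustration only and do not reproduce IBM's actual FactSheets schema or any company's internal format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelFactSheet:
    """Illustrative documentation record for one deployed model."""
    name: str
    purpose: str
    owner: str
    data_sources: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)
    fairness_reviews: list[str] = field(default_factory=list)
    customer_facing: bool = False  # are users told they are interacting with AI?

if __name__ == "__main__":
    sheet = ModelFactSheet(
        name="loan-triage-v3",
        purpose="Rank incoming loan applications for manual review",
        owner="credit-risk-team",
        data_sources=["2019-2024 application history", "bureau scores"],
        known_risks=["under-represents thin-file applicants"],
        fairness_reviews=["2025-Q1 disparate impact audit"],
        customer_facing=True,
    )
    print(json.dumps(asdict(sheet), indent=2))
```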
Trust as the real strategy
The global economy is becoming agentic, with AI agents expected to make a significant share of business decisions within the next few years. Gartner predicts that by 2028, a third of enterprise applications will include agentic AI, and at least 15 percent of day-to-day work decisions will be made autonomously. As AI’s decision-making power expands, the nature of risk changes. The primary threat becomes disruption inside the intelligent systems that run supply chains, financial operations, customer service, and enterprise infrastructure. If these systems are not governed well, the cost of failure could be enormous.
That is why trust is not an abstract ideal; it is an economic strategy. Deloitte research shows that a 10-percentage-point increase in societal trust correlates with a 0.5-percentage-point increase in annual GDP growth. PwC forecasts that AI could raise global GDP by 15 percent within a decade but warns that this outcome is not guaranteed. Without responsible deployment and public confidence, that growth could shrink to as little as 1 percent. Trust frameworks unlock high-quality datasets, collaborative innovation, and the public legitimacy businesses need to use advanced technologies at scale. Without trust, nations risk building isolated AI ecosystems, slowing global progress, and undermining shared prosperity.
The next five years are a critical window. Fraud, manipulation, and deepfakes will grow more sophisticated. Autonomous agents will make more decisions. Biotech applications will expand into healthcare, agriculture, and materials science. For AI and biotech to reach their full potential, transparency, fairness, and accountability must be treated as engineering requirements, not as afterthoughts, marketing claims, or compliance exercises.