Prediction #BBB380E4 (Completed, Advanced mode)

Confidence: high (the model's confidence in this forecast)
Probability: 65%
The Question
"What percentage in the US will trust AI by the year 2030, according to the Edelman Trust Barometer?"
Advanced prediction

The author used Advanced mode to provide extra direction to the forecasting pipeline.

Additional context provided

The Edelman Trust Barometer is an annual global survey conducted by the global communications firm Edelman that measures public trust in institutions such as government, business, NGOs, and media. In recent years, Edelman has also included specific trust metrics related to technology, including artificial intelligence.
Participants are asked to rate how much they trust AI on a 9-point scale, with participants who pick any of the top four options counted as trusting AI. In 2024, 50% of participants trusted AI globally, with a minimal reduction to 49% in 2025. In the US that number was lower in 2025, at 32%.

This context was supplied by the author and influenced the forecast. Inaccurate context can affect prediction accuracy.
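
For concreteness, here is a minimal sketch of how a top-four-box share like Edelman's could be computed from 9-point responses. The response values below are hypothetical, chosen for illustration, not actual survey data:

# Hypothetical 9-point trust ratings (1 = no trust at all, 9 = complete trust).
responses = [2, 7, 5, 9, 6, 3, 8, 4, 6, 1]

# Per the context above, a respondent counts as "trusting AI" when they
# pick any of the top four options, i.e. a rating of 6 or higher.
trusting = sum(1 for r in responses if r >= 6)
trust_share = 100 * trusting / len(responses)

print(f"{trust_share:.0f}% of respondents trust AI")  # 50% for this sample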

The Forecast

The Trust Recovery: Predicting US AI Trust Will Reach 42% by 2030

By 2030, public trust in artificial intelligence in the United States is projected to rise from 32% in 2025 to approximately 42%, as measured by the Edelman Trust Barometer, a gain of roughly two percentage points per year. This forecast reflects a moderate but meaningful recovery in trust, driven not by an abrupt embrace of AI but by a normalization of its role in daily life and work. The projection weighs ongoing concerns, such as fears of job displacement and the spread of deceptive deepfakes, against growing familiarity, maturing AI applications, and emerging regulatory safeguards.

The current level of trust in AI in the US is notably low. As of 2025, only about 32% of the American public expressed trust in AI technologies, well below the global average of roughly 49-50%. This gap is driven by factors including skepticism toward the organizations developing AI systems and widespread worry about deceptive AI-generated content such as deepfakes. Confidence in the companies building AI has visibly eroded, with trust in AI companies dropping from around 50% to 35% over five years, and that skepticism about the builders feeds directly into hesitation about the technology itself.

Despite these challenges, familiarity emerges as a key driver of rising trust: hands-on experience with AI correlates with significantly higher confidence. As AI shifts from novelty to a tool embedded in everyday workflows, acting, for example, as an 'agent' managing digital tasks or as a 'lab assistant' in scientific research, users develop a more practical understanding of its benefits. This reframing of AI as a utility rather than a threat is expected to contribute substantially to trust gains. Milestones such as widespread adoption of AI protocols and notable contributions to areas like drug discovery illustrate this growing utility and legitimacy.

Conversely, economic anxieties pose a major ongoing impediment to trust. Projections suggest that AI could automate up to 30% of work hours in the US economy by 2030, fostering widespread fears of job loss and displacement. Although some estimates are more moderate, and historical precedent shows technology being adopted gradually with manageable labor-market impacts, the perception of risk remains a formidable barrier. Public concern that AI primarily serves shareholder interests at the expense of workers continues to weigh heavily on trust levels.

Another critical factor influencing the trajectory of AI trust is regulation. Emerging frameworks such as the European Union's AI Act introduce binding rules and accountability mechanisms that set benchmarks for ethical AI governance. Although originating in Europe, these regulations often influence global standards. The expectation is that by 2030, the US will implement comparable regulatory measures addressing ethical design, user control, and risk management. Such governance provides an essential sense of integrity and oversight that reassures the public and helps foster a baseline of trust.

Looking across possible futures, the primary scenario anticipates trust stabilizing between 40% and 45% by 2030, with a 65% probability. In this scenario, AI becomes inconspicuous, useful infrastructure underpinned by effective regulation. There is a 25% chance that trust stagnates near current levels if deepfake problems and economic fears continue unabated, and a 10% chance that trust collapses below 25% should a major AI-related crisis arise, such as a large-scale cyberattack or systemic shock.
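
To make the scenario weighting concrete, here is a minimal sketch of a probability-weighted point estimate. The probabilities come from the forecast, but the representative trust level paired with each scenario is an assumed midpoint, not a figure stated above:

# Scenario probabilities are from the forecast; the trust levels paired
# with them are assumed midpoints chosen for illustration only.
scenarios = [
    ("normalization, 40-45% trust", 0.65, 42.5),
    ("stagnation near current levels", 0.25, 32.0),
    ("crisis, trust below 25%", 0.10, 22.0),
]

# Sanity check: the three scenarios should cover the full probability mass.
assert abs(sum(p for _, p, _ in scenarios) - 1.0) < 1e-9

expected = sum(p * level for _, p, level in scenarios)
print(f"Probability-weighted estimate: {expected:.1f}%")  # ~37.8%

Note that the headline 42% sits inside the modal scenario's range; a weighted mean across all three scenarios lands a few points lower because of the downside tails.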

In conclusion, the forecasted increase from 32% to 42% trust in AI by 2030 does not imply widespread enthusiasm but rather a cautious transition toward acceptance. This evolution marks a shift from fearing AI as an inscrutable magic trick to recognizing it as a powerful but manageable tool. As the technology matures and embeds itself in science, work, and daily life, and as society puts guardrails in place, the American public is expected to develop a more functional and stable trust in AI systems over the coming decade.
