Poor data quality blocks AI success

The Ataccama Data Trust Report 2025 identifies poor data quality as a critical obstacle to AI adoption.

Despite AI's transformative potential, its success depends on trusted, reliable data: 68% of Chief Data Officers (CDOs) cite data quality as their top challenge, and only 33% of organisations are making meaningful progress in AI adoption.

Conducted by Hanover Research with insights from 300 senior data leaders, the report underscores the urgency of addressing systemic issues like fragmented systems and governance gaps. Without resolution, businesses risk stalled innovation, wasted resources, and diminished returns on AI investments.

Key findings from the report

41% of organisations struggle to maintain consistent data quality, directly hindering AI outcomes.

Knowledge gaps around data trust and governance slow progress; education is critical to closing these gaps.

Trusted data drives AI success: High-quality data accelerates decision-making, enhances customer experiences, and delivers competitive advantages.

Policy implications: Aligning data trust with the UK’s AI leadership goals

As the UK accelerates its AI strategy with the newly unveiled AI Opportunities Action Plan, the report highlights a foundational gap organisations must address: data trust. When data is accurate, reliable, and trustworthy, users can be confident in making informed decisions that drive improved outcomes and reduce risk.

National standards for data quality: The report emphasises the need for unified benchmarks to guide businesses in building AI-ready ecosystems. Creating a National Data Library is a core goal of the UK's plan for homegrown AI, and its regulatory principles of safety, transparency, and fairness could be operationalised through national data governance benchmarks. These standards would give businesses clear compliance guidelines while supporting the UK's pro-innovation regulatory goals.

Infrastructure modernisation: Legacy systems remain a bottleneck to AI scalability, unable to handle real-time, high-volume data demands. Backed by the government's commitment to sufficient, secure, and sustainable infrastructure, the UK's investment in supercomputing and AI growth zones can enable continuous data quality monitoring and governance, creating scalable, efficient systems suited to advanced AI technologies.

Data trust in AI regulation: Embedding governance and automated validation practices into data workflows is crucial for compliance, reliability, and long-term growth. Aligning the UK’s ethical AI initiatives with data trust requirements would ensure AI systems both operate reliably and adhere to safety and transparency principles.
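To make the idea of automated validation concrete, the minimal Python sketch below shows one way a data quality gate might be embedded in a pipeline step. This is an illustrative example only; the column names, thresholds, and file path are hypothetical and are not drawn from the report or from Ataccama's products.

```python
# Illustrative sketch: a simple data quality gate in a pipeline step.
# The dataset, column names, and thresholds are hypothetical.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality issues found in the frame."""
    issues = []
    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values")
    null_rate = df["customer_email"].isna().mean()
    if null_rate > 0.05:  # tolerate at most 5% missing emails
        issues.append(f"customer_email null rate {null_rate:.1%} exceeds 5%")
    if (df["order_total"] < 0).any():
        issues.append("negative order_total values")
    return issues

df = pd.read_csv("orders.csv")
problems = validate(df)
if problems:
    # Fail the pipeline step so downstream AI and analytics never see bad data.
    raise ValueError("Data quality gate failed: " + "; ".join(problems))
```

Checks like these, run on every load rather than ad hoc, are one way the governance and validation practices described above can be made routine and auditable.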

“The report makes one thing clear: enterprise AI initiatives rely on a foundation of trusted data,” said Jay Limburn, Chief Product Officer at Ataccama. “Without addressing systemic data quality challenges, organisations risk stalling progress. The UK’s approach to AI regulation shows how aligning data trust principles with national standards and infrastructure modernisation can deliver tangible results.”

Looking ahead: Data trust as the foundation of global AI leadership

The UK’s regulatory progress presents an opportunity to lead in AI innovation. However, even the most ambitious policies risk falling short without prioritising data trust. The Ataccama Data Trust Report 2025 offers a roadmap to embed data trust into the UK’s AI agenda, ensuring ethical, effective initiatives that drive measurable outcomes, including increased AI adoption, enhanced compliance, and competitive advantages.
