AI Risks Surge as Quantum Threats Lurk, Warns 2025 Thales Data Threat Report


Insider Brief

  • The 2025 Thales Data Threat Report finds that rapid generative AI adoption is outpacing security safeguards, with data integrity and trust emerging as top concerns.
  • Nearly 70% of respondents cited the fast-moving AI ecosystem as their greatest security risk, but only 14% rated AI-specific tools as effective.
  • While not yet imminent, quantum computing threats are advancing, with a 5,000-qubit system used to break a 50-bit RSA key and post-quantum cryptography preparations now underway.

The 2025 Thales Data Threat Report finds that the rise of generative AI is reshaping enterprise security priorities, with data integrity and trust risks escalating alongside rapid adoption. Quantum computing, while not yet disruptive, is inching closer to compromising current encryption standards.

Nearly 70% of surveyed organizations named the fast-changing generative AI ecosystem as their top security concern, yet most report little improvement in protective controls or policy enforcement.

One-third of enterprises are now in what the report calls the “integration” or “transformation” stages of adopting generative AI. That shift is fueling pressure to deploy quickly — even if organizations aren’t ready. The study found no clear improvement in compliance, data classification, or encryption among those already integrating AI, suggesting that adoption is outpacing risk management.

Integrity, Trust, and the Limits of Current Tools

Security threats tied to generative AI are growing more complex. Large language models are vulnerable to adversarial inputs, data poisoning, and bias manipulation. Unlike breaches that target availability or confidentiality, these risks threaten the integrity of data and the trustworthiness of AI-generated output. According to the report, these types of attacks were named as the second and third greatest concerns after the AI ecosystem’s pace of change.

Despite the urgency, tools designed specifically for AI security remain poorly rated. While GenAI security ranked second in spending priority — just behind cloud security — only 14% of respondents rated these tools as effective. Most organizations still rely on fragmented systems: two-thirds use five or more data classification tools, and more than half operate multiple encryption key managers, complicating enforcement and raising the risk of gaps.

Many organizations are also unclear about how their data is used by AI systems. The popularity of SaaS platforms embedding generative AI features adds opacity to data flows and provenance. This raises compliance and privacy issues, especially when sensitive data ends up in public large language models without oversight. The report warns that AI agents drawing on improperly protected enterprise data could introduce brand risk, regulatory exposure, or systemic misinformation.

Quantum Threats Edge Closer to Reality

AI security concerns are surfacing just as the industry confronts longer-term threats from quantum computing.

While quantum threats are not yet imminent, the report notes a major shift in 2024, when a 5,000-qubit quantum computer was used to break a 50-bit RSA key. Classical systems can factor keys of that size trivially, but the demonstration highlights the trajectory toward “Q-Day,” the moment quantum systems become capable of breaking widely used encryption standards.
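
To illustrate why a 50-bit key carries no practical security, the short Python sketch below builds a toy 50-bit RSA-style modulus and factors it classically with sympy in a fraction of a second. The primes are randomly generated for illustration and are not drawn from the report.

    # A toy demonstration, not from the Thales report: factor a ~50-bit RSA-style
    # modulus classically. sympy's factorint handles numbers this small almost
    # instantly, which is why 50-bit keys offer no practical protection.
    from sympy import factorint, randprime

    # Build a ~50-bit modulus from two random ~25-bit primes (a toy key).
    p = int(randprime(2**24, 2**25))
    q = int(randprime(2**24, 2**25))
    while q == p:
        q = int(randprime(2**24, 2**25))
    n = p * q
    print(f"toy modulus: {n} ({n.bit_length()} bits)")

    # Recover the prime factors, i.e. everything needed to rebuild the private key.
    factors = sorted(factorint(n))
    print("recovered factors:", factors)
    assert factors == sorted([p, q])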

The study, sponsored by Thales and based on responses from more than 3,100 IT and security professionals across 20 countries, finds that more than 60% of respondents are concerned about future decryption of today’s data, secure key distribution, and encryption algorithm compromise. Each issue was flagged as a top-tier concern in ranked choice responses.

Post-Quantum Planning Faces Infrastructure Hurdles

In response, enterprises are prototyping or evaluating post-quantum cryptographic (PQC) solutions, designed to resist quantum-enabled decryption. Still, only about a third of organizations are confident in relying on cloud or telecom providers for this transition, indicating a preference for direct oversight of their cryptographic infrastructure.

The Thales report also cites technical and organizational hurdles in shifting to PQC. Transitioning away from RSA and ECC standards will require sweeping changes across software libraries, hardware modules, and network protocols. Although the U.S. National Institute of Standards and Technology released its PQC transition guide in 2024, implementation at scale remains slow.
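
To give a sense of what that migration looks like at the code level, the sketch below runs a post-quantum key encapsulation assuming the open-source liboqs-python bindings (imported as oqs); the package and the algorithm identifier are assumptions about a particular library build, not details from the Thales report.

    # A hedged sketch of post-quantum key encapsulation, assuming the liboqs-python
    # bindings (imported as "oqs"). The algorithm identifier below depends on the
    # installed liboqs build; older builds expose the same scheme as "Kyber768".
    import oqs

    ALG = "ML-KEM-768"  # lattice-based KEM standardized by NIST as FIPS 203

    with oqs.KeyEncapsulation(ALG) as receiver:
        # The receiver generates a PQC key pair and shares only the public key.
        public_key = receiver.generate_keypair()

        with oqs.KeyEncapsulation(ALG) as sender:
            # The sender encapsulates a fresh shared secret against that public key.
            ciphertext, secret_at_sender = sender.encap_secret(public_key)

        # The receiver decapsulates the ciphertext to recover the same secret,
        # which can then key a symmetric cipher for the data in transit.
        secret_at_receiver = receiver.decap_secret(ciphertext)

    assert secret_at_sender == secret_at_receiver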

Experts cited in the report suggest that enterprises must prepare now for dual pressures: defending against immediate AI risks while laying the foundation for a secure, post-quantum future. That means unifying fragmented systems, investing in trustworthy data pipelines, and adopting crypto-agile infrastructures that can adapt as threats evolve.
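
The crypto-agile idea can be made concrete with a small sketch: call sites ask a registry for a named key-exchange policy rather than hard-coding RSA or ECC, so a hybrid or pure PQC suite can be rolled out by changing configuration. The suite names and placeholder functions below are hypothetical illustrations, not anything specified in the report.

    # A minimal crypto-agility sketch: algorithms are selected by policy name from a
    # registry, so swapping classical key exchange for a hybrid classical + PQC suite
    # is a configuration change rather than an edit to every call site.
    # The suites and placeholder bodies below are hypothetical illustrations.
    import os
    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class KeyExchangeSuite:
        label: str
        establish: Callable[[], bytes]  # returns a session secret

    def _classical_placeholder() -> bytes:
        # Stand-in for an existing ECDH exchange; returns random bytes for the demo.
        return os.urandom(32)

    def _hybrid_pqc_placeholder() -> bytes:
        # Stand-in for a combined ECDH + ML-KEM exchange; random bytes for the demo.
        return os.urandom(32)

    REGISTRY = {
        "classical": KeyExchangeSuite("ecdh-p256", _classical_placeholder),
        "hybrid-pqc": KeyExchangeSuite("ecdh-p256+ml-kem-768", _hybrid_pqc_placeholder),
    }

    def establish_session_key(policy: str = "classical") -> bytes:
        # Moving a fleet to "hybrid-pqc" means changing this policy value only.
        return REGISTRY[policy].establish()

    print(len(establish_session_key("hybrid-pqc")), "byte session secret established")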
