New Report from The Alan Turing Institute Provides Guidance for Security Services on Selecting Trustworthy Industry AI Suppliers

The Alan Turing Institute, a leader in AI research and application, has released a report addressing the significant potential of AI for public sector use, particularly in defence and national security. The report outlines why national security bodies are drawn to AI, including the need to process vast amounts of data and potential applications in areas like cybersecurity and content monitoring. However, developing state-of-the-art AI systems in-house is often prohibitively expensive: the UK government’s Frontier AI Taskforce, for example, would reportedly need a budget roughly ten times its initial £100 million funding to match the likes of Google’s DeepMind.

Recognizing these resource constraints, national security agencies are increasingly looking to industry partnerships. Leaders such as GCHQ director Jeremy Fleming and MI6 chief Richard Moore have acknowledged the necessity of leaning on industry for technological advancements. Yet this reliance raises concerns about the trustworthiness and effectiveness of industry-designed AI systems, especially in high-stakes security contexts. Questions about vulnerability to attack, bias, privacy infringement, and the “black-box” nature of these systems are paramount.

The report also touches on the controversies that have arisen from public-private AI partnerships, such as exaggerated product capabilities or misuse of sensitive data. These concerns underscore the importance of transparency and rigorous evaluation in such partnerships.

“If national security bodies can identify potential issues with industry-designed AI before it is too late, they will be well-positioned to harness privately developed AI systems for public good — whether through more efficient administration that saves taxpayers’ money, autonomous defence systems that can better protect critical national infrastructure from cyberattacks, or better and faster predictions that identify public safety risks earlier.”

— The Alan Turing Institute, Assurance of Third-Party AI Systems for UK National Security report

To address these challenges, The Alan Turing Institute’s report proposes an assurance framework for national security bodies to assess AI industry partners. This includes using a structured “system card” to document an AI system’s ethical, legal, performance, and security aspects, demanding greater transparency in contract negotiations, and investing in internal skills for thorough evidence review and risk identification.
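The report itself specifies what its system card should contain; purely as a loose illustration of the kind of structured record involved, here is a minimal sketch in Python. The `SystemCard` class, all of its field names, and the `open_questions` helper are hypothetical assumptions for illustration, not drawn from the report.

```python
from dataclasses import dataclass, field

@dataclass
class SystemCard:
    """Hypothetical record of an industry AI system under assessment.

    Field names are illustrative assumptions only; the Turing report
    defines the actual contents of its proposed system card.
    """
    system_name: str
    supplier: str
    intended_use: str
    # Ethical aspects, e.g. bias audits carried out and their outcomes
    ethical_review: list[str] = field(default_factory=list)
    # Legal aspects, e.g. the data protection regimes the system falls under
    legal_basis: list[str] = field(default_factory=list)
    # Performance aspects, e.g. supplier-reported benchmark results
    performance_evidence: dict[str, float] = field(default_factory=dict)
    # Security aspects, e.g. robustness testing against adversarial inputs
    security_assessments: list[str] = field(default_factory=list)

    def open_questions(self) -> list[str]:
        """List undocumented aspects to raise during contract negotiation."""
        gaps = []
        if not self.ethical_review:
            gaps.append("No ethical review documented")
        if not self.legal_basis:
            gaps.append("No legal basis documented")
        if not self.performance_evidence:
            gaps.append("No performance evidence supplied")
        if not self.security_assessments:
            gaps.append("No security assessment documented")
        return gaps

# Example: a supplier has provided only performance figures so far
card = SystemCard(
    system_name="ExampleTriageClassifier",
    supplier="Acme AI Ltd",
    intended_use="Triage of network security alerts",
    performance_evidence={"accuracy": 0.94},
)
print(card.open_questions())
# ['No ethical review documented', 'No legal basis documented',
#  'No security assessment documented']
```

The value of such a record is that undocumented sections become visible negotiating items rather than silent gaps, which is the kind of transparency the report calls for in contract negotiations.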

By adopting these measures, national security bodies can better navigate the risks of industry-designed AI. This will enable them to utilize privately developed AI systems effectively and safely, potentially leading to improved public safety, efficient administration, and enhanced national security.
