NC State Researchers Expose AI Vision Weaknesses With RisingAttacK Tool

Insider Brief

  • North Carolina State University researchers have developed RisingAttacK, a new adversarial technique that reveals vulnerabilities in leading AI vision systems, with funding from the National Science Foundation and the Army Research Office.
  • RisingAttacK subtly alters key visual features in images to fool computer vision systems without changing their appearance to humans, successfully deceiving top models including ResNet-50, DenseNet-121, ViT-B, and DeiT-B.
  • While currently limited to visual models, the technique may impact broader AI domains; it has been released publicly to help researchers test and secure AI systems.

Researchers at North Carolina State University have developed a new technique for deceiving AI vision systems, revealing vulnerabilities that could affect the safety of self-driving cars, medical diagnostics, and security applications. The method, called RisingAttacK and funded by grants from the National Science Foundation and the Army Research Office, was tested against the four most widely used computer vision models in artificial intelligence, according to the university.

The study focused on “adversarial attacks,” which involve subtly altering an image so that it fools an AI system while appearing unchanged to a human. For example, a stop sign could be digitally tweaked in a way that makes an autonomous vehicle’s AI fail to recognize it, despite the sign looking normal to a human driver. The researchers pointed out that these types of attacks pose significant risks for systems that depend on accurate visual interpretation.

“We wanted to find an effective way of hacking AI vision systems because these vision systems are often used in contexts that can affect human health and safety – from autonomous vehicles to health technologies to security applications,” noted Tianfu Wu, co-corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “That means it is very important for these AI systems to be secure. Identifying vulnerabilities is an important step in making these systems secure, since you must identify a vulnerability in order to defend against it.”

How Does It Work?

RisingAttacK works by analyzing all the visual features in an image and determining which ones are most important to the AI’s recognition process. The system then calculates how sensitive the AI model is to changes in these key features. Using this information, it applies extremely small and targeted changes that go unnoticed by humans but drastically alter how the AI interprets the image.

“For example,” said Wu, “if the goal of the attack is to stop the AI from identifying a car, what features in the image are most important for the AI to be able to identify a car in the image?”
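
To make the general idea concrete, the sketch below shows one way such a sensitivity analysis could look in PyTorch: build a small “Jacobian” of class scores with respect to the image pixels, use its leading right singular vector as the direction the model is most sensitive to, and take tiny steps along it until the target class drops out of the prediction. This is not the team’s released RisingAttacK implementation; the model (torchvision’s pretrained ResNet-50 as a stand-in), the class-selection rule, the step size, and the stopping condition are all illustrative assumptions.

import torch
import torchvision.models as models

# Assumed stand-in model: torchvision's pretrained ResNet-50 (one of the four systems tested).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def attack_step(x, target_class, top_k=5, step_size=0.5):
    """One iteration: build a small 'adversarial Jacobian' of class scores with
    respect to the pixels, take its SVD, and step along the leading right
    singular vector in the sign that pushes the target class's score down."""
    x = x.clone().requires_grad_(True)
    logits = model(x.unsqueeze(0))[0]
    classes = logits.topk(top_k).indices.tolist()
    if target_class not in classes:
        classes.append(target_class)
    rows = [torch.autograd.grad(logits[c], x, retain_graph=True)[0].flatten()
            for c in classes]
    jacobian = torch.stack(rows)                       # shape: (num_classes, num_pixels)
    _, _, vh = torch.linalg.svd(jacobian, full_matrices=False)
    direction = vh[0]                                  # most sensitive pixel-space direction
    target_grad = rows[classes.index(target_class)]
    if torch.dot(direction, target_grad) > 0:          # pick the sign that lowers the target score
        direction = -direction
    return (x + step_size * direction.view_as(x)).detach()

def suppress_class(image, target_class, steps=50):
    """Apply small steps until the model no longer ranks target_class first.
    Expects a preprocessed (3, 224, 224) tensor. Step size and iteration budget
    are illustrative, and no pixel-range clamping or perceptual budget is
    enforced here -- a real attack would add both."""
    x = image
    for _ in range(steps):
        x = attack_step(x, target_class)
        with torch.no_grad():
            if model(x.unsqueeze(0))[0].argmax().item() != target_class:
                break
    return x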

The researchers tested the method against four major AI vision systems: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. The attack was successful across all models, demonstrating that even highly refined systems can be manipulated with small, precise adjustments. In tests, RisingAttacK was able to prevent AI models from recognizing common visual targets like cars, pedestrians, bicycles, and traffic signs.
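
A cross-model check of this kind could be run with a small harness along the lines below. The torchvision and timm checkpoints named here are assumed stand-ins for the four model families (the article does not specify exact weights), and per-model preprocessing differences are glossed over.

# Hypothetical cross-model check, not taken from the paper or its repository.
import torch
import timm                                  # assumed source for the DeiT-B checkpoint
import torchvision.models as tvm

victims = {
    "ResNet-50":    tvm.resnet50(weights=tvm.ResNet50_Weights.DEFAULT),
    "DenseNet-121": tvm.densenet121(weights=tvm.DenseNet121_Weights.DEFAULT),
    "ViT-B":        tvm.vit_b_16(weights=tvm.ViT_B_16_Weights.DEFAULT),
    "DeiT-B":       timm.create_model("deit_base_patch16_224", pretrained=True),
}

@torch.no_grad()
def fooled(clean_img, adv_img, true_class):
    """For each model, report whether the clean image is still recognized as
    true_class while the adversarial image is not (i.e., the attack succeeded)."""
    results = {}
    for name, m in victims.items():
        m.eval()
        clean_pred = m(clean_img.unsqueeze(0)).argmax(dim=1).item()
        adv_pred = m(adv_img.unsqueeze(0)).argmax(dim=1).item()
        results[name] = clean_pred == true_class and adv_pred != true_class
    return results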

Why It Matters

The implications extend beyond image recognition. While this research focused on AI vision, the team is now investigating whether similar methods can disrupt other types of AI, such as large language models. This could broaden the scope of potential vulnerabilities, highlighting the need for cross-domain safeguards.

Despite its effectiveness, RisingAttacK is intended as a tool for strengthening AI security. By exposing weak points in leading systems, the researchers aim to inform the development of more robust defenses.

RisingAttacK is currently limited to systems with visual input and has not yet been validated on audio, sensor-based, or multimodal models. “While we demonstrated RisingAttacK’s ability to manipulate vision models, we are now in the process of determining how effective the technique is at attacking other AI systems, such as large language models,” Wu said.

For More Information

The paper, “Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian,” will be presented July 15 at the International Conference on Machine Learning, being held in Vancouver, Canada. Thomas Paniagua, a recent Ph.D. graduate of NC State, is co-corresponding author of the paper, which was co-authored by Chinmay Savadikar, a Ph.D. student at NC State.

This work was done with support from the National Science Foundation under grants 1909644, 2024688 and 2013451; and from the Army Research Office under grants W911NF1810295 and W911NF2210010.

The research team has made RisingAttacK publicly available, so that the research community can use it to test neural networks for vulnerabilities. The program can be found here: https://github.com/ivmcl/ordered-topk-attack.
