
AI Soars Past Human Abilities, But Costs & Risks Soar Too, Says Stanford University Report


The Artificial Intelligence Index Report 2024 from Stanford University highlights the rapid recent progress of AI systems like ChatGPT in matching or exceeding human performance on tasks such as reading comprehension, image recognition, and advanced mathematics. However, this blazing pace of advancement is rendering many AI benchmarks and evaluation tests obsolete within just a few years of their creation.

The report notes AI is being applied to an increasing number of scientific domains, including materials discovery and weather forecasting projects at DeepMind. Overall, the AI boom enabled by neural networks and machine learning has seen explosive growth this past decade in code repositories, research publications, and notable AI model releases — especially from industry.

Academic researchers are now focused on probing the remaining weaknesses of these models through new, more challenging tests such as the GPQA benchmark for reasoning abilities. Yet the latest AI systems, like Anthropic's Claude, are already scoring near human levels on such tests within a year of their introduction.

This performance leap has come at a massive cost: training expenses for models like GPT-4 have reached into the hundreds of millions of dollars, driven by the need for ever-larger training datasets. Concerns are also rising about these models' energy use and environmental impact.

Concerns around the responsible development and use of AI are growing as well, as regulatory interest surges, especially in the US. However, a lack of standardized evaluation frameworks makes it difficult to consistently assess the potential risks posed by different AI models.

The report underscores both the historic achievements of modern AI as well as the mounting challenges around benchmarking, environmental impact, ethics, and governance that will need to be addressed.