The Alarming Ethical Blindspot of Deep Learning AI

As artificial intelligence (AI) becomes increasingly woven into our daily lives, a harsh reality is coming into focus: we don’t fully understand how the “amazing technology” of deep learning AI actually works. As Eleanor Manley, a co-founder of Metta Space, ML engineer, and AI director, put it in a recent TEDx talk: “It’s like the AI world is celebrating after finding a golden ticket that they can reap huge rewards from, but they don’t know where they found this ticket in the first place.”

This lack of interpretability, the “black box” nature of deep learning models, is more than a technical curiosity. It carries profound ethical implications that we can no longer ignore. Manley highlighted some of the concerning issues stemming from this inscrutability:

“From AI hallucinations like when you ask a chatbot a question and it comes up with completely the wrong answer, to copyright issues that are especially harmful to the creative world, to the perpetuation of negative bias be it against women, people of color, or other marginalized groups,” said Manley.

The biases and stereotypes ingrained in training data can become amplified and codified into the models, leading to unfair and discriminatory outputs.

“Because deep learning is built upon probability, it makes it especially susceptible to bringing up stereotypes that no longer have a place within our society,” Manley warned.

Beyond bias, the lack of understanding around how outputs are generated opens the door to inconsistencies, factual errors, and hallucinatory “confident misinformation” from language models. This undermines trust and raises concerns about the use of AI in high-stakes applications such as healthcare, finance, and law.

Manley argues that brushing these ethical risks aside in pursuit of convenience is short-sighted.

“For us to keep using AI, we have to trust it, and right now we can’t because we simply don’t understand enough about how it works,” she said.

The path forward requires a radical shift in how the AI community approaches deep learning. We need a concerted effort toward developing more interpretable and robust models, rigorous auditing for bias and errors, and a broader societal dialogue about the norms and guardrails we want for AI development.

As Manley eloquently stated: “The key to unlocking trust in the realm of deep learning is for consumers just like ourselves to play a big part in the conversation — not being in the dark about how deep learning is being built and its limitations.”

Only by bringing the ethical vulnerabilities of deep learning out of the shadows and into the light can we shape AI to truly be a beneficial “technology of tomorrow that uplifts our humanity.”

Featured image credit: TEDx

AI Insider
