
Why Companies Need ‘Responsible AI Officers’

AI is the future — there is no getting around that — and countless business leaders are left pondering its ethical implications and responsible deployment. In a recent Fortune panel discussion, Tracy Kerrins of Wells Fargo and KC McClure of Accenture delved into why companies need dedicated “Responsible AI Officers” to navigate this complex landscape.

Kerrins, head of technology at Wells Fargo, stressed the critical importance of data quality and governance in AI systems.

“Good data in equals good data out, bad data in equals bad data out,” she said. “We have really robust data and model risk governance, and it’s taking the time to do all of that analysis upfront — make sure you’re complying to compliance, law, rule, reg, you have ethical checks.”

She explained that Wells Fargo has instituted rigorous vetting processes before approving any AI use case for production.

“We do a lot of piloting, we do a lot of checks,” she said. “Based on the risk profile of the model, we go back and we look at the output because you have to be careful of model drift.”

KC McClure, Accenture’s CFO, echoed the need for robust ethical frameworks to govern AI adoption.

“To really unlock the value and innovation, and to do it in a way that is taking care of people as well, you need to make sure that [ethical AI] is really at the heart of everything that you do,” she stressed, before revealing: “At Accenture, we really start with responsible and ethical AI. We just named a chief responsible AI officer this week.”

This dedicated role underscores Accenture’s commitment to embedding ethical principles into all AI initiatives across its global workforce of over 740,000.

Both leaders pointed to collaborative, cross-functional approaches as key to responsible AI. As Kerrins noted: “When you look at generative AI, it’s all about data, large language models and then the computing power needed to run those models. We have set up a generative AI council that includes legal, risk, compliance — we all review use cases.”

“We have designed what we call an ‘AI Navigator’ that helps take a look at your different industries and functions. We really look at the value case, the AI architecture, and then the AI solution,” McClure added. This strategic framework guides clients in making ethical, value-driven decisions around AI adoption.

In essence, companies can no longer treat AI as merely a compliance issue or a technical implementation. Responsible AI requires C-suite vision, oversight, and cultural alignment — embodied by a Chief Responsible AI Officer. As AI accelerates, organizations need this ethical anchor to reap AI’s benefits while mitigating risks and maintaining stakeholder trust.

Featured image credit: Fortune