AI Struggles with Differentiating Right from Wrong, One Tech Expert Claims

The rapid evolution of artificial intelligence (AI) presents us with profound moral dilemmas, particularly around its inability to discern right from wrong. In just the past year, AI technology has advanced significantly, yet it grapples with a critical issue that could lead to severe real-world impacts: AI bias. This bias arises from the very human trait of holding conscious or subconscious prejudices based on race, gender, or socio-economic status. Because humans build AI models, those models can inadvertently mirror and perpetuate societal biases, a point emphasized by IBM.

This phenomenon often originates from the data used to train AI systems. These models employ complex algorithms to analyze vast data sets, learning to recognize patterns that they then apply to new data. However, if the training data is already biased, the AI is likely to learn and replicate these biases.

“The core data on which it is trained is effectively the personality of that AI. If you pick the wrong dataset, you are, by design, creating a biased system,” Theodore Omtzigt, chief technology officer at Lemurian Labs, told CNBC Make It last week.

Consider, for example, an AI system used for screening job applications. If a company has historically favoured male over female candidates and that hiring history is used to train the AI, the system may unduly favour male applicants, perpetuating the existing bias. In response to this challenge, tech companies are actively working to reduce bias in AI models.
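A minimal sketch of how that can happen, using invented data and a simple scikit-learn classifier (the features, figures, and model choice are assumptions for illustration, not any company’s actual screening system):

```python
# Hypothetical sketch: a resume-screening model trained on historically
# biased hiring decisions learns to reproduce that bias.
from sklearn.linear_model import LogisticRegression

# Each record: [years_experience, is_male]; label: 1 = hired, 0 = rejected.
# In this made-up history, equally experienced women were rejected more often.
X = [
    [5, 1], [6, 1], [4, 1], [7, 1],   # male candidates, mostly hired
    [5, 0], [6, 0], [4, 0], [7, 0],   # female candidates, mostly rejected
]
y = [1, 1, 1, 1, 0, 1, 0, 0]

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only by gender:
print(model.predict_proba([[6, 1]])[0][1])  # predicted hire probability, male
print(model.predict_proba([[6, 0]])[0][1])  # predicted hire probability, female
# The gap between the two probabilities is the bias the model has learned.
```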

Say you’re training an AI chatbot on dataset “A,” which is biased in one particular way, and dataset “B,” which is biased in a different way. Merging datasets with separate biases doesn’t mean those biases cancel each other out, Omtzigt continued; combining them hasn’t taken away the bias, it has just given you an AI system with two biases.
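A small illustration of that point with invented numbers (the groups and rates below are assumptions chosen only to show the arithmetic):

```python
# Hypothetical positive-outcome rates per group in two training datasets.
# Dataset A is skewed by gender and neutral on age; dataset B is skewed by
# age and neutral on gender.
dataset_a = {("male", "young"): 0.80, ("male", "old"): 0.80,
             ("female", "young"): 0.40, ("female", "old"): 0.40}

dataset_b = {("male", "young"): 0.70, ("male", "old"): 0.30,
             ("female", "young"): 0.70, ("female", "old"): 0.30}

# Merging the two datasets with equal weight averages the rates per group.
merged = {group: (dataset_a[group] + dataset_b[group]) / 2 for group in dataset_a}

for group, rate in merged.items():
    print(group, round(rate, 2))
# Men still score higher than women, and the young still score higher than
# the old: the merged data now encodes both biases at once, not neither.
```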

OpenAI, for instance, trains its models to predict the next word in a sentence using extensive internet-based datasets. These datasets, however, can reflect existing biases found in the billions of sentences they comprise. To counter this, OpenAI employs human reviewers who adhere to specific guidelines to refine the models, assessing and adjusting the AI’s responses to various inputs.

Google, too, is taking steps to improve its Bard AI chatbot, relying on its established “AI Principles” and human feedback and evaluations. These efforts underscore the industry’s commitment to mitigating AI bias, ensuring that AI technology advances in a way that is both ethical and beneficial to society.

Omtzigt emphasizes that every dataset has its limitations and inherent biases. He advocates for the necessity of having individuals or systems in place to scrutinize the responses of AI models for potential biases and to evaluate whether these outputs are immoral, unethical, or fraudulent. He believes that when AI is provided with feedback on these aspects, it can leverage this information to enhance and refine its future responses.
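A minimal sketch of that review loop, assuming a stand-in keyword check in place of a real human reviewer and a simple log as the feedback store (both are illustrative assumptions, not any vendor’s actual pipeline):

```python
# Sketch: flag model responses that look problematic and keep them as
# feedback the model can later be tuned against.

def review_response(response: str) -> bool:
    """Stand-in for a human reviewer: flags responses containing obviously
    loaded phrasing. A real review would be far richer than a keyword match."""
    loaded_phrases = ("men are better at", "people like that")
    return any(phrase in response.lower() for phrase in loaded_phrases)

feedback_log = []  # flagged (prompt, response) pairs kept for later refinement

def collect_feedback(prompt: str, response: str) -> None:
    if review_response(response):
        # Flagged examples become the signal used to refine future responses.
        feedback_log.append({"prompt": prompt, "response": response})

collect_feedback("Who should we hire?", "Men are better at this job.")
print(len(feedback_log))  # 1 -- one problematic response captured for review
```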

From Omtzigt’s perspective, the fundamental issue lies in AI’s inability to discern right from wrong. He stresses the importance of critical thinking and skepticism on the part of those receiving information from AI. According to him, it’s crucial to question the veracity of the information provided by AI, asking oneself, ‘Is this true?’

Featured image: Left, Theodore Omtzigt. Credit: SC17