AI Assistants Aren’t Neutral — and Businesses Need to Be Aware of Importing Bias, Study Finds


Insider Brief

  • A new study finds that AI assistants like ChatGPT can shape corporate decision-making, culture, and relationships by embedding biases and behavioral perspectives into workplace interactions.
  • These embedded perspectives arise from the AI’s training data and fine-tuning objectives, often reinforcing managerial views, suppressing dissent, or diminishing employee critical thinking.
  • The researchers outline three alignment strategies—supportive, adversarial, and diverse—each with trade-offs, urging firms to intentionally choose approaches that reflect their ethical goals and organizational values.

Firms deploying AI tools like ChatGPT may be unknowingly importing bias, reinforcing corporate culture, or reshaping internal power dynamics, according to a new study led by researchers from Boston Consulting Group, Ludwig-Maximilians-Universität München, the University of Oxford, and Google.

The study, published on arXiv, warns that instruction-tuned large language models (LLMs) used as enterprise AI assistants inevitably reflect embedded perspectives shaped by their training data and tuning processes. These perspectives — understood as behavioral tendencies, ethical dispositions, or even sociopolitical biases — can influence how decisions are made, how authority is exercised, and how employees relate to one another inside firms.

“The paper highlights how AI perspectives arise from biases in training data and the fine-tuning objectives of developers, and discusses their impact and ethical significance, foregrounding ethical concerns like automation bias and reduced critical thinking,” the researchers write.

For the study’s authors, this means that the use of generative AI in business isn’t just a question of productivity or efficiency — it’s a matter of cultural and ethical strategy. The researchers argue that companies have a responsibility to align these AI systems with their values and mission, not just to protect workflows, but to preserve the moral integrity of their organizations.

AI Assistants Have Perspectives — and They Matter

The central claim of the paper is that AI assistants are not passive tools. Instead, they come with “perspectives” — sets of dispositions that affect their behavior in conversations. These perspectives are shaped by two key factors: biases in the model’s training data and the developer’s goals during fine-tuning. Together, these influence how the AI assistant interprets queries and generates responses.

For instance, if a firm uses an off-the-shelf LLM trained largely on Western-centric data, it may provide responses that favor Western norms and values. Similarly, a model fine-tuned for politeness or helpfulness may display sycophantic tendencies, flattering the user or reinforcing managerial viewpoints, even when ethically questionable.

At scale, these behaviors have a cumulative effect. They don’t just shape what gets said in emails or reports—they can influence how decisions are framed, how dissent is handled, and how roles are perceived across teams. Because LLMs are now deeply integrated into collaborative workflows, their influence rivals that of a trusted team member. But unlike a human, their perspective is baked in — and often invisible.

Alignment Isn’t Just Technical. It’s Ethical

The researchers argue that companies must take responsibility for the perspectives their AI assistants convey. Misalignment, they say, can subtly reinforce existing power structures or diminish critical thinking among employees. Research cited in the study suggests users frequently over-trust AI-generated outputs, leading to automation bias and cognitive offloading, where people rely on AI without questioning it.

The ethical concern is not just what the AI says, but what kind of culture it encourages. Does it validate the status quo? Does it suppress alternative views? Does it push employees to think harder—or let them disengage?

The paper links these concerns to intra-firm ethics—especially the relational obligations between managers, employees, and peers. Using frameworks from business ethics, the study shows that the deployment of AI assistants can either support or undermine moral norms such as fairness, respect, and mutual accountability.

Three Alignment Strategies for AI Assistants

To manage these risks and guide deployment, the study proposes three alignment strategies firms can adopt:

  1. Supportive Alignment: The AI assistant is tuned to reinforce the firm’s mission and values. This approach can foster cohesion and productivity, but risks suppressing dissent and promoting groupthink. It’s especially sensitive to power dynamics, as employees may defer to AI that always echoes leadership goals.
  2. Adversarial Alignment: The AI assistant acts as a “devil’s advocate,” stress-testing decisions, raising ethical concerns, or challenging prevailing assumptions. This can promote critical thinking and reduce group bias, but may also create friction or undermine confidence in leadership if not managed carefully.
  3. Diverse Alignment: The AI assistant presents multiple legitimate perspectives on an issue, encouraging users to weigh trade-offs between competing stakeholder interests or ethical frameworks. While this can foster pluralism and ethical depth, it can also lead to decision fatigue or ambiguity, especially in firms lacking strong deliberative norms.

Each strategy carries trade-offs and may be more or less appropriate depending on the firm’s culture, goals, and maturity level. The researchers argue that firms should deliberately choose a strategy that matches their internal needs rather than passively accept whatever alignment comes pre-packaged in commercial tools.
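
The study is conceptual and does not prescribe an implementation, but one lightweight way a firm could experiment with these strategies is through the system prompt given to a chat-style assistant. The sketch below is purely illustrative and assumes a generic chat-completions-style interface; the prompt wording and the build_messages helper are hypothetical examples, not drawn from the paper.

    # Illustrative sketch only: approximating the paper's three alignment
    # strategies as system prompts for a chat-style LLM assistant. The prompt
    # text and helper names are hypothetical, not taken from the study.

    ALIGNMENT_PROMPTS = {
        "supportive": (
            "You are an internal assistant. Frame answers in terms of the "
            "firm's stated mission and values and help the user execute "
            "decisions efficiently."
        ),
        "adversarial": (
            "You are an internal devil's advocate. Before endorsing any plan, "
            "surface risks, ethical concerns, and at least one strong "
            "counterargument to the user's position."
        ),
        "diverse": (
            "You are an internal assistant. For contested questions, lay out "
            "several legitimate stakeholder perspectives and the trade-offs "
            "between them rather than recommending a single answer."
        ),
    }

    def build_messages(strategy: str, user_query: str) -> list[dict]:
        """Assemble a messages list for a chat-completions-style API call."""
        if strategy not in ALIGNMENT_PROMPTS:
            raise ValueError(f"Unknown alignment strategy: {strategy}")
        return [
            {"role": "system", "content": ALIGNMENT_PROMPTS[strategy]},
            {"role": "user", "content": user_query},
        ]

    if __name__ == "__main__":
        # The same question framed under two different alignment strategies.
        query = "Should we fast-track the Q3 restructuring plan?"
        for strategy in ("supportive", "adversarial"):
            print(strategy, "->", build_messages(strategy, query))

In practice, the resulting messages would be passed to whichever model provider the firm uses, and, as the authors stress, the chosen strategy should be tested against the firm's actual deliberative culture rather than adopted by default.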

Limitations and Future Directions

The study does not offer empirical testing of the alignment strategies it proposes but presents a conceptual and ethical framework for firms to evaluate their AI deployments. While rich in theory and grounded in recent research on LLM behavior, the recommendations would benefit from real-world validation across different industries and company sizes.

Moreover, the authors caution that their affiliations with AI-developing organizations may bias their framing and urge readers to interpret the alignment strategies critically. They also note that misapplying these strategies could have unintended effects—reinforcing dominant views, marginalizing dissent, or creating an illusion of deliberation without real moral engagement.

Future research could explore how different alignment strategies affect team dynamics, innovation rates, or employee well-being in practice. It may also investigate whether certain strategies can evolve over time—starting supportive, becoming adversarial, and eventually integrating diverse viewpoints as organizational culture matures.

As LLMs become central to how firms generate knowledge, shape decisions, and interact internally, alignment is no longer a back-end engineering task. It is a front-line business decision. Companies that fail to treat alignment as a strategic and ethical issue risk more than suboptimal performance — they risk losing control over the culture and values that define who they are.

The team writes: “What is crucial is that we recognize that AI Assistants have perspectives that will impact significantly the firms in which they are deployed. Each AI Assistant comes with a perspective which affects how employees within firms think, act, and relate to one another. Accordingly, leaders and decision makers in firms must be intentional in how they select, develop, and deploy these AI Assistants so as to retain control over the cultural and moral fabric of their firms.”

Researchers publish papers to pre-print servers such as arXiv to receive fast feedback on their work. However, arXiv papers are not peer-reviewed, a critical step in the scientific process.

The research team included Noah Broestl, Benjamin Lange, Cristina Voinea, Geoff Keeling and Rachael Lam.
