As the possibilities of artificial intelligence (AI) unfold, the challenges extend beyond technical breakthroughs to confront our deepest questions about emotion, ethics, and coexistence. Linda G. Mills, President of New York University, offered a unique perspective, urging us to consider how AI will reshape our understanding of human experience. In her keynote address, “The Emotional Conundrum of Artificial Intelligence,” Mills questioned the boundary between artificial and emotional intelligence: “The conflict between artificial and emotional intelligence is central to the journey to coexistence.”
In Mills’ view, the stakes are high, especially as AI begins to permeate emotionally sensitive domains, from grief to therapy. She recalled a powerful moment in a Korean documentary in which a mother connects with a digital recreation of her deceased daughter, facilitated by AI and VR.
“While her virtual daughter did not always seem like her real daughter, her own feelings — her longing, her anguish — were genuine,” Mills observed, emphasizing that the technology granted “a wonderful dream” but raised questions about the nature of grieving. This story illustrates AI’s potential to reframe human emotions, yet Mills warns against substituting artificial experiences for human connection, since such encounters blur the line between solace and exploitation.
AI’s emotional impact extends to fields like psychotherapy, where it can assist therapists in real time by analyzing sessions and suggesting interventions. However, as Mills notes, true healing requires self-reflection that only the individual can bring about: “Neither a therapist nor a program can spit out an answer for a patient… the patient must arrive at a fresh understanding of the problem all on her own.” Here, Mills underscores a critical limitation of AI — it can guide but cannot replicate the journey of self-discovery that is uniquely human.
One of Mills’ most pressing concerns is that as AI takes on emotional roles, society may risk “forgetting what real love is.” In her discussion with philosopher Shannon Vallor, she recalls Vallor’s argument that “AI can devalue our humanity only because we already devalued it ourselves.” Mills echoes this sentiment, suggesting that a society that allows machines to superficially simulate emotions may weaken its own grasp on authentic human connection. Vallor’s cautionary words frame a choice: “The more we permit virtues like love to be superficially presented by AI systems, the more likely we are to lose sight of what those virtues actually mean.”
Mills also recognizes that AI’s allure lies in its potential to solve real problems — from environmental crises to public health. Yet, she cautioned: “It’s too easy to get swept away in AI’s potential or the opposite, to become totally dismissive of it, certain that it’s all hype with little substance.”
Mills reminds us that society has faced similar technology-driven debates before, whether over calculators in education or the effects of video games on mental health. The outcome, she believes, lies in our ability to “move beyond the superficial pro-and-con debate and focus on the emotions AI itself does not yet understand.”
For Mills, the path forward is one of nuanced engagement, advocating for a future in which AI supports humanity rather than diminishes it. In closing, she poses a challenge to the audience: “Who do we as humans want to be?” AI, Mills contends, should serve as a mirror, urging us to hold tight to the virtues that define us, chief among them “the relentless desire for peace” and “our incredible capacity for love.” Mills’ message is clear: in navigating this new terrain, we must ensure that AI enriches rather than erodes the qualities that make us truly human.
Featured image credit: NYU Photo Bureau, Samuel Stuart Hollenshead, via Wikipedia.