Speaking at the 2025 World Economic Forum in Davos, Google DeepMind’s Chief Operating Officer, Lila Ibrahim, offered insight into AI’s future under the incoming Trump administration, the evolving workforce, and the next phase of DeepMind’s AI model, Gemini 2.0.
AI’s rapid development comes with both excitement and responsibility. Ibrahim remains optimistic about its potential despite political shifts.
“I have always been very optimistic about AI, and I think those of us in the field are,” she stated, acknowledging the regulatory changes ahead. “As with any administration change, we are happy to work with the incoming administration and just need the time to work through all the changes that are coming.”
The conversation turned to Gemini 2.0, Google DeepMind’s latest multimodal AI model, built from the ground up to integrate different forms of human communication.
“2024 was an exciting year for us. Midway, we introduced a platform that started to introduce features and agents of how we can make AI more useful — a universal AI assistant for people in everyday life,” she explained. The December launch of Gemini 2.0 was a step toward “seamless interaction with AI while still being grounded in actuality.”
A critical aspect of AI’s future is trust — in both its accuracy and its regulation. Ibrahim reaffirmed DeepMind’s commitment to responsible development, stating: “One of the important things we have seen is it is not always accurate. Having people understand that, we still have the human in the loop to ensure that you are doing your fact-checking, etc.” Ensuring AI aligns with cultural contexts across different regions is another priority, one that requires collaboration with governments, academics, and local ecosystems.
As AI continues to reshape industries, concerns over job displacement are rising. Ibrahim, however, sees opportunity in upskilling.
“In my 30 years in tech, I have seen a lot of evolution in the types of jobs people have. What’s important is how we think about reskilling and upskilling. I encourage everyone to start experimenting now with AI,” she urged. Just as digital literacy became essential in the early 2000s, she believes AI literacy will be critical for the workforce of the future.
The next iterations of AI will be increasingly personalized, with models capable of reasoning and executing tasks subject to user approval. But regulation must keep pace with innovation.
“How do we ensure this technology is regulated in the right-sized way that allows us to have the opportunity while mitigating some of the risks?” she asked. The answer, she believes, lies in global collaboration, responsible development, and continued education.
AI is not slowing down, and neither is DeepMind’s pursuit of building it for the benefit of humanity.