Dr. Petar Tsankov, a leading researcher and entrepreneur in AI, has a clear message: it’s time to focus on the real challenges of artificial intelligence (AI). Speaking at TEDxAUBG, Tsankov — who pioneered verification systems for neural networks and co-founded ChainSecurity — shared his vision for a safer future with AI, and why worrying about the right things is critical.
“Many of you already worry about AI safety,” Tsankov began. “But we’ll see that it’s really critical that you worry about the right things.” He believes that while existential fears surrounding AI, such as machines taking over or causing massive job losses, have circulated since the 1950s, they are not the most immediate issues.
Tsankov argues that today’s AI is not yet capable of artificial general intelligence (AGI), a hypothetical system that would match or surpass human intelligence across a broad range of tasks.
“We just aren’t quite there yet,” he said. “I do believe that the fears we have about AGI overnight are unfounded, and these are the wrong things to focus on right now.”
The AI we currently use, according to Tsankov, is deeply embedded in our daily lives.
“It’s in cars, hospitals, and diagnostics,” he explained. But he identified three key challenges with today’s AI that deserve attention: bias, misinformation, and reliability. “The real world is biased, so the data is biased, and all these algorithms are learning these biases,” Tsankov continued, citing examples such as Amazon’s failed attempt to automate hiring, which was abandoned due to gender bias.
One of the more concerning issues is the spread of misinformation. Tsankov shared a recent example from Slovakia, where AI-generated content spread false information, impacting national elections.
“This has to be a wake-up call for everybody around the world,” he warned.
Tsankov, however, remains optimistic about AI’s future: although these challenges are difficult, he says, they are solvable. He urged the public not only to be concerned about the current issues but to actively speak out and make their voices heard, underlining that collective action is essential to ensuring AI safety.