Important Takeaways:
- Yoshua Bengio, regarded as one of the godfathers of modern AI, said advances by the Chinese startup DeepSeek could be a worrying development in a field that has been dominated by the US in recent years.
- “It’s going to mean a closer race, which usually is not a good thing from the point of view of AI safety,” he said.
- “If you imagine a competition between two entities and one thinks they’re way ahead, then they can afford to be more prudent and still know that they will stay ahead,” Bengio said. “Whereas if you have a competition between two entities and they think that the other is just at the same level, then they need to accelerate. Then maybe they don’t give as much attention to safety.”
- The first full International AI Safety report has been compiled by a group of 96 experts including the Nobel prize winner Geoffrey Hinton.
- It says new AI models can generate step-by-step technical instructions for creating pathogens and toxins that surpass the capability of PhD-level experts, and notes that OpenAI has acknowledged its advanced o1 model could assist specialists in planning the production of biological threats.
- Speaking to the Guardian, Bengio said models had already emerged that could theoretically use a smartphone camera to guide people through dangerous tasks, such as attempting to build a bioweapon.
- The report says AI systems have improved significantly over the past year in their ability to spot flaws in software autonomously, without human intervention, a capability that could help hackers plan cyber-attacks.