Humans will lose control of AI when it becomes too smart - psychic warns

Brazilian psychic Athos Salome says artificial intelligence is only getting smarter but may be under the control of something inhuman with unknown motives.

 Will humanity be able to keep AI in check? (illustrative) (photo credit: PEXELS)

Brazilian psychic Athos Salome warned that artificial intelligence is only getting smarter and eventually, humanity will lose control of its own creation, the Daily Star reported.

In an interview with the UK news outlet, Salome, who has been dubbed the "living Nostradamus" for his prophecies, warned that "if an AI surpasses human capability in all areas, we could lose control over its actions and consequences," and called for talks on how to prevent such outcomes.

Moreover, Salome told the Daily Star that AI has "motivations unknown" and may be "controlled" by something else – something inhuman.

AI regulation: Will artificial intelligence put humanity at risk?

AI technology has advanced in huge leaps in recent years and has been adopted by industries across the global economy.

These include arguably the best-known AI tools: generative AIs such as the ChatGPT chatbot and the DALL-E image generator.

Will AI be capable of overpowering humanity? (credit: Wikimedia Commons)

However, some have taken note of the possible dangers posed by AI. 

According to a survey from Stanford University's Institute for Human-Centered AI, a third of researchers surveyed said AI decision-making could lead to a nuclear-level catastrophe.

It further noted that someone could build an AI that leads to disaster for humanity – something already attempted with the AI chatbot ChaosGPT, which was given the goals of destroying humanity, taking over the world and becoming immortal.

In addition, despite safeguards put in place to prevent AI from doing anything harmful, such as creating computer viruses or spreading false information, people have still found workarounds.

One example noted in Stanford's report is that of researcher Matt Korda, who tricked ChatGPT into giving fairly precise estimates, recommendations and instructions for building a dirty bomb.

However, world leaders appear to be taking the topic seriously, with the G7 set to meet to discuss the problems posed by AI. The European Union is also moving toward enacting the world's first major AI legislation, prompting other governments to consider what rules should apply to AI tools.

Aaron Reich contributed to this report.