Speaking at Tel Aviv Cyberweek, former Central Intelligence Agency chief technology officer Bob Flores explained that the Internet’s creators had failed to implement security protocols early on.
“We can't build artificial intelligence (AI) with the same mistakes with which we created the Internet,” he said on Tuesday. “We're still trying to catch up on the Internet because we didn't put security in there from the get-go.”
On the sidelines of the conference, Flores told The Jerusalem Post: “When creating something new, and especially nowadays, it's important to establish a security framework to prevent its use in a malicious way.” The Internet, especially the Dark Web, he said, remains a “Wild West where we don't really understand what is happening.”
“Instead of building something to innovate and 10 years later trying to modify it to comply with the security requirements,” Flores said, “we should do the work from the beginning and get these security components [incorporated] as a key part of the development.”
New AI threats, new AI defenses
He also outlined several emerging AI-driven threats, including the rapid development of malware using AI tools, the ability of AI agents to infiltrate financial and security institutions, and the constant need to adapt defenses as these systems evolve.
The former CIA officer also highlighted current vulnerabilities in AI systems. Data poisoning can render agents useless, while supply-chain tampering compromises the systems they depend on. Additionally, hardware-level compromises and a lack of basic system hardening create vulnerabilities that must be addressed while these systems are still in development.
At the same time, AI development has produced tools to identify and counter threats. Flores pointed out that a good AI defense must leverage the enhanced security guarantees an AI agent can provide. He emphasized that current models already offer improved trust and identity verification mechanisms for digital systems, which should be developed further.
Flores argued that the introduction of modern validation and governance frameworks into AI systems will be the key to strengthening defense capabilities.
Future AI threats and next steps
“First, even if not developed yet, is quantum computing,” he said, outlining what he considers the key threats to AI systems in the future. “Once it arrives, it will be a game-changer that will need to be addressed.”
To address this, Flores said that AI development must be carried out with security mechanisms in mind, alongside the common standards and frameworks needed for consistent security practices. He also insisted on the need for meticulous work when training an AI model.
“If we don't build security into it from day one, it's going to come back to bite us,” he told the Post, adding that “AI is only as good as the data feeding it, and the math behind it, by the way. So if garbage goes in, garbage gets out. It’s a real thing.”