
AI systems could soon be able to hijack satellites in orbit and cause them to collide with other spacecraft, potentially triggering a dangerous cascade of smash-ups that could render the environment around Earth unsafe for years, according to experts.
Cybersecurity researchers are already using AI to identify so-called zero-day vulnerabilities — as-yet-undiscovered security holes in code — so they can alert operators and help them patch the problems before hackers exploit them. But attackers can take advantage of the same advanced systems to find those holes more quickly.
Speaking exclusively to Space.com, researchers at the CR14 cybersecurity center in Estonia said that advances in AI could make it possible for an AI-led attack to wreak havoc in orbit in as little as two years. The emergence of so-called agentic AI — autonomous systems powered by large language models (LLMs) such as those behind OpenAI’s ChatGPT or Google’s Gemini, which can independently plan actions and execute tasks to achieve set goals — is especially worrying, Kristjan Keskküla, CR14’s Head of Space Cyber Range, told Space.com. “AI is developing quite quickly right now,” Keskküla said. “The real problem now is that AI can act, take decisions, analyze things and come up with new exploits.”
Clémence Poirier, a cybersecurity researcher at ETH Zurich in Switzerland, told Space.com that although no known AI-enabled cyberattack on space systems has taken place so far, state-funded hackers are known to have used LLMs to research space system vulnerabilities in the past.
“In 2024, OpenAI and Microsoft revealed that Russian threat actor Fancy Bear used LLMs to search about satellite communications, radar systems and other space technologies to support information gathering in view of potential attacks,” Poirier said in an email. “AI definitely helps threat actors in the reconnaissance and intelligence gathering phase of an attack. Threat actors can find known vulnerabilities in space systems with LLMs. The time to exploit known vulnerabilities has been immensely reduced because of AI.”
Andrzej Olchawa, a space cybersecurity engineer and researcher at VisionSpace, told Space.com that “LLMs have drastically lowered the barrier to understanding spacecraft operations and communication protocols.”
While developing an understanding of how space systems operate once required extensive study, today LLMs enable “adversaries with no prior knowledge of the space industry to process documentation and open-source software” — and potentially cause real harm.
“Interpreting telemetry and telecommand structures once required extensive study of thousands of technical pages,” Olchawa said. “Today, one can simply instruct an LLM to generate parsers and provide mission-specific context with minimal expertise.”
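Olchawa’s point is easy to see in practice: the kind of parser an LLM can now generate in seconds often amounts to a few lines of bit-field extraction. As a generic illustration — this is a minimal sketch of the publicly documented CCSDS Space Packet primary header, not code tied to any mission or system mentioned in this article — such a parser might look like:

```python
import struct

def parse_ccsds_primary_header(frame: bytes) -> dict:
    """Parse the 6-byte CCSDS Space Packet primary header into named fields."""
    if len(frame) < 6:
        raise ValueError("frame too short for a CCSDS primary header")
    # The header is three big-endian 16-bit words.
    word0, word1, length = struct.unpack(">HHH", frame[:6])
    return {
        "version": (word0 >> 13) & 0x7,
        "type": (word0 >> 12) & 0x1,        # 0 = telemetry, 1 = telecommand
        "sec_hdr_flag": (word0 >> 11) & 0x1,
        "apid": word0 & 0x7FF,              # application process identifier
        "seq_flags": (word1 >> 14) & 0x3,
        "seq_count": word1 & 0x3FFF,
        "data_length": length + 1,          # field stores payload length minus one
    }

# Example: a telemetry packet with APID 0x123, sequence count 42, 16-byte payload
header = struct.pack(">HHH", (1 << 11) | 0x123, (0b11 << 14) | 42, 15)
print(parse_ccsds_primary_header(header))
```

Real telemetry and telecommand formats layer mission-specific structures on top of headers like this one — which is exactly the documentation-heavy work the researchers say LLMs now shortcut.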
Worse, the accelerated AI threat has emerged just as the space sector began to wake up to cybersecurity risks it had ignored for decades. Many older satellites that are still in orbit and operational have no cyber protection in place, said Keskküla, making them low-hanging fruit for a possible attack.
There are many possible ways to attack a spacecraft, including jamming and spoofing the communication links between satellites and ground control, whether from Earth or from space. But the experts are especially worried that hackers could find ways to completely hijack satellites and turn them into orbital anti-satellite weapons.
“They could make them collide with other satellites and cause havoc,” Keskküla said. “In the last about three years, we have sent up 8,000 satellites. It’s a huge number of satellites, and the constellations are growing. You only need to affect one satellite’s actions to cause problems.”
The researchers worry that one such deliberate space crash could create thousands of fragments in the heavily used low Earth orbit — the region of space at altitudes up to 1,200 miles (2,000 kilometers) where most satellites reside — which could make the orbital environment unsafe for years.
CR14 is one of the largest cybersecurity research and training centers in the world and, thanks to Estonia’s proximity to Russia, has been at the forefront of Europe’s cyber defense against escalating Russian attacks for years.
“During our exercises, we simulate these kinds of attacks in a virtual environment using digital twins,” Keskküla said. “We have attackers, and we have defenders, one group trying to penetrate the system and do bad things, the other trying to protect it.”
Martin Hanson, CR14’s head of communication, added that the quantity and sophistication of cyberattacks are bound to keep rising. Ukraine, he said, experiences “thousands of cyberattacks” on critical infrastructure every day, including on power grids, banks and satellite communication systems.
In Europe, he added, the number of phishing attacks has grown by 500% over the past few years, and the sophistication of those attempts to steal sensitive information by means of social engineering is bound to grow thanks to the use of AI.
“AI will make these attacks more targeted,” he said. “They will gather more information about you, and they will try to copy your friends and coworkers. It’s getting more sophisticated.”






