There is huge promise in AI to transform spacecraft design.
From near real-time performance calculations that enable broader design-space exploration, to generative algorithms that take mission requirements as inputs and produce high-performing variants that would not previously have been considered, the opportunities with AI are vast.
But this is still relatively untested territory, and engineering teams are seeking further evidence of its reliability and viability prior to bringing it into production.
Much of that hesitation comes down to one thing: AI initiatives in design and simulation are stuck. Not because the algorithms don’t work. They do. The real problem is data.
If you’re an aerospace, space, or defense manufacturer, you might already have terabytes of CAD files, simulation outputs and test results. But is that data clean? Relevant? Structured? If not, AI tools have nothing to learn from and nowhere to go.
That’s why, for many engineering leaders, AI still feels more like a buzzword than a breakthrough. To unlock its value, we need to put engineers back in control, with engineer-in-the-loop AI that builds on structured data, traceable logic and domain expertise.
The aerospace industry has never been short on complexity. Simulating spacecraft performance, such as orbital dynamics, thermal shielding, and structural loads, requires precise physics and high-fidelity modeling. Today’s AI algorithms are capable, but they lack fuel: high-quality, usable, simulation-ready data.
Engineering data is different from business data. It’s rarely structured. CAD geometry based on B-reps or NURBS isn’t AI-friendly. Simulation results are often buried in siloed folders, tied to specific solvers, and generated through brittle workflows that break when you try to scale them.
Most aerospace organizations don’t have a data lake; what they have is closer to a swamp. And AI can’t learn from a swamp.
Spacecraft systems are highly integrated. A change in mass distribution, thermal dissipation, or aerodynamics can ripple across the entire design. For instance, improving thermal performance might require more surface area, adding drag and altering structural load paths. These interdependencies slow iteration. Multiphysics simulations take days to converge and must be rerun with each change. Training a machine learning (ML) model on thousands of runs is nearly impossible without major automation and computational power.
Even when automation is in place, workflows often break down. A slight CAD issue causes a meshing failure. A solver crash ends a run midstream. Without robust pipelines, even well-designed experiments produce fragmented data.
And without a governed workflow (versioned, reviewable and physics-aware) no amount of ML can deliver results engineers can actually use.
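To make that concrete, here is a minimal sketch of a governed design-of-experiments pipeline in Python. Everything in it is illustrative rather than any particular vendor’s tooling: `run_case` is a hypothetical stand-in for your mesher and solver, and the versioning is a simple content hash of the inputs. The point is that every run, including failures, lands in the dataset with traceable provenance instead of vanishing midstream.

```python
import hashlib
import json
from pathlib import Path

def run_case(params: dict) -> dict:
    """Hypothetical stand-in for a real mesh + solve step.

    A production version would call your mesher and solver here, and may
    raise on the meshing failures or solver crashes described above.
    """
    return {"pressure_drop": 1.0 / params["channel_width"]}

def run_doe(samples: list, out_dir: str = "doe_results") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for params in samples:
        # Version each run by hashing its inputs, so every result is
        # traceable back to the exact parameters that produced it.
        run_id = hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest()[:12]
        record = {"id": run_id, "params": params}
        try:
            record["outputs"] = run_case(params)
            record["status"] = "ok"
        except Exception as exc:  # meshing failure, solver crash, bad geometry
            record["status"] = "failed"
            record["error"] = str(exc)
        # Failures are recorded, not silently dropped, so the dataset stays whole.
        (out / f"{run_id}.json").write_text(json.dumps(record, indent=2))

run_doe([{"channel_width": w / 10} for w in range(1, 6)])
```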
So the result isn’t a lack of AI capability; it’s a lack of usable inputs.
Despite these challenges, forward-leaning engineering teams are finding ways to apply ML in meaningful ways, particularly in areas with repeatable simulations and high return on speed.
One global aerospace company that we’ve worked with set out to optimize the internal geometry of a heat exchanger using AI. First, they parameterized the geometry to make it easily adjustable. Then, they automated the design-of-experiments process, running 400+ high-fidelity simulations in under eight hours. That clean, structured dataset became the training ground for a surrogate model that could predict full velocity and pressure fields, not just summary metrics. With inference times measured in milliseconds, the team wrapped an optimizer around the model to run inverse design studies in real time. What used to take weeks happened in minutes, and the quality and consistency of the underlying data made it possible.
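To illustrate the pattern (not the company’s actual stack), here is a minimal surrogate-training sketch in Python, using scikit-learn and synthetic arrays in place of the 400 high-fidelity runs. The model maps geometry parameters to a full field vector rather than a single summary metric; the names, shapes, and numbers are assumptions for demonstration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins for the real dataset: X holds one row of geometry
# parameters per DOE run, Y holds the matching field snapshot (here, a
# 200-point pressure profile) rather than a single summary metric.
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(400, 3))                 # 400 runs, 3 parameters
Y = np.stack([np.sin(x.sum()) * np.linspace(0.0, 1.0, 200) for x in X])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, Y)

# Inference takes milliseconds, versus hours for the high-fidelity solve,
# which is what makes wrapping an optimizer around the model practical.
new_design = np.array([[1.2, 0.8, 1.5]])
predicted_field = surrogate.predict(new_design)          # full field prediction
```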
In a recent workshop, engineers trained a surrogate model to predict aerodynamic performance from parameters like wing sweep and fuselage length. Once trained, it powered an inverse design loop: enter a goal, like maximizing payload for a 1,200-mile range, and generate viable airframes in seconds. Thousands of optimizer-driven iterations, guided by high-fidelity data, converged on mission-ready designs. With quality data and flexible geometry, engineers shifted from evaluating designs to generating them.
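The inverse design loop can be sketched the same way: wrap a global optimizer around the trained surrogate and search the parameter space against the mission goal. The response functions below are toy stand-ins for trained surrogates, and the penalty formulation is just one common way to fold the 1,200-mile range requirement into the objective.

```python
import numpy as np
from scipy.optimize import differential_evolution

def predicted_payload(params):
    """Toy stand-in for a trained surrogate: (sweep, fuselage length) -> payload."""
    sweep, length = params
    return 1000.0 - (sweep - 25.0) ** 2 - (length - 30.0) ** 2

def predicted_range(params):
    """Toy stand-in for a second surrogate output: achievable range in miles."""
    sweep, _ = params
    return 1100.0 + 4.0 * sweep

def objective(params):
    # Maximize payload while penalizing designs that miss the 1,200-mile
    # range target; the optimizer minimizes, hence the negated payload.
    shortfall = max(0.0, 1200.0 - predicted_range(params))
    return -predicted_payload(params) + 10.0 * shortfall

# Each evaluation is a surrogate call, so thousands of optimizer-driven
# iterations finish in seconds rather than weeks of solver time.
result = differential_evolution(objective, bounds=[(15.0, 45.0), (20.0, 40.0)], seed=0)
best_sweep, best_length = result.x
```

Because each surrogate evaluation is cheap, the optimizer can afford thousands of iterations, which is what turns “evaluate this design” into “generate a design that meets this goal.”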
These results weren’t possible with black-box AI. They were enabled by structured modeling, simulation-aware geometry, and traceable logic. Exactly the ingredients required for certifiable, engineer-in-the-loop design.
Not every engineering problem needs ML, but some problems are ideal for it. The key is knowing the difference. Ask yourself three questions:
1. Does your problem have a strong physics foundation?
ML builds on physics; it doesn’t replace it. If your simulation tools already solve the physics well (fluid flow, heat transfer, structural analysis), that’s a strong foundation. These problems generate structured data that’s great for ML.
2. Is simulation speed a bottleneck?
ML excels when it replaces something slow. If you’re spending hours or days running high-fidelity simulations, a trained surrogate model can deliver near-instant predictions. But if your models are already fast and efficient, ML may not offer a meaningful advantage.
3. Do you have the right data, or a way to create it?
No data, no ML. Even the most advanced models fail without structured, reliable datasets. If your workflow already produces clean, reusable simulation results, you’re in a great spot. If not, you’ll need a way to generate that data at scale.
It’s also worth considering how usable your model outputs are. Some AI systems may deliver design geometries that look impressive at first glance, but offer no clear way to inspect, refine or even manufacture them.
Before you adopt any AI-driven design tool, ask yourself: “Will I be able to trace this back to my original physics model? Can I make changes? Can I trust it in production?” If the answer is no, the long-term value of that tool is limited.
And if the data behind those outputs isn’t versioned, traceable and tied to physics-grounded models, trust and certification will be out of reach.
If you’ve answered yes to the questions above, here’s how to get started using AI.
The goal isn’t to automate design decisions; it’s to empower engineers. The real power of ML is unlocked when engineers can work with the results — refining inputs, adjusting constraints and understanding how the model behaves. If a tool can’t deliver editable geometry that fits seamlessly into your design process, it will quickly become a dead end.
AI in spacecraft design isn’t stuck; it’s scaling up. The organizations seeing real results treat simulation data as capital, not as disposable output. They’re investing in robust pipelines, clear targets and scalable datasets, because without the right data, even the best AI is just guessing.
Engineering judgment still leads. But with the right AI tools, engineers can explore more, iterate faster and make better-informed decisions. In space, where every iteration costs time, mass and money, speed alone isn’t enough. Defensible speed with traceable logic, simulation-ready geometry, and certifiable outputs is what sets real engineering workflows apart from AI hype.
Todd McDevitt is Director of Product Management at nTop, where he helps companies solve complex design and manufacturing challenges using implicit modeling and automation. With over 20 years in engineering simulation software, including leadership roles at Ansys and MSC Software, he’s led cloud transformations, scaled product strategies and built high-impact teams across product, marketing and operations.