Today’s technology is ready for the modern battlefield: AI-enabled sensors, autonomous drones, precision navigation, and software that can fuse intelligence in seconds. Yet defense programs still too often fail to deliver the right capability on time and within budget. The reason is usually not the technology itself but something else entirely – a systems failure rooted in integration breakdowns, mismatched interfaces, and organizations built in silos.

In other words: The lab demos usually work. The fielded system often doesn’t.

Israel’s security environment rewards speed, adaptation, and resilience – but complex programs rarely fail because an algorithm is a few percentage points less accurate than promised.

They fail because the radar “speaks” a different language than the command-and-control node; because one subcontractor’s data model cannot be consumed by another’s; because requirements shift without traceable impact analysis; because teams optimize their own subsystem instead of the mission thread – and sometimes because the operational user was never truly in the loop.

These are only a few of the familiar moments in a standard defense project. It is an orchestra of stakeholders, companies, mission requirements, and procurement standards – and without firm direction, it produces cacophony, not capability.

Israel Ministry of Defense (IMOD) Director General, Maj. Gen. (Res.) Amir Baram at the Israeli pavilion at the Singapore airshow, February 2026 (credit: IMoD)

The American way, the Israeli way

The United States has built some of the most complex defense systems in history, and it typically does so through a deliberately gated acquisition machine: formal milestones, decision points, and documentation requirements designed to control risk and lock in performance before a program advances. The framework is meant to produce resilient capability at scale – repeatable, auditable, and survivable under scrutiny – but the price is often time and money, especially when requirements and stakeholders multiply.

Israel’s defense ecosystem, by contrast, is optimized for speed under pressure. It has produced landmark systems of its own: the Merkava, Iron Dome, and Arrow 3, now being procured by Germany in a deal widely reported in the roughly $3.5 to $4.6 billion range, depending on currency and scope. 

Israel’s model is more agile, more tolerant of iteration, and less afraid to bend convention in order to deliver quickly. But that same flexibility can become a liability. Without strong systems leadership – clear interfaces, disciplined integration, and continuous involvement of the operational user – agility turns into fragmentation, and fragmentation turns into field failure.

When ‘one good system’ becomes a program killer

The first failure in many defense programs happens before a single line of code is written: the requirements are unclear. Too often, the customer expresses a need in natural language – “We need one good system” – without defining what “good” means in measurable, testable terms. That can look like a contractor’s dream, because a user with no explicit demands might sign off on almost anything.

In practice, it is the worst possible starting point.

It forces the defense team to run two hard processes in parallel, early and fast: it must both uncover the real operational need and translate it into precise requirements, down to the last interface, performance threshold, and acceptance test. This work is slow, and it is frequently underplanned – treated as administrative overhead instead of essential engineering. The schedule moves on anyway, and systems engineers are pushed into guesswork: vague requirements, partial definitions, or “to be decided” (TBD) placeholders. The program then bets – quietly – that there will be time later to resolve the TBDs and that the architecture will absorb new requirements without disruption.

There usually isn’t, and it usually won’t.

Schedules are agreements, not spreadsheets

Then the contract is signed, the project plan is approved, reviews and deliveries are mapped out – and for a moment, everyone is satisfied. The trouble begins when the first real problem appears, not because the problem is catastrophic, but because the program was never built to absorb it.

Schedule “protection” often becomes a quiet internal game. An engineer adds a 10 percent cushion, the team lead adds another 10 percent, the project manager adds more “just in case” – and then senior leadership cuts the overall timeline aggressively on the assumption that everyone has padded their estimates. The end result is a schedule that is shorter than the honest baseline – with the risk still present, only now hidden, unmanaged, and guaranteed to explode during integration.
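The arithmetic of this game is easy to run. A minimal sketch, with all figures assumed purely for illustration (a 100-day honest baseline, two 10 percent cushions, a 15 percent managerial pad, and a 30 percent top-down haircut), shows how the committed schedule can land below the honest estimate while every buffer disappears from view:

```python
# Toy illustration only – all numbers are assumed, not taken from any
# real program. It models the "quiet internal game" of stacked cushions
# followed by an aggressive top-down cut.

honest_estimate = 100.0           # days: the honest engineering baseline

engineer = honest_estimate * 1.10  # engineer adds a 10% cushion
team_lead = engineer * 1.10        # team lead adds another 10%
pm = team_lead * 1.15              # project manager pads "just in case"

committed = pm * 0.70              # leadership cuts ~30%, assuming padding

print(f"honest baseline : {honest_estimate:6.1f} days")
print(f"padded plan     : {pm:6.1f} days")   # ~139.2 days
print(f"after haircut   : {committed:6.1f} days")  # ~97.4 days
# The committed schedule is now shorter than the honest baseline,
# yet no visible, governed contingency remains anywhere in the plan.
```

The alternative the next paragraph describes would keep the same reserve explicit instead: estimate tasks at the honest baseline and hold one visible, project-level contingency that is drawn down against logged risks, rather than buried in every task and erased in the cut.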

This is avoidable. Better practice is to keep task estimates realistic and manage contingency explicitly at the project level, using established scheduling disciplines and risk analysis so buffers are visible, governed, and monitored – rather than buried inside every task and then erased in a top-down haircut.

‘Success-oriented’ isn’t the same as successful

In Israeli program culture, optimism is a feature – and sometimes a flaw. “No need to plan; we’ll manage.” The question is not whether that instinct works, but what it costs when a program meets reality. No program can, or should, try to predict every scenario; attempting to map every contingency can paralyze execution.

The problem is when the pendulum swings too far the other way: when few scenarios are seriously stress-tested, and when contingency plans lack the reserves of time and budget needed to be effective. Then a small, manageable issue triggers expensive second-order effects: rework, schedule shocks, late integration surprises, and emergency procurement.

Risk management has a built-in public-relations paradox: When it works, nothing happens. No crisis, no dramatic recovery, no headline – just quiet prevention. The result is predictable: Leaders conclude that the process is nonessential, budgets get squeezed, and the “insurance” is removed precisely because it did its job.

Worse, the work often lands on the most overloaded role in the program, the systems engineer.

When risk mitigation is treated as a side task – assigned without authority, without time, and without dedicated resources – it becomes performative. On paper, risks are “managed.” In reality, they are simply documented and deferred, until they resurface during integration – late, expensive, and politically painful.

Optimism, disciplined

To end on an optimistic note – one that fits the Israeli instinct to solve problems rather than admire them – the changes required are a matter of will, not scale. 

Programs don’t need more paperwork; they need more deliberate pauses across the lifecycle to plan and prepare for both the foreseeable and the unexpected. Done consistently, that discipline produces operationally meaningful contingency plans – mitigations that can be activated immediately when reality deviates from the plan.

But it also requires a cultural shift. A company must convince its people that “value” is not perfect engineering in isolation; it is the customer’s mission success. That means delivering the right capability – on schedule, aligned with the user’s intent, and within budget.


The author is a system engineering consultant in the defense industry who carries out research in the application of AI in forensics with MAFAT. He is also a lecturer and researcher at Afeka College of Engineering and head of the product management speciality at Shenkar College of Engineering, Design, and Art.