Deception operations are the ultimate mind games of war. Tricking enemy commanders into expecting an attack in the wrong place, or encouraging them to underestimate your strength, can be far more powerful than tanks or bombs.
But what happens when the enemy is augmented by a thinking computer?
Successful deception operations must now fool not only human commanders but the AI that advises them, according to two US Army officers. And Russia and China – with their rigid, centralized command and control – may be particularly vulnerable if their AI is deceived.
“Commanders can no longer rely on traditional deception methods such as concealing troop movements or equipment,” wrote Mark Askew and Antonio Salinas in an essay for the Modern War Institute at West Point. “Instead, shaping perceptions in sensor-rich environments requires a shift in thinking – from concealing information to manipulating how the enemy, including AI systems and tools, interprets it.”
Historically, commanders have gone to great lengths to deceive enemy generals through misdirection, decoy armies, and leaked false war plans. Today, nations will have to focus on “feeding adversaries precisely the deceptive data that can manipulate their interpretation of information and misdirect their activity,” the essay said.
The idea is to turn AI into the Achilles' heel of an enemy commander and their staff. This can be done by “making their AI systems ineffective and breaking their confidence in those systems and tools,” the essay suggests. “Commanders can overwhelm AI systems with false signals and present them with unexpected or novel data; AI tools excel at pattern recognition, but struggle to understand how new variables (outside their training data) inform or change the context of a situation.”
For example, “slight changes in the appearance of a drone could cause AI to misidentify it,” Askew and Salinas told Business Insider. “People are not likely to be thrown off by small or subtle adjustments, but AI is.”
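The vulnerability the officers describe is well known in machine-learning research as the "adversarial example" problem. As a rough illustration only – real image recognizers are deep neural networks, and the toy linear classifier and weights below are entirely hypothetical – a tiny, human-imperceptible nudge to every input feature can flip a model's decision:

```python
import numpy as np

# Toy stand-in for an image classifier: a linear model over 100 features.
# score > 0 -> "drone", otherwise -> "not a drone".
# (Weights and input values are illustrative, not from any real system.)
w = np.where(np.arange(100) % 2 == 0, 1.0, -1.0)  # fixed weights, mixed signs
x = 0.01 * w                                      # an input the model calls "drone"

def classify(features: np.ndarray) -> str:
    return "drone" if float(w @ features) > 0 else "not a drone"

assert classify(x) == "drone"  # w @ x = 0.01 * 100 = 1.0

# Adversarial nudge in the spirit of the fast gradient sign method (FGSM):
# shift every feature by a tiny epsilon against the model's weights.
epsilon = 0.02
x_adv = x - epsilon * np.sign(w)

print(classify(x_adv))                    # the same "object" is now misread
print(float(np.max(np.abs(x_adv - x))))  # each feature moved by only 0.02
```

The point of the sketch is the asymmetry the authors highlight: every feature changed by just 0.02 – invisible to a human looking at the raw numbers – yet the model's verdict flipped completely.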
To determine enemy intentions or to target weapons, modern armies rely on vast amounts of data from sources ranging from drones and satellites to infantry patrols and intercepted radio signals. The information is so abundant that it overwhelms human analysts.
The US Army's 38th Infantry Division set up this command post for an exercise in 2023. Master Sgt. Jeff Lowry/US Army
What makes AI so attractive is its speed at analyzing huge amounts of data. That has been a boon for companies such as Scale AI, which have won lucrative Pentagon contracts.
However, AI's power also amplifies the damage it can do. “AI can coordinate and implement erroneous responses much faster than humans alone,” said Askew and Salinas.
Deceiving AI can lead to “misallocation of enemy resources, delayed responses, or even friendly-fire incidents if AI misidentifies targets,” the authors told Business Insider. “By feeding it false data, one can manipulate the enemy's perception of the battlefield, creating opportunities for surprise.”
Russia and China are already devoting significant effort to military AI. Russia uses artificial intelligence in drones and cyber warfare, while the Chinese military uses the DeepSeek system for planning and logistics.
But the rigidity of Russian and Chinese command structures makes any reliance on AI an opening. “In such systems, decisions often rely heavily on the top-down flow of information, and if the AI at the top is fed misleading data, it can cause widespread errors of judgment,” the authors said. “In addition, centralized structures may lack the flexibility to adapt or quickly reinterpret information, making them more vulnerable to deception if they cannot protect their systems.”
In other words, false imagery can be fed to an enemy's sensors, such as video cameras, to trick its AI into jumping to the wrong conclusion – further blinding the human commander.
Naturally, China and Russia – and other adversaries such as Iran and North Korea – will seek to exploit the weaknesses of American AI. So the US military must take precautions of its own, such as protecting the data that feeds its AI.
Be that as it may, the constant presence of drones over Ukraine shows that the sweeping maneuvers and surprise attacks of Napoleon or Rommel are becoming relics of the past. But as the MWI essay points out, surveillance can determine enemy strength, not enemy intent.
“This means that deception must focus on shaping what the adversary thinks is happening rather than avoiding detection entirely,” the essay said. “By creating a credible deception narrative – through signals, false headquarters, and misleading logistics – commanders can lead enemy AI and human decision-makers into making ineffective decisions.”
Like any con, military deception is most effective when it reinforces what the enemy already believes. The essay points to the Battle of Cannae in 216 BCE, when a Roman army was nearly wiped out by Carthage. Intelligence was not the problem: the Romans could see the Carthaginian forces arrayed for battle. But Hannibal, Carthage's legendary commander, deceived the Roman commanders into believing that the center of the Carthaginian line was weak. When the Romans attacked the center, the Carthaginian cavalry struck their flanks in a pincer maneuver that surrounded and decimated the legions.
Two millennia later, the Allies used elaborate deception operations to mislead the Germans about where the D-Day invasion would take place. Hitler and his generals believed the amphibious assault would come in the Calais region, closest to Allied ports and airbases, rather than in more distant Normandy. Phantom armies in Britain, complete with dummy tanks and planes, did more than convince the Germans that Calais was the real target: the German high command believed the Normandy landings were a feint, and so kept strong garrisons at Calais to repel an invasion that never came.
Drones and satellites have improved battlefield intelligence to an extent Hannibal could never have imagined. AI can sift through vast amounts of sensor data. But the fog of war remains. “AI will not eliminate the chaos of war, deception, and uncertainty – it will only reshape how these factors manifest,” the essay concluded. “Although intelligence, surveillance, and reconnaissance systems can provide episodic clarity, they will never offer a perfect, real-time understanding of intent.”
Michael Peck is a defense writer whose work has appeared in Forbes, Defense News, Foreign Policy Magazine, and other publications. He holds a master's degree in political science from Rutgers University. Follow him on Twitter and LinkedIn.
Business Insider