“Anything that can go wrong, will go wrong” — Murphy’s Law
[Editor’s Note: The popular adage, above, is attributed to Capt. Edward Murphy, USAF — a Research and Development (R&D) officer at the Wright Air Development Center, Wright-Patterson Air Force Base, Ohio.
Today’s guest blogger Dr. Anna M. Gielas explores the three main reasons why systems fail, and proposes how we can institutionalize resiliency to mitigate “Murphy.” Dr. Gielas’ timely post provides us with a detailed prescription (by echelon) for developing tech failure-resilient Soldiers and Leaders — prepared to fight and win decisively in an increasingly complex Operational Environment… where “Murphy” lurks around every corner — Read on!]
It is 2035. A platoon moves through the ruins of a megacity, augmented-reality visors glowing in the dust and smoke. Their drone swarm hovers
overhead, piping live video into every Soldier’s display. Then the battlefield begins to lie. On the visor, a blue icon shows a friendly squad one block over. But no squad is there. The swarm’s feed flickers; a convoy of enemy trucks materializes and vanishes like a ghost. The platoon leader hesitates: what is spoofed, what is broken, and what is real?
U.S. forces increasingly rely on advanced technologies, ranging from Artificial Intelligence (AI)-assisted target recognition to wearable biometric monitors and augmented-reality (AR) overlays. Such cutting-edge tools generally fail for three main reasons: adversary attacks, technical malfunctions, and human error. To prepare military personnel for such challenges, commanders must ensure regular training that confronts Soldiers with these failure scenarios—so that adaptation and improvisation become second nature. Continuous, threat-realistic training equips forces with the resilience needed for the emerging battlefield.
To harden personnel against adversary-driven tech failures, training should move into immersive, high-stakes simulation: call it a “deception lab.” Imagine a squad or platoon stepping into a shoot house where nothing physical has changed, but their gear is lying to them. The GPS suggests they have drifted 200 meters off course. The blue force tracker shows a teammate where no teammate exists. A drone feed displays a truck that is not really there. In a deception lab, trainers inject manipulated data (spoofed coordinates, altered imagery, counterfeit radiofrequency emissions) straight into the training gear the team uses. The goal is to teach operators to both notice and work around these tech failures. In the best case, forces hone a healthy skepticism toward “clean” data feeds and a strong sense of when to pause, verify, and adapt. As advanced technologies weave deeper into military operations, so too should deception labs be woven into training pipelines.
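To make the mechanics concrete, the injection step can be sketched in a few lines of code. The sketch below is illustrative only: the function names, data shapes, and flat-earth drift math are assumptions for this example, not a description of any real trainer software. It drifts a GPS fix roughly 200 meters and plants a counterfeit blue-force track in a feed:

```python
import math

def spoof_gps(fix, bearing_deg, offset_m):
    """Shift a (lat, lon) fix by offset_m along bearing_deg.

    Crude flat-earth approximation -- adequate for injecting a
    plausible 200 m drift into a training feed.
    """
    lat, lon = fix
    m_per_deg_lat = 111_320.0                              # meters per degree latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(lat))
    b = math.radians(bearing_deg)
    dlat = offset_m * math.cos(b) / m_per_deg_lat
    dlon = offset_m * math.sin(b) / m_per_deg_lon
    return (lat + dlat, lon + dlon)

def inject_phantom(tracks, phantom):
    """Append a counterfeit blue-force track to a feed."""
    return tracks + [phantom]

true_fix = (38.8977, -77.0365)
drifted = spoof_gps(true_fix, bearing_deg=90, offset_m=200)   # 200 m due east
feed = inject_phantom([{"id": "BF-1", "fix": true_fix}],
                      {"id": "BF-GHOST", "fix": drifted})
```

In a live deception lab, an injector like this would sit between sensor and display; the point here is simply how little manipulation it takes to make a feed lie convincingly.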
For training against tech-malfunction failure, create a no-support training scenario in which all devices go dark. Teams must fall back on traditional land
navigation methods such as dead reckoning, terrain association, and celestial fixes, all while staying out of sight and on mission. But losing tech mid-mission requires more than a switch to “old school” skills—units have to understand and anticipate potential second-order effects of the device blackout. For example, they need to be aware that adversaries may still be able to locate or track them via the malfunctioning tech. This training should therefore go beyond building and maintaining a portfolio of traditional skills. It must foster analog competence: the ability to operate and succeed when signals, circuits, and satellites not only fail but also become a liability.
To reduce the risk of failures driven by technology malfunctions, forces should train in two-source cross-checking. For example, if an AI system flags a convoy
as hostile using electro-optical imagery in low light, the unit validates the assessment with thermal imaging, rather than relying on a single feed. Likewise, if an augmented-reality overlay identifies an obstacle in a GPS-denied zone, operators confirm it with a laser range finder or through visual terrain matching. Two-source discipline serves as mission insurance, making it less likely that one faulty sensor, algorithm, or data stream will cascade into a tactical failure.
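The two-source rule is essentially a small piece of decision logic: act only when independent feeds agree; otherwise pause and verify. The Python sketch below is a minimal illustration with made-up sensor readings and function names, not an implementation of any fielded system:

```python
def cross_check(primary, secondary, agree, on_conflict="hold"):
    """Accept an assessment only when two independent sources agree.

    primary/secondary: readings from independent sensors (e.g., an
    electro-optical classifier and a thermal imager).
    agree: predicate deciding whether the readings match.
    Returns the confirmed reading, or on_conflict when they diverge.
    """
    if agree(primary, secondary):
        return primary
    return on_conflict

# Hypothetical readings: EO flags a convoy as hostile, thermal cannot confirm.
eo = {"track": "convoy-7", "label": "hostile"}
ir = {"track": "convoy-7", "label": "unknown"}
same_call = lambda a, b: a["track"] == b["track"] and a["label"] == b["label"]

decision = cross_check(eo, ir, same_call)   # sources disagree -> "hold"
```

The design choice worth noting is that disagreement never silently defaults to the primary feed; the conflict itself is surfaced as the output, forcing a human decision.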
Human performance experts note that rapid advances in fields such as AI, information technology, and robotics will have a revolutionary impact on the
battlefield. As they observe, “The disruption associated with these technologies will most acutely be experienced by the human combatant at the tactical level, with increasing cognitive demands associated with the employment and use of new capabilities.” Research fields such as Human Factors and approaches like Adaptive Automation help design military technologies that feel intuitive and reduce the cognitive load on Soldiers. Nevertheless, training remains a critical safeguard against human-error-driven failure.
Such training should place companies in the middle of messy, unpredictable edge cases. Borrowing from the tech sector’s fault-injection testing, “military chaos engineering” can entail deliberately breaking systems and devices
during rehearsal. Trainers can simulate GPS dropouts and introduce clock drift so that systems fall out of sync, or overload networks with excessive traffic so that units must work through degraded connectivity. To meaningfully reduce human-error-driven failures, these technical disruptions should be combined with conditions that strain human performance, such as fatigue and exhaustion. This coupling ensures that errors emerge under realistic pressures, exposing cognitive and behavioral vulnerabilities and enabling Soldiers and leaders to identify common mistakes, correct them, and practice effective responses. Ultimately, these rehearsals build resilient habits that prevent small mistakes from cascading into mission failure.
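A minimal fault-injection pass over a message stream shows the flavor of the tech-sector technique being borrowed. Everything below (the message shape, the rates, the function name) is a hypothetical sketch, not military software: it randomly drops messages (simulated link or GPS dropout) and perturbs timestamps (simulated clock drift):

```python
import random

def inject_faults(messages, drop_rate, max_drift_s, rng):
    """Fault-injection pass over a stream of timestamped messages.

    Randomly drops messages (dropout) and perturbs timestamps
    (clock drift) so that consumers must tolerate gaps and
    out-of-sync clocks.
    """
    out = []
    for msg in messages:
        if rng.random() < drop_rate:
            continue                      # dropout: message never arrives
        drift = rng.uniform(-max_drift_s, max_drift_s)
        out.append({**msg, "t": msg["t"] + drift})
    return out

rng = random.Random(42)                   # seeded so a drill is repeatable
stream = [{"seq": i, "t": float(i)} for i in range(100)]
degraded = inject_faults(stream, drop_rate=0.2, max_drift_s=1.5, rng=rng)
```

Seeding the generator is the key practical detail: the same "chaos" can be replayed across units, so after-action reviews compare responses to an identical failure, not to different random luck.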
Additionally, “chaos engineering” drills can be paired with timed exercises that force operators to drop a failed system and bring a backup online. The imposed time pressure heightens stress and surfaces recurring patterns of
human error. For example, personnel could practice shifting from a primary drone feed to a handheld thermal imager, or moving from a main communications network to a low-bandwidth backup. The purpose is to ensure that system transitions are not only technically feasible but also rapid and seamless under stress. Through repetition, these high-pressure switches become automatic and resilient to error.
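Conceptually, the drill rehearses a priority-ordered failover: walk the list of systems from primary to last-resort backup and take the first one that is alive. A toy sketch, with entirely hypothetical feed names:

```python
def acquire_feed(sources, is_alive):
    """Return the first live feed from a priority-ordered list.

    sources: feeds in priority order (primary first, backups after).
    is_alive: health probe for a feed. Models the drill of dropping
    a failed system and bringing a backup online.
    """
    for name in sources:
        if is_alive(name):
            return name
    return None                           # every system down: go analog

# Primary drone feed has failed; the handheld thermal is up.
sources = ["drone_eo", "handheld_thermal", "map_and_compass"]
health = {"drone_eo": False, "handheld_thermal": True}
selected = acquire_feed(sources, lambda s: health.get(s, True))
```

The last entry in the ordered list is deliberately the analog fallback, mirroring the article's point that the bottom of every failover chain should be a skill, not a device.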
The training approaches scale differently across Army echelons. At the squad and platoon level, deception labs and no-support drills are most effective. These exercises expose Soldiers to spoofed feeds, GPS drift, and sudden system failures, forcing them to rely on analog competence and teamwork. At the company level, chaos-engineering scenarios and two-source cross-checking become essential, as leaders coordinate multiple platoons and assets under degraded conditions, testing their ability to synchronize backup systems and sustain momentum. At the battalion level, training shifts toward staff resilience: validating information flows, detecting manipulated Intelligence, Surveillance, and Reconnaissance (ISR) inputs, and maintaining command post functionality under electronic or cyber attack. For brigade and higher headquarters, the same principles apply in wargames and simulations, injecting adversarial AI deception and contested-spectrum challenges into operational planning. Taken together, these echelon-tailored scenarios help ensure the Army builds resilience from the individual Soldier to the institutional level.
Across all Army echelons, military personnel must cultivate a disciplined skepticism toward cutting-edge technologies. This mindset allows them to adapt to manipulated or failed systems—and to rely on foundational skills like
land navigation, observation, and critical thinking when advanced technological tools become a liability. Those responsible for training should keep one truth in mind: the next decisive battle will not be won by whoever has the most advanced AI or the newest device, but by whoever has the best-trained Soldiers to operate with and without them.
If you enjoyed this post, check out the T2COM G-2’s Operational Environment Enterprise web page, brimming with authoritative information on the Operational Environment and how our adversaries fight, including:
Our T2COM OE Threat Assessment 1.0, The Operational Environment 2024-2034: Large-Scale Combat Operations
Our China Landing Zone, full of information regarding our pacing challenge, including ATP 7-100.3, Chinese Tactics, T2COM OE Threat Assessment 1-1, How China Fights in Large-Scale Combat Operations, 10 Things You Didn’t Know About the PLA, and BiteSize China weekly topics.
Our Russia Landing Zone, including T2COM OE Threat Assessment 1-2, How Russia Fights in Large-Scale Combat Operations and the BiteSize Russia weekly topics. If you have a CAC, you’ll be especially interested in reviewing our weekly RUS-UKR Conflict Running Estimates and associated Narratives, capturing what we learned about the contemporary Russian way of war in Ukraine in 2022 and 2023 and the ramifications for U.S. Army modernization across DOTMLPF-P.
Our Iran Landing Zone, including the Iran Quick Reference Guide and the Iran Passive Defense Manual (both require a CAC to access).
Our North Korea Landing Zone, including Resources for Studying North Korea, Instruments of Chinese Military Influence in North Korea, and Instruments of Russian Military Influence in North Korea.
Our Irregular Threats Landing Zone, including TC 7-100.3, Irregular Opposing Forces, and ATP 3-37.2, Antiterrorism (requires a CAC to access).
Our Running Estimates SharePoint site (also requires a CAC to access) — documenting what we’re learning about the evolving OE (including Russia’s war in Ukraine since 2024 and other ongoing competitions and conflicts around the globe). Contains our monthly OE Running Estimates, associated Narratives, and the quarterly OE Assessment Intelligence Posts.
Then review the following related Mad Scientist Laboratory content:
… on integrating AI into Warfighting:
China’s Emerging Technologies Highly Likely to Undermine U.S. & Allied Advantage by 2035, by proclaimed Mad Scientist COL Byron N. Cadiz
“Intelligentization” and a Chinese Vision of Future War
China’s PLA Modernization through the DOTMLPF-P Lens, by Dr. Jacob Barton
Hybrid Intelligence: Sustaining Adversary Overmatch and associated podcast, with proclaimed Mad Scientist Dr. Billy Barry and LTC Blair Wilcox
Artificial Intelligence (AI) Trends
Takeaways Learned about the Future of the AI Battlefield
Artificial Intelligence: An Emerging Game-changer
Training Transformed: AI and the Future Soldier, by proclaimed Mad Scientist SGM Kyle J. Kramer
Rise of Artificial Intelligence: Implications to the Fielded Force, by John W. Mabes III
Integrating Artificial Intelligence into Military Operations, by Dr. James Mancillas
“Own the Night” and the associated Modern War Institute podcast, with proclaimed Mad Scientist Bob Work
Bringing AI to the Joint Force and associated podcast, with Jacqueline Tame, Alka Patel, and Dr. Jane Pinelis
… on trust and man-machine teaming:
AI Enhancing EI in War, by MAJ Vincent Dueñas
The Human Targeting Solution: An AI Story, by CW3 Jesse R. Crifasi
An Appropriate Level of Trust…
… on disruptive technologies:
Project Deterrence – Disruptive Technologies
Quantum Conundrum: Multi-domain Threats, Convergent Technology & Hybrid Strategy, by Robert McCreight
>>>Announcement: Mark your calendars now — the Army Mad Scientist / William & Mary Great Power Competition & Conflict in an Age of Authoritarian Collusion Virtual Event, on Tuesday, 27JAN26:
Who: The Army Mad Scientist Initiative and William & Mary’s Whole of Government Center of Excellence
What: A virtual event exploring the Operational Environment implications of emerging trends gleaned from contemporary conflicts and proxy wars, as well as the expanding adversarial influence and presence in the Global South and polar regions, through the lens of authoritarian collusion
When: Tuesday, 27 January 2026
Where: Virtual via Zoom.gov; in-person on campus for local T2COM G-2 and FCC participants
Why: To learn from subject matter experts within academia and the Department of War about the implications of authoritarian collusion, ultimately expanding our understanding of the Operational Environment
Register to attend this informative event virtually at our EventBrite site.
>>>Reminder: Army Mad Scientist is CALLING ALL CREATORS with our Multi-Media Contest for imaginative thinkers who seek to showcase their ideas about Army Transformation in novel, alternative ways. Check out the contest’s guidelines here, consult your inner muse, unleash your creative talent, get cracking developing your entry, and submit it to ArmyMadSci@gmail.com — Deadline for submission is 14 February 2026!
About Today’s Author: Anna M. Gielas holds a PhD in the history of science from the University of St Andrews (United Kingdom). After earning fellowships at Harvard University and, most recently, the University of Cambridge, she is currently working on a monograph on the integration of emerging technologies into the Armed Forces.
Disclaimer: The views expressed in this blog post do not necessarily reflect those of the U.S. Department of Defense, Department of the Army, or the Transformation and Training Command (T2COM).

