THE GREAT CONVERGENCE—AI & WAR
By Michael and James Hall
On September 26, 1983, an automated Soviet early‑warning system falsely signaled that several US nuclear missiles were inbound. It was a human, Lieutenant Colonel Stanislav Petrov, who judged the alert a false alarm and saved the world that day. Not a machine.
Today, that balance between human judgment and machine speed is shifting faster than at any time in history.
Artificial intelligence is reshaping both peace and conflict at a pace that nations are struggling to match. Across the world, powerful forces are converging. Five pivot points define this moment: Russia’s deepening militarization, drone‑driven warfare in Ukraine, China’s expanding cyber operations, rising instability in the Middle East, and the emergence of the Genesis Project. None of these developments stand alone; each reflects a larger transformation already underway.
Together, they signal the arrival of a new strategic fulcrum—one that balances four shifting models: the fragility of deterrence, the growing dominance of drones and autonomous systems on the battlefield, the manipulation of information through social media, and emerging risks in biological engineering.
That, in a nutshell, is what this article is about. For the first time, machines are beginning to outpace humans in the core tasks of war.
AI systems can identify targets and coordinate actions faster than any human operator. Drones now move, track, and strike with increasing autonomy. Cyber tools can infiltrate critical infrastructure at a scale no human team could hope to match. Even the planning of military operations—once the realm of generals and analysts—is now being simulated and refined by advanced AI models.
This is not a slow evolution. It is a phase shift, as disruptive as the arrival of gunpowder or nuclear weapons, but unfolding at digital speed. The greatest danger is the widening gap between the rapid advance of military AI and the far slower pace of political systems, treaties, and ethical frameworks. That gap remains largely unaddressed, even as nations move deeper into a technological revolution more complex and volatile than anything humanity has ever faced.
________________
Of particular note among the five pivot points, Russia has shifted into a quasi–Cold War footing, with defense spending consuming an ever‑larger share of its GDP. Civilian industries are rapidly retooled for military production, and long‑term mobilization has become part of national life. This is not improvisation but strategy.
Moscow is betting that continued warfare will ultimately resolve its Ukrainian dilemma, wagering that long‑term strategic gains will outweigh the economic pain it is prepared to absorb. Such a strategy demands a fully mobilized society. Russia’s defense sector now operates on a continuous‑production model in which drones, electronic‑warfare suites, and loitering munitions are updated in months rather than years, and dual‑use factories can shift between consumer goods and military components with minimal interruption.
The catalyst was the war that began nearly four years ago, when what many expected to be a swift Russian victory instead hardened into a grinding war of attrition. In that crucible, drones and autonomous systems didn’t merely support the fighting—they reshaped it. Software, sensors, and swarming machines now matter as much as tanks or artillery, and the battlefield has become a living laboratory where cheap, rapidly iterated technologies routinely defeat expensive legacy systems.
What is emerging is not merely a Russian wartime posture but a prototype for twenty‑first‑century militarization—an economy where autonomy, mass production, and rapid software iteration converge into a continuous engine of conflict. Other nations are studying this model closely, not out of admiration but because it demonstrates how quickly a state can pivot into industrialized, AI‑enabled warfare. Ukraine is not the end of something; it is the beginning of a new era.
When a state reorganizes its economy around militarized production, the momentum of that system can begin to dictate political behavior. Germany in the 1930s is one of the clearest historical examples of this dynamic—not because Russia today is ideologically similar, but because both cases show how a militarized industrial base can become self‑propelling. Once a state builds an economy around rapid military manufacturing—especially one focused on drones, advanced weapons systems, and munitions—there is enormous pressure to maintain that tempo. The system begins to reward confrontation, not de‑escalation.
________________
China is the second pivot point in this converging drama. As early as 2021, US intelligence assessments noted that Beijing was openly integrating artificial intelligence into missions ranging from mass surveillance to cyber operations and autonomous weapons. Since then, China’s leadership has articulated a clear ambition: to achieve global technological dominance and embed AI into every layer of national power. This is not merely modernization—it is a strategic doctrine built on the belief that information superiority, automated decision‑making, and cyber pre‑positioning will define the next era of conflict.
That same year, Nicolas Chaillan—the Pentagon’s first chief software officer—resigned in frustration, warning that the United States was falling behind in critical areas such as artificial intelligence and bioengineering. His departure reflected broader anxiety within defense and intelligence circles, where experts cautioned that the US might be underestimating both the speed and scope of China’s cyber and AI capabilities.
These concerns underscore a critical reality: AI is no longer merely a scientific or commercial pursuit. It has become a geopolitical accelerant, reshaping the strategic environment faster than traditional institutions, treaties, or political systems can respond. Understanding AI’s trajectory now requires not only technical insight but also a clear view of the geopolitical forces driving its development.
This reality came into sharper focus in January 2025, when Brig. Gen. Doug Wickert, commander of the 412th Test Wing, delivered a sobering assessment of China’s expanding cyber and military capabilities. He reported that Chinese‑linked malware had been identified across critical US infrastructure—electrical grids, water systems, transportation networks, and even components of the nation’s air traffic control architecture. His remarks aligned with a growing intelligence consensus: these intrusions are not isolated incidents but part of a sustained strategic effort.
Over the past several years, US intelligence agencies have attributed numerous cyber operations to China, often referred to internally as “Typhoons.” Two major campaigns—Salt Typhoon and Volt Typhoon—were uncovered in 2023 and 2024, targeting critical infrastructure and government networks. These operations demonstrated a level of sophistication designed not merely for espionage but for pre‑positioning access that could be activated to disrupt or degrade US systems at scale.
Analysts note that these campaigns align with China’s broader doctrine of preparing the information environment in advance of potential conflict—ensuring that cyber capabilities can be deployed quickly and strategically if geopolitical tensions escalate. Salt Typhoon, for example, penetrated multiple US telecommunications networks, embedding cyber assets designed to remain undetected for long periods. Wickert emphasized that at least a dozen telecommunications companies had acknowledged infections linked to these operations. He also warned that China’s People’s Liberation Army is modernizing at an unprecedented pace, integrating cyber warfare, artificial intelligence, and autonomous systems into a unified military strategy.
The most dramatic warning came in late July 2025, when Microsoft disclosed a major cyberattack targeting several US government agencies, including the National Nuclear Security Administration (NNSA). The intrusion exploited a previously unknown software flaw—a “zero‑day” vulnerability—in on‑premises SharePoint servers. This flaw allowed attackers to bypass authentication entirely, slipping into systems without a password. The technique, nicknamed “ToolShell,” enabled the theft of cryptographic machine keys—digital credentials that allow systems to verify identity and trust internal communications. With those keys, an intruder could impersonate legitimate users, move laterally through connected networks, or access sensitive systems without detection.
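Why does the theft of cryptographic machine keys matter so much? The underlying mechanism is general: a server trusts any token bearing a valid signature computed with its secret key, so whoever holds that key can mint tokens the server cannot distinguish from genuine ones. The following toy sketch, which uses a simple HMAC token scheme rather than the actual ASP.NET/SharePoint key formats, illustrates the principle; all names and the token layout are illustrative assumptions.

```python
import hashlib
import hmac

# Illustrative only: a toy token scheme. Real SharePoint machine keys sign
# ASP.NET payloads, but the principle is the same -- whoever holds the
# signing key can mint tokens the server accepts as genuine.

SERVER_KEY = b"machine-key-normally-secret"  # hypothetical secret

def sign_token(payload: bytes, key: bytes) -> bytes:
    """Server-side issuance: attach an HMAC so the payload can be trusted later."""
    mac = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + mac

def verify_token(token: bytes, key: bytes) -> bool:
    """Server-side check: recompute the HMAC and compare in constant time."""
    payload, _, mac = token.rpartition(b".")
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(mac, expected)

# Legitimate issuance and verification:
token = sign_token(b"user=alice;role=user", SERVER_KEY)
assert verify_token(token, SERVER_KEY)

# An attacker who has stolen SERVER_KEY forges a privileged token that the
# server cannot distinguish from a real one:
forged = sign_token(b"user=attacker;role=admin", SERVER_KEY)
assert verify_token(forged, SERVER_KEY)
```

This is why key theft is categorically worse than a single compromised password: rotating one account fixes nothing, because the forgery capability persists until the keys themselves are replaced everywhere they are trusted.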
The attack was attributed to Chinese state‑sponsored threat groups such as Linen Typhoon and Violet Typhoon. Although no classified nuclear data was confirmed lost, the breach raised serious concerns about credential compromise, lateral movement, and long‑term infiltration risks.
The message was unmistakable: the United States is being probed, mapped, and tested. The battlefield is being shaped long before any open conflict begins.
________________
The Middle East forms the third pivot point of this global convergence. This is all the more apparent in the ongoing crisis with Iran, which has emerged as one of the most volatile flashpoints in the international system. Over the past several years, US and allied intelligence agencies have warned that Iran and its network of proxy militias—Hezbollah in Lebanon, the Houthis in Yemen, and various Iraqi and Syrian groups—have dramatically expanded their use of drones, precision munitions, and cyber tools. These capabilities are no longer crude or improvised; they are increasingly sophisticated, networked, and strategically coordinated.
A turning point came as regional tensions escalated in 2023 and 2024, when Iranian-aligned groups began launching sustained drone and missile attacks against commercial shipping, US bases, and critical infrastructure across the region. What once required state-level resources can now be executed by small teams using inexpensive, AI-assisted systems. These attacks demonstrate a new reality: asymmetric actors can project power at a scale once reserved for nation-states.
Iran’s strategy extends beyond kinetic operations. Its cyber units have been linked to intrusions targeting energy facilities, government ministries, and transportation networks across the Gulf and beyond. These operations mirror a broader pattern seen in Russia and China: shaping the information environment long before open conflict. By probing defenses, mapping networks, and testing response times, Iran and its proxies are learning to exploit vulnerabilities in ways that blur the line between war and peace.
The United States has responded by increasing regional deployments, reinforcing air and missile defenses, and expanding intelligence-sharing with partners. Yet even with these measures, officials have expressed growing concern about the speed and adaptability of Iranian-aligned drone operations. Swarms of low-cost UAVs—often modified with commercially available components—have repeatedly challenged traditional air defense systems. In several instances, US and allied forces have had to expend expensive interceptors to defeat drones costing only a fraction of that price.
This dynamic reflects a deeper shift. The Middle East is no longer just a theater of conventional conflict; it has become a proving ground for the next generation of asymmetric, AI-enabled warfare. Iran’s proxies are experimenting in real time with tactics that combine autonomy, precision, and deniability. Each engagement becomes a data-gathering exercise, feeding back into a cycle of rapid iteration and adaptation.
The implications extend far beyond the region. Lessons learned in the Red Sea, the Levant, and the Gulf are shaping global military thinking. They reveal how quickly non-state actors can adopt advanced technologies, how vulnerable critical infrastructure has become, and how easily regional crises can escalate into broader confrontations when autonomous systems are involved.
The message is clear: the Middle East is no longer a peripheral arena. It is a central laboratory for the emerging era of algorithmic conflict—where drones, cyber tools, and AI-enabled systems are rewriting the rules of deterrence and destabilizing the balance of power. The Iran crisis is not an isolated flare-up; it is part of the same global transformation reshaping Russia, China, and the future of warfare itself.
________________
Even as Russia, China, and parts of the Middle East reshape the character of modern conflict, a fourth pivot point has been unfolding in parallel—less visible than battlefield innovations, but potentially far more consequential. Within defense and intelligence circles, this shift reflects growing concern that China is accelerating ahead in advanced AI platforms, and that artificial intelligence itself has become the central strategic variable shaping future conflict.
This development epitomizes the great convergence now underway. It did not arise as a formal program or doctrine, but rather emerged gradually as analysts, researchers, and strategists across the Pentagon, the intelligence community, DARPA, and the national laboratories recognized they were confronting the same challenge from different directions. Artificial intelligence, autonomous systems, cyber operations, and biological engineering were advancing at extraordinary speed—and increasingly influencing one another—yet no single office or institution was structured to study their combined effects. What began as scattered briefings, internal papers, and cross‑agency workshops slowly coalesced into a shared analytical framework.
This framework has no director, no budget, and no headquarters. It exists as an informal but increasingly influential way for existing institutions to coordinate their thinking around a technological shift too large, too fast, and too interconnected for traditional organizational boundaries. Its value lies not in bureaucratic authority, but in connecting developments that would otherwise be analyzed in isolation. That was the key failure before 9/11: agencies did not share information across departmental lines. That mistake must not be repeated with AI.
(Although some public reporting has linked “Genesis” to a Trump‑era executive order, that reference concerns a separate initiative that happens to share the name. When Donald Trump refers to Genesis, he is describing a formal executive‑branch program launched in November 2025—an AI‑driven scientific research effort known as the Genesis Mission. That program is distinct from the analytical Genesis framework discussed here, which did not originate from any presidential directive. It grew organically inside the technical and strategic communities long before the term appeared in political messaging.)
At its core, this framework reflects a growing recognition that the technologies transforming the battlefield in Ukraine are part of a far broader transformation—one that will define the future of warfare itself. It focuses on how four fast‑moving domains—AI, autonomy, cyber operations, and biological engineering—interact, amplify one another, and ultimately reshape national power. It raises questions traditional defense structures were never designed to answer. How does AI‑driven autonomy compress the tempo of war? What happens when cyber operations, drone swarms, and bio‑engineered tools evolve together? How do you deter an adversary whose capabilities are software‑defined, opaque, and constantly changing?
In this sense, the framework is less about predicting the future than acknowledging that the future has already arrived.
While public debate often fixates on what artificial intelligence is or is not, this way of thinking reflects a deeper strategic reality: software, data, and autonomous decision‑making are becoming as central to national power as nuclear physics was in the mid‑twentieth century. Just as nuclear technology reshaped geopolitics, AI is now doing the same. Its impact will not be abstract. It may rival the upheavals triggered by the atomic and thermonuclear revolutions. And like nuclear technology, AI cannot be put back in the bottle. Once unleashed, it becomes a permanent force shaping military strategy, economic competition, intelligence gathering, and global influence.
This is why leading intelligence agencies increasingly treat artificial intelligence not merely as a tool, but as a strategic domain in its own right. Nations that master it will shape the future. Those that fall behind will be shaped by it.
________________
All of this points to a growing fragility in deterrence, the first of the four models balanced on the strategic fulcrum. For nearly eight decades, global stability rested on a simple, brutal logic: nuclear deterrence works. It works because adversaries can see each other’s capabilities, understand the consequences of escalation, and trust that no side will misinterpret the other’s intentions. Nuclear weapons, for all their horror, created a terrible clarity. Their destructive power was visible, measurable, unmistakable. Deterrence depended on transparency, and on something else that is now eroding: rational leadership.
(We remind you of a line from our current book on Audible, The Sword of Damocles: Our Nuclear Age: “The Cold War remained largely cold because rational leaders, on all sides, remained rational. We may now be in a new age.”)
The problem is that AI breaks this logic.
Artificial intelligence introduces opacity, speed, and unpredictability into military decision-making—qualities that erode the foundations of deterrence. Unlike nuclear arsenals, AI capabilities cannot be easily counted, monitored, or verified. They evolve in software, not silos. They can be upgraded overnight and deployed in ways that remain invisible until the moment they are activated.
This creates a world where nations may no longer know what their adversaries are capable of, how quickly those capabilities can change, whether an attack is imminent or simulated, or whether a system is acting under human direction or autonomous logic. The result is a strategic environment defined by radical uncertainty.
Speed becomes a destabilizing force. Traditional deterrence assumes time—time to detect, interpret, deliberate, and respond. AI compresses that timeline to seconds. Autonomous systems can identify targets, coordinate attacks, and execute operations faster than human decision-makers can even understand what is happening.
In such an environment, hesitation becomes vulnerability. Caution becomes risk. And the pressure to automate responses grows. This is the nightmare scenario defense planners quietly discuss: a world where autonomous systems interact at machine speed, escalating crises before diplomats or generals even understand what triggered them.
Then comes the problem of attribution. Deterrence depends on knowing who is responsible for an attack. But AI-enabled cyber operations blur attribution. A sophisticated intrusion may appear to come from one actor while being routed through dozens of compromised systems. Deepfake communications can mimic military orders or political directives. Autonomous malware can propagate without clear human initiation. If a nation cannot confidently identify the source of an attack, it cannot respond proportionally. And if it cannot respond proportionally, deterrence collapses.
AI also introduces a dangerous incentive structure—one that pushes nations toward first-mover advantage. Because AI capabilities are opaque, rapidly evolving, and difficult to verify, states may fear falling behind. That fear creates pressure to act preemptively: to launch cyber operations before an adversary hardens its networks, to disable satellites before they can be used against you, or to deploy autonomous weapons before an opponent’s systems mature. The strategic balance shifts from mutual vulnerability—the foundation of nuclear deterrence—to mutual uncertainty, a far more volatile condition.
That uncertainty erodes human judgment. And here lies the most destabilizing factor of all: the psychological dimension. Deterrence depends on leaders believing that no rational actor would initiate nuclear war. But the growing realization within defense circles is that Russia, China, or both together may be willing to use nuclear weapons to achieve strategic objectives. If even a single nuclear weapon is used, the entire architecture of deterrence collapses.
In truth, nuclear deterrence has always been fragile because it relies on the assumption that all leaders will remain rational under extreme pressure. So far, reason has held. Many argue that nuclear deterrence prevented a third world war. Even Nikita Khrushchev—famous for banging his shoe on a UN desk—remained rational when it mattered most. During the Cuban Missile Crisis, he and President Kennedy kept their composure, shaped by their lived experiences of the Second World War.
When we authors spoke at length with his son Sergei Khrushchev in 2017, he emphasized that his father’s restraint was rooted in that memory, particularly his experiences at Stalingrad.
Deterrence therefore has always relied on human judgment—leaders weighing risks, interpreting signals, and making decisions under pressure. But as AI systems become more capable, the temptation grows to delegate more and more of that judgment to machines.
The danger is not that AI will “decide” to start a war. The danger is that humans, overwhelmed by speed and complexity, will allow automated systems to shape decisions they no longer fully understand. In this sense, the fragility of deterrence in an AI age is not a technological problem but a human one. The world is entering a period where the pace of conflict may exceed the pace of comprehension.
Nowhere is that intersection of autonomy and strategic consequence more stark than in systems designed to operate at the highest levels of deterrence. Russia’s announced development of the Poseidon system, also reported as Status‑6 and designated Kanyon by NATO, illustrates how AI autonomy and strategic weapons can intersect in alarming ways.
First revealed publicly in 2018, Poseidon is described as a nuclear‑powered, undersea autonomous torpedo capable of long‑range operation and high speed at depth. Open‑source reporting and Western assessments attribute to it the potential to carry conventional or nuclear payloads and to be launched from modified submarines operating in Arctic and other waters. Some analysts have suggested scenarios in which a large‑yield warhead could be used to create catastrophic coastal effects and long‑lasting contamination if “salted” with cobalt. (To “salt” a nuclear device is to encase it with a material, commonly cobalt‑59, that the explosion transmutes into the long‑lived radioactive isotope cobalt‑60, blanketing an area and rendering it unusable; hence the classical phrase “to salt the earth.”)
Of course, operational and reliability challenges are still thought to be substantial before such weapons could be fielded at scale. Yet even where technical limits exist, the concept of an autonomous strategic weapon forces a rethinking of legal, ethical, and stability frameworks. This is no mere nuclear weapon: at a projected yield of one hundred megatons, it goes beyond the concept of deterrence to the level of a crime against humanity. The very concept of an autonomous, nuclear‑armed undersea weapon raises profound implications.
________________
Before concluding, it is essential to focus on a second model balanced on a strategic fulcrum. Drones and unmanned aerial systems are now the face of modern warfare. In 2023, the US Department of Defense launched the Replicator Initiative, aiming to deploy thousands of autonomous, low‑cost, expendable drones across air, land, and sea—war‑gaming swarms of AI‑enabled systems designed to overwhelm traditional defenses.
This marks a decisive break from decades of American military thinking. Instead of relying on a handful of large, exquisite platforms—aircraft carriers, stealth bombers, manned fighter jets—the Pentagon is investing in distributed, rapidly produced systems that operate at machine speed. For the first time, US military planning openly acknowledges that future battles may be shaped less by human‑piloted machines and more by coordinated fleets of autonomous robotic systems.
Replicator is not just a procurement program; it is a doctrinal shift—a recognition that the geometry of power is changing, and that the United States must adapt to a world where software, autonomy, and scale matter as much as steel.
This shift was reinforced by advances in air‑combat AI. DARPA’s Air Combat Evolution (ACE) program demonstrated that AI agents can outperform human pilots in high‑speed simulated engagements and, later, in real‑world flight tests. In a widely reported demonstration, an AI agent defeated an experienced Air Force pilot across a series of rapid dogfights, executing maneuvers with precision and reaction times beyond human capability.
ACE was not merely a technical milestone; it was a strategic signal. If AI can outfly trained pilots today, the future of airpower becomes almost unimaginable. Autonomous fighters, coordinated drone swarms, and near‑real‑time tactical decision‑making—executed with minimal human involvement—are no longer theoretical. They are redefining the role of the pilot. In short, the cockpit is no longer the center of air combat—the algorithm is.
The war in Ukraine has already demonstrated this reality. It is widely seen as the first major conflict in which AI and drones play central, everyday roles. A pivotal operation on June 1, 2025—known as Operation Spiderweb—involved the covert deployment of small FPV drones deep inside Russian territory to strike long‑range aviation assets at multiple air bases. Reports describe drones hidden in wooden cabins transported by trucks, remotely opened near launch points, and flown toward targets in coordinated waves. That engagement combined logistics, remote piloting, and preprogrammed autonomy.
What made Spiderweb so notable was its scale and method. It used smuggled launch platforms and large numbers of small drones to reach strategic targets previously thought out of range, demonstrating a new form of asymmetric strike that blends low‑cost hardware with sophisticated planning. Technical accounts note operators used cellular networks and ArduPilot‑style navigation, and when communications were lost, certain drones switched to preplanned or autonomous behaviors—suggesting onboard autonomy and limited AI features.
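The link‑loss fallback described in those technical accounts follows a common control pattern: fly operator commands while the radio link is alive, and switch to a preplanned route after a timeout with no packets. The sketch below illustrates that pattern in miniature; the class, mode names, and timeout threshold are illustrative assumptions, not taken from any real flight stack.

```python
import time
from dataclasses import dataclass, field

# Minimal sketch of a link-loss failsafe, assuming a simple two-mode design:
# MANUAL while operator packets arrive, AUTO_WAYPOINT after the link times out.

LINK_TIMEOUT_S = 2.0  # hypothetical threshold for declaring the link lost

@dataclass
class DroneController:
    waypoints: list                       # preplanned fallback route
    last_packet_time: float = field(default_factory=time.monotonic)
    mode: str = "MANUAL"

    def on_operator_packet(self, now: float) -> None:
        """Record a received operator packet; a live link restores manual control."""
        self.last_packet_time = now
        self.mode = "MANUAL"

    def tick(self, now: float) -> str:
        """Called every control cycle; returns the mode the autopilot should fly."""
        if now - self.last_packet_time > LINK_TIMEOUT_S:
            self.mode = "AUTO_WAYPOINT"   # link lost: fall back to preplanned route
        return self.mode

ctl = DroneController(waypoints=[(50.45, 30.52), (50.40, 30.60)])
t0 = time.monotonic()
assert ctl.tick(t0 + 1.0) == "MANUAL"          # within timeout: operator flies
assert ctl.tick(t0 + 5.0) == "AUTO_WAYPOINT"   # link considered lost
ctl.on_operator_packet(t0 + 6.0)
assert ctl.tick(t0 + 6.5) == "MANUAL"          # link restored
```

The strategic point is that even this trivial amount of onboard logic converts a jammed or severed link from a mission kill into a mode change, which is why jamming alone no longer reliably stops such drones.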
Both Russia and Ukraine now rely on AI‑assisted targeting tools, autonomous loitering munitions, and machine‑learning systems that sift satellite images, radio signals, and battlefield footage. Automated logistics and predictive analytics are central. Drones—once niche tools—have become the defining technology of the conflict. Ukraine uses AI‑enabled software to spot troop movements, predict missile trajectories, and coordinate large numbers of small, inexpensive drones that can overwhelm traditional defenses. Russia fields its own mix of autonomous systems, AI‑assisted electronic warfare that jams or hijacks incoming drones, and deepfake propaganda aimed at confusing soldiers and civilians.
Russian forces are now modifying their long‑range Geran‑2 drones to act as “motherships” carrying and releasing small FPV attack drones deep inside Ukraine. Drone expert Serhii “Flash” Beskrestnov and Ukrainian intelligence stress that this is no longer experimental—it is operational. Videos and wreckage show Geran‑family drones fitted with mounts for FPVs and launching them during flight. The concept is simple but dangerous: the Geran travels hundreds of miles, bypasses jamming, and then drops off cheap FPV strike drones over sensitive rear‑area targets. This extends the reach of Russia’s FPV fleet and adds another layer to its evolving drone strategy.
These Geran‑2 “motherships” are not fully autonomous. The Geran flies along pre‑programmed GPS waypoints, but the smaller FPV drones are controlled by human operators using first‑person‑view headsets and radio links to guide them toward specific targets such as energy infrastructure.
Ukraine, by contrast, fields more autonomous systems, has mastered more advanced AI‑assisted targeting, and continues to innovate at a faster pace.
We stress again, drones are the face of modern warfare!
________________
As AI transforms the physical battlefield, it is also reshaping the information battlefield, the third model balanced on the strategic fulcrum: the manipulation of information through social media. This is now an arena where perception, trust, and narrative control can matter as much as territory.
This is where deepfakes and generative media enter the picture. These tools can create highly realistic fake audio and video, making them powerful instruments of psychological and political manipulation. One early example came in 2022, when a deepfake video of President Volodymyr Zelensky urging Ukrainians to surrender briefly circulated online before being exposed as fake. Similar fabrications have targeted military leaders and public officials in other countries.
Incidents like these reveal a new vulnerability: AI-generated media can erode trust, disrupt institutions, and create confusion during moments of crisis. As the technology improves, intelligence agencies warn that deepfakes could be used to fabricate diplomatic statements, trigger false alarms, manipulate public opinion, sow chaos during military operations, and undermine confidence in legitimate leadership. In an era where information moves faster than verification, the ability to distort reality becomes a strategic weapon.
Deepfakes can also challenge public confidence. A recent example involved convincing deepfake videos of Professor Avi Loeb circulating on YouTube, delivering fabricated updates on the 3I/Atlas project—showing how easily trusted voices can be imitated. Intelligence agencies warn that adversaries could use deepfakes to impersonate political leaders, issue fake military orders, or manipulate financial markets during moments of tension. The danger is not only that people might believe a falsehood—it is that they may stop believing anything at all. That kind of widespread doubt creates an “information fog” that weakens democratic institutions, complicates crisis response, and erodes public trust.
________________
At the same time, the rapid advance of biological engineering is opening a new frontier of risk, the fourth model balanced on the strategic fulcrum. It is arguably the most dangerous of all, above even nuclear weapons, and the ultimate Armageddon scenario.
Tools that once required specialized labs and years of training are becoming cheaper, faster, and increasingly automated. AI‑assisted design platforms can now help users model proteins, optimize genetic constructs, or simulate pathogen behavior—dramatically lowering the barrier for experimentation. While these technologies hold enormous promise for medicine and agriculture, they also introduce a profound asymmetry: small groups or even individuals may soon wield capabilities that previously belonged only to nation‑states. The danger is not just deliberate misuse, but accidental release, poorly secured research pipelines, and the cascading effects of biological data manipulated or generated by AI systems. In a world already strained by geopolitical instability and information disorder, the convergence of AI and biological engineering represents a new class of risk, one that traditional deterrence frameworks were never built to contain.
________________
The Great Convergence is not a future scenario; it is the strategic environment of the present, and it is accelerating.
The world must confront this reality with clarity, humility, and urgency. The systems we build today will shape the boundaries of conflict tomorrow, and once unleashed, they cannot be recalled. The future of war is arriving faster than the world’s ability to understand it. The challenge now is whether humanity can adapt quickly enough to prevent the next great crisis from being one we do not fully comprehend until it is too late.
The most sensitive and dangerous domain is where these dynamics converge with nuclear command and control. Automation in nuclear systems is not new; since the Cold War, both superpowers have used computers to accelerate warning and assessment. As delivery systems grow faster, decision windows shrink. AI can strengthen early warning networks by reducing false positives and helping analysts sift vast streams of sensor data, yet it also creates the risk of overreliance on machine judgments for decisions with existential consequences.
The issue of AI and nuclear weapons is the mother of all concerns.
This tension drives ongoing debate over whether any nuclear launch authority should ever be automated. Proponents argue that AI could improve detection accuracy and reduce human error, while critics warn that misplaced trust in algorithms could trigger catastrophic mistakes. The United States has historically resisted delegating launch decisions to machines, emphasizing human judgment and centralized authority.
By contrast, Cold War–era Soviet systems such as Perimeter—often called “Dead Hand”—were explicitly designed as last-resort automated retaliation mechanisms. Built to guarantee a second strike if leadership and communications were destroyed, Perimeter monitored seismic, radiation, pressure, and communications sensors for signs of a nuclear attack. If it judged that human command was absent, it could transmit launch orders to surviving forces, including via a dedicated command missile. Public accounts suggest the system was fielded in the 1980s and maintained into the post-Soviet era, typically kept inactive except during crises or heightened alert.
US federal agencies are rapidly adopting AI to improve efficiency and resilience, but its role in nuclear command and control remains deeply contested. The Pentagon’s 2022 Nuclear Posture Review signaled growing interest in integrating AI into Nuclear Command, Control, and Communications (NC3). Gen. Anthony Cotton of US Strategic Command has argued that AI can strengthen decision making—so long as humans remain firmly “in the loop.” His comments sparked a public debate hosted by CSIS, where experts Sarah Mineiro and Paul Scharre took opposing positions. Mineiro supported expanding AI’s use in NC3 for tasks such as chip design, signal processing, and attack assessment modeling, while drawing a hard line against allowing AI any role in nuclear weapons release. Scharre countered that AI should not be used in NC3 at all, warning that it lacks human judgment, is vulnerable to manipulation, and cannot meet the zero-error standards required for nuclear operations. Despite their differences, both agreed on one point: AI cannot replace human decision making in nuclear affairs, and strict safeguards are essential.
History offers a stark reminder of why human judgment remains indispensable. On September 26, 1983, a Soviet early warning system falsely signaled that several US nuclear missiles were inbound. Lieutenant Colonel Stanislav Petrov faced a moment no one should ever confront. The alert insisted an attack was underway. Protocol demanded immediate reporting, because the logic of deterrence required swift retaliation. Yet Petrov sensed something was wrong. Trusting intuition over the machine’s apparent certainty, he judged the alarm a malfunction. In doing so, he broke protocol and risked severe punishment. But his decision prevented what could have become a nuclear exchange triggered by a simple technical glitch.
The lesson is unmistakable: When the system failed, it was a human—not a machine—who saved the world.
Epilogue:
We remind you all, AI has better purposes than fighting wars. When will the human race learn how to live together in peace and unity? After all, how hard is it to just get along? Here is a good example of the power of AI for the benefit of humanity:
By James Hall
St. Jude Children’s Research Hospital has taken a major step into the future of medical science. In partnership with the University of Toronto, researchers at St. Jude have become the first in the world to use quantum computing to successfully guide a drug‑discovery project that was later confirmed in real laboratory experiments. Their work, published in Nature Biotechnology, is already being recognized as a turning point—not only for quantum computing, but for the entire field of artificial intelligence.
As www.authorshall.com has been reporting, the future is here. AI and quantum computing are our new reality.
Drug discovery is one of the most difficult challenges in modern medicine. Scientists must sort through millions of possible molecules to find the few that might become safe and effective treatments. Classical AI models already help narrow the search, but even the most powerful traditional computers struggle to capture the full complexity of chemistry.
Molecules behave in ways that are deeply mathematical and often too intricate for classical systems to model with complete accuracy.
This is where quantum computing changes the picture. Quantum processors operate using the strange rules of quantum physics, allowing them to represent information in richer, more flexible ways. The St. Jude team built a hybrid architecture that allowed classical AI and quantum computing to work side by side. The classical AI model learned patterns in chemical data, while the quantum processor generated deeper, more expressive representations of the molecules being studied. When these quantum‑enhanced features were fed back into the AI model, its predictions improved substantially.
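To make the hybrid idea concrete, here is a minimal, purely illustrative sketch of the pattern described above. It is not St. Jude's actual method: it uses a tiny NumPy-simulated angle-encoding feature map as a stand-in for a real quantum processor, concatenates those "quantum" features with classical descriptors, and fits a simple linear model on synthetic data. All data and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantum_feature_map(x, n_qubits=3):
    """Simulated angle-encoding feature map (stand-in for quantum hardware).

    Each input value becomes a Y-rotation angle on one qubit of a small
    simulated register; the measurement probabilities of the resulting
    state serve as an expanded, nonlinear feature representation.
    """
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0  # start in the |00...0> basis state
    for i, xi in enumerate(x[:n_qubits]):
        c, s = np.cos(xi / 2), np.sin(xi / 2)
        ry = np.array([[c, -s], [s, c]])  # single-qubit RY(xi) gate
        op = np.eye(1)
        for q in range(n_qubits):  # embed the gate on qubit i
            op = np.kron(op, ry if q == i else np.eye(2))
        state = op @ state
    return state ** 2  # measurement probabilities as features

# Synthetic "molecular descriptors" and activity labels (toy data only).
X = rng.normal(size=(40, 3))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(float)

# Hybrid step: concatenate classical descriptors with quantum-derived
# features, then fit a simple least-squares linear model on the result.
Q = np.array([quantum_feature_map(x) for x in X])
H = np.hstack([X, Q, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(H, y, rcond=None)
preds = (H @ w > 0.5).astype(float)
accuracy = (preds == y).mean()
```

In a real pipeline the feature map would run on quantum hardware and the classical model would be a deep network, but the division of labor is the same: the quantum side supplies richer representations, and the classical side learns from them.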
The most important part is what happened next. The team took the top candidates suggested by this hybrid system and tested them in the lab. Two of the molecules showed real, measurable promise.
This marks the first time that a quantum‑enhanced AI model has produced drug candidates that were validated experimentally—a milestone many in the field have been anticipating for years.
The breakthrough signals something larger than a single scientific achievement. It shows that quantum computing is no longer confined to theory or small‑scale demonstrations. It is beginning to enter real research pipelines, where it can influence decisions, guide experiments, and accelerate discovery. But perhaps the deeper story is that AI itself is now stepping into the quantum era. Instead of replacing AI, quantum processors are expanding what AI can see and understand, revealing patterns that classical machines cannot easily capture.
In simple terms, classical AI is like a powerful camera, and quantum computing is like adding a new lens that reveals details the camera could never see before. Together, they create a clearer picture—one that may lead to new medicines and new hope for children facing life‑threatening diseases.
St. Jude’s achievement is a glimpse of what the future of scientific discovery may look like: classical intelligence and quantum intelligence working together, each amplifying the other. It is a quiet but profound shift, and it may be remembered as the moment when AI truly entered the quantum age.
Suggested Reading:
Allison, Graham. Destined for War: Can America and China Escape Thucydides’s Trap? Boston: Houghton Mifflin Harcourt, 2017.
Altman, Sam, Greg Brockman, and Ilya Sutskever. “Planning for AGI and Beyond.” OpenAI, 2023.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Brundage, Miles, et al. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” Oxford University, 2018.
Crootof, Rebecca. “The Killer Robots Are Here: Legal and Policy Implications.” Cardozo Law Review 36, no. 5 (2015): 1837–1915.
Davis, Paul K., and Angela O’Mahony. Artificial Intelligence and National Security. Santa Monica, CA: RAND Corporation, 2019.
Fisher, Max. The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World. New York: Little, Brown and Company, 2022.
Geist, Edward, and Andrew J. Lohn. How Might Artificial Intelligence Affect the Risk of Nuclear War? Santa Monica, CA: RAND Corporation, 2018.
Horowitz, Michael C. The Diffusion of Military Power: Causes and Consequences for International Politics. Princeton: Princeton University Press, 2010.
Kello, Lucas. The Virtual Weapon and International Order. New Haven: Yale University Press, 2017.
Kissinger, Henry, Eric Schmidt, and Daniel Huttenlocher. The Age of AI: And Our Human Future. New York: Little, Brown and Company, 2021.
Krepinevich, Andrew F. The Origins of Victory: How Disruptive Military Innovation Determines the Fates of Great Powers. New Haven: Yale University Press, 2023.
Lee, Kai‑Fu. AI Superpowers: China, Silicon Valley, and the New World Order. Boston: Houghton Mifflin Harcourt, 2018.
Lindsay, Jon R. Information Technology and Military Power. Ithaca: Cornell University Press, 2020.
Payne, Keith B. The Great American Gamble: Deterrence Theory and Practice from the Cold War to the Twenty‑First Century. Fairfax, VA: National Institute Press, 2008.
Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton, 2018.
Singer, P. W., and Emerson T. Brooking. LikeWar: The Weaponization of Social Media. Boston: Houghton Mifflin Harcourt, 2018.
Singer, P. W., and August Cole. Ghost Fleet: A Novel of the Next World War. Boston: Houghton Mifflin Harcourt, 2015.
Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf, 2017.
Tucker, Patrick. The Naked Future: What Happens in a World That Anticipates Your Every Move? New York: Current, 2014.
Waltz, Kenneth N. The Spread of Nuclear Weapons: More May Be Better. Adelphi Paper 171. London: International Institute for Strategic Studies, 1981.
Wright, Nicholas. “AI, China, and the Future of Geopolitics.” Foreign Affairs 97, no. 3 (2018): 44–52.
“We built autonomous systems to remove humans from danger, only to discover they also remove humans from judgment.”
Passage from The Sword of Damocles: Our Nuclear Age, by Michael and James Hall.
Art by James Hall.