Military Applications
Autonomous Weapons and AI-Enhanced Firepower (Offense): AI is rapidly weaponizing sensors and platforms on land, in the air and at sea. Modern militaries deploy autonomous or “fire-and-forget” systems that can identify, track and attack targets without human input. Examples include the U.S. Phalanx Close-In Weapon System (CIWS) and the Israeli Harpy loitering munition, which autonomously detect threats (anti-ship missiles, radar emissions) and engage them. In Ukraine, small AI-enabled drones now account for the majority of battlefield strikes; some estimates attribute 70–80% of combat casualties to UAV attacks, and AI-assisted targeting has reportedly boosted first-person-view (FPV) drone hit rates from ~50% to ~80%. The U.S. Department of Defense’s Project Maven similarly uses machine learning to sift imagery for possible targets, and Israel’s “Lavender” AI system reportedly generated up to 37,000 Hamas-linked strike targets in Gaza. The trend is toward ever-smaller, swarming and autonomous systems: analysts envision “minotaur warfare,” in which a central AI acts as the “brain” of an operation, directing fleets of drones, missiles and even manned units with minimal human oversight.
Command, Control and Decision AI (Mixed): Beyond weapons, AI is being integrated into command systems and planning. AI-driven tools can analyze battlefield data far faster than humans, recommending courses of action (COAs) or alerting commanders to threats. U.S. defense initiatives such as the DIU’s “Thunderforge” program apply large language models and simulations to accelerate theater-level planning: the goal is to automatically synthesize vast intelligence data, generate multiple COAs, and even “wargame” future scenarios at machine speed. Studies of the U.S. Army’s military decision-making process (MDMP) similarly conclude that narrow AI can rapidly process sensor feeds and predict enemy moves, enhancing commanders’ understanding and streamlining orders. In practice, human officers would still authorize lethal actions, but AI can greatly speed up mission analysis, resource allocation, and logistics under fire.
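To make the planning-assistant idea concrete, the sketch below is a hypothetical Python outline of such a loop, not a description of Thunderforge or any fielded system: a text-generation callable drafts several COAs from an intelligence summary, a fast simulation scores them, and a human commander still makes the final choice. All names, the stub model, and the risk metric are illustrative assumptions.

```python
# Hypothetical sketch of an LLM-assisted planning loop: draft COAs, then
# score them in a crude "wargame" pass. The llm argument is any
# text-generation callable; a stub is used here so the sketch runs alone.
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    description: str
    risk_score: float  # lower is better; filled in by the wargame pass

def generate_courses_of_action(intel_summary: str, llm, n: int = 3):
    """Ask the model for n distinct COAs grounded in the intelligence summary."""
    coas = []
    for i in range(n):
        prompt = (f"Intelligence summary:\n{intel_summary}\n"
                  f"Draft course of action #{i + 1} (one paragraph):")
        coas.append(CourseOfAction(name=f"COA-{i + 1}",
                                   description=llm(prompt),
                                   risk_score=float("nan")))
    return coas

def wargame(coas, simulate):
    """Score each COA with a fast simulation; a human still approves the choice."""
    for coa in coas:
        coa.risk_score = simulate(coa.description)
    return sorted(coas, key=lambda c: c.risk_score)

# Stubs so the sketch is self-contained and runnable.
dummy_llm = lambda prompt: "Advance along axis A with UAV screening."
dummy_sim = lambda text: (len(text) % 10) / 10  # placeholder risk metric
ranked = wargame(generate_courses_of_action("Enemy armor massing near bridge.", dummy_llm), dummy_sim)
print([c.name for c in ranked])
```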
Counter-AI and Anti-Drone Systems (Defense): As AI weapons proliferate, militaries are investing in AI-driven defenses. Counter-UAV: Automated sensors, radars and vision systems use AI to detect and classify incoming drones and missiles. For example, the Pentagon’s “Replicator” initiative includes AI-powered radars and radio-frequency sensors that automatically spot hostile UAVs and cue countermeasures. Directed-energy weapons (lasers) and interceptor missiles are increasingly linked to AI fire control: the Naval Postgraduate School has developed an AI system that instantly recognizes incoming drone swarms and aims shipboard lasers to disable them. Counter-AI Defenses: Forces are also planning ways to confuse enemy AI. One analysis warns that over-reliance on AI creates new vulnerabilities that adversaries will exploit. In practice, defenders may use electronic warfare and decoys (e.g. altered visual or infrared signatures) to mislead enemy sensors. Some armies are even training their own “red team” AI agents to probe weaknesses in friendly AI and communications; MIT researchers, for instance, created an “adversarial AI” that mimics hackers in order to test networks before a real attack. In sum, AI is used on both sides – as a force multiplier and as a new axis of electronic and informational warfare.
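As an illustration of the detect-and-cue step described above, the following minimal Python sketch trains a classifier on made-up track features (speed, radar cross-section, micro-Doppler variance) and flags likely drones for a countermeasure system. The features, data and the 0.8 probability threshold are assumptions for demonstration, not a description of any fielded counter-UAV system.

```python
# Toy counter-UAV classification: label radar tracks as "drone" or clutter
# from synthetic features, then cue a countermeasure when confidence is high.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training tracks: [speed m/s, cross-section m^2, Doppler variance]
drones  = rng.normal([20.0, 0.05, 3.0], [5.0, 0.02, 1.0], size=(200, 3))
clutter = rng.normal([ 8.0, 0.03, 0.5], [3.0, 0.02, 0.3], size=(200, 3))
X = np.vstack([drones, clutter])
y = np.array([1] * 200 + [0] * 200)  # 1 = drone, 0 = birds/clutter

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def cue_countermeasure(track):
    """Flag a track for the jamming/fire-control queue if P(drone) is high."""
    p_drone = clf.predict_proba([track])[0, 1]
    return p_drone > 0.8, p_drone

alert, p = cue_countermeasure([22.0, 0.04, 2.8])
print(f"cue={alert}, P(drone)={p:.2f}")
```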
Cybersecurity
AI-Powered Attacks (Offense): Cyber adversaries increasingly leverage AI to automate and scale attacks. Generative models can craft highly convincing phishing emails, spoofed social media posts, or real-time deepfake voices of executives. In one 2024 case, fraudsters staged a fake Teams meeting using a deepfaked voice and video of WPP’s CEO to trick employees. AI agents have also been turned to espionage: recent reports describe a Chinese state hacking campaign (September 2025) in which an “AI agent” (Anthropic’s Claude) autonomously performed reconnaissance, vulnerability scanning, exploit coding and data exfiltration against dozens of global targets, all with minimal human oversight. LLMs and automated tools now write malware and identify network flaws at machine speed. AI also facilitates social engineering: custom phishing scripts and fake personas can be generated rapidly, targeting individuals or groups with personalized disinformation. In short, AI greatly lowers the bar for attackers to mount large-scale, sophisticated campaigns.
AI-Enhanced Defense (Shield): Defenders likewise deploy AI to detect and block threats. Machine learning algorithms continuously scan networks and user behavior for anomalies. Financial firms and utilities, for example, use AI-driven SIEM (Security Information and Event Management) systems to identify suspicious traffic patterns or logins. JPMorgan’s cybersecurity platform employs neural-network models to flag abnormal access attempts across all endpoints. In banking, AI fraud-detection engines (e.g. Mastercard’s Decision Intelligence) analyze every transaction in real time, dramatically lowering payment fraud. Power grid operators use AI for predictive maintenance and intrusion detection: utilities analyze sensor data and load patterns with ML to foresee equipment failures before they occur. Crucially, AI can automate response: when an anomaly is spotted (a spike in outbound data, unusual server behavior, etc.), modern AI systems can quarantine compromised nodes, lock accounts, or reconfigure firewalls in seconds, reducing response time. Sector analyses note that AI is a “force multiplier” for cybersecurity – enabling real-time threat detection, automated response, and adaptive phishing defenses. Research teams have even built AI-based “red team” systems that simulate attacks against networks to improve resilience. In sum, while AI empowers attackers, it also underpins the next generation of defensive cyber shields.
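A minimal sketch of the anomaly-detection step such a pipeline might run is shown below. It assumes a toy feature set (outbound megabytes, login hour, failed logins) and a generic quarantine hook; real SIEM platforms use far richer telemetry and many models.

```python
# Toy SIEM-style anomaly detection: fit an unsupervised model on "normal"
# session features, then quarantine sessions the model flags as outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Baseline of normal sessions: [MB sent out, hour of login, failed logins]
normal = np.column_stack([
    rng.normal(5, 2, 1000),      # modest outbound traffic
    rng.normal(13, 3, 1000),     # business-hours logins
    rng.poisson(0.2, 1000),      # rare failed attempts
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

def triage(session, quarantine):
    """Score a live session; isolate the host automatically if anomalous."""
    if detector.predict([session])[0] == -1:   # -1 = anomaly
        quarantine(session)
        return "quarantined"
    return "ok"

print(triage([500.0, 3.0, 12.0], quarantine=lambda s: None))  # bulk exfil at 3 a.m.
print(triage([4.0, 14.0, 0.0], quarantine=lambda s: None))    # typical session
```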
Political Warfare and Propaganda
AI in Influence Campaigns (Offense): In the political and information domain, AI is deployed for mass persuasion, disinformation and social manipulation. Automated botnets and scripted accounts amplify propaganda, post at scale, and simulate grassroots support. AI enables “micro-targeting,” in which content is personalized to voters’ profiles, borrowing techniques from commercial advertising. Crucially, deepfakes (AI-generated video or audio of real figures) have emerged as a disinformation weapon. In mid-2024, for example, a fabricated video falsely claimed that Ukraine’s First Lady Olena Zelenska had bought a luxury Bugatti; it was seeded widely by Russian-aligned networks and drew over 20 million views on platforms like X and Telegram. (The figure below, labeled “FAKE,” illustrates the kind of AI-forged clip used.) Earlier in the war, in 2022, the first widely noticed use was a deepfake of Ukrainian President Zelensky urging surrender, followed by a retaliatory fake of Putin announcing Russia’s defeat. Analysts warn that future elections could see “swarms” of AI bots autonomously coordinating disinformation: a group of AI and media experts cautioned (Jan 2026) that “collaborative, malicious AI agents” could infiltrate online communities and fabricate fake consensus at scale. Early signs were seen in Taiwan’s, India’s and Indonesia’s 2024 elections, where AI-crafted content and synthetic media were used in influence operations. Academic reviews confirm an explosion of AI-generated misinformation: fake-news sites have grown ten-fold since 2019, and by 2024 thousands of deepfake videos were circulating. Bots still drive roughly a quarter of social media traffic, and AI-enabled personalization ensures political ads can reach large audiences with greater impact.
Figure: A still from an AI-generated deepfake video (marked “FAKE”) of a public figure, illustrating how realistic such fabricated clips can be. In 2024, a viral deepfake falsely depicted Ukraine’s First Lady Olena Zelenska in a luxury car; the widespread circulation of that video on X, Telegram and TikTok (millions of views) shows how AI-enabled disinformation can quickly shape narratives.
AI for Monitoring and Counter-Propaganda (Defense): Countering AI-driven propaganda also relies on algorithms. Social media platforms use ML filters and automated fact-checking to flag and remove manipulative content (though often imperfectly). For instance, Meta (Facebook) rapidly removed the Zelensky surrender deepfake under its policy on “misleading manipulated media.” New regulations require transparency: the EU’s AI Act (2024) bans certain abusive uses such as untargeted facial-recognition scraping and requires that “deepfake” images and videos be clearly labeled. Governments and activists likewise use AI to detect disinformation networks; Recorded Future analysts quickly traced the Zelenska video to a Russian troll farm (“CopyCop”), illustrating automated threat hunting. At the same time, regimes deploy AI for surveillance and censorship: China’s “digital authoritarian” model uses hundreds of millions of cameras with facial and voice recognition, backed by massive biometric databases, to monitor citizens. AI moderates content in places like China and even in democratic states; the Brennan Center notes that U.S. law enforcement (FBI, DHS, police) already analyzes social media data with AI and plans to expand such monitoring. Thus, AI is wielded on all sides in the information war – not only by propagandists but also by defenders (fact-checkers, platform moderators, and even state censors) who seek to filter or counter damaging messages.
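One deliberately simplified example of the kind of signal defenders use to surface coordinated amplification: flag pairs of accounts that push the same URL within seconds of each other. The sketch below is an illustrative assumption, not how any specific platform or threat-hunting team implements it; production systems combine many such signals with network and content analysis.

```python
# Toy coordination detector: group accounts that share the same link almost
# simultaneously, a common tell of scripted amplification networks.
from collections import defaultdict
from itertools import combinations

# Observed shares as (account, url, unix timestamp) tuples
shares = [
    ("acct_a", "http://example.com/fake", 1000), ("acct_b", "http://example.com/fake", 1004),
    ("acct_c", "http://example.com/fake", 1007), ("acct_d", "http://example.com/other", 5000),
]

def coordinated_pairs(shares, window_s=30):
    """Return account pairs that pushed the same URL within a short window."""
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((account, ts))
    flagged = set()
    for posts in by_url.values():
        for (a1, t1), (a2, t2) in combinations(posts, 2):
            if a1 != a2 and abs(t1 - t2) <= window_s:
                flagged.add(tuple(sorted((a1, a2))))
    return flagged

print(coordinated_pairs(shares))  # three coordinated pairs among acct_a/b/c
```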
Strategic Infrastructure Protection
Defending Power Grids and Networks (Defense): Critical infrastructure increasingly uses AI to bolster security and reliability. In the power sector, machine learning enables predictive maintenance and anomaly detection. Utilities apply AI to sensor data and weather forecasts to predict equipment failures and preemptively perform repairs, reducing outages. National labs are exploring generative-AI tools (“grid GPTs”) to provide decision support and real-time optimization for grid operations, aiming for a “cyber- and all-hazards resilient” energy system. For cybersecurity, the energy industry treats AI as a two-way street: AI enhances “real-time threat detection, automated responses, and adaptive defenses” against intrusions. This includes ML-based sensors on pipelines or substations that flag unusual patterns (leaks, tampering, malware) and launch countermeasures. Communication networks similarly use AI to automatically re-route traffic around failures or to spot DDoS attacks.
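The predictive-maintenance idea can be illustrated with a small sketch: fit a model of a transformer’s normal load-to-temperature relationship from historical sensor data, then flag readings that drift well above the prediction. The sensor variables, synthetic data and 3-sigma threshold below are assumptions for demonstration, not a utility’s actual model.

```python
# Toy predictive maintenance: learn the healthy load/temperature relationship
# of a transformer, then alert when observed temperature exceeds expectation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

load_mw  = rng.uniform(10, 100, 500)                       # historical load
oil_temp = 30 + 0.4 * load_mw + rng.normal(0, 2, 500)      # healthy thermal response

model = LinearRegression().fit(load_mw.reshape(-1, 1), oil_temp)
residual_std = np.std(oil_temp - model.predict(load_mw.reshape(-1, 1)))

def check_transformer(current_load_mw, current_temp_c):
    """Flag the unit for inspection if temperature exceeds prediction by >3 sigma."""
    expected = model.predict([[current_load_mw]])[0]
    if current_temp_c - expected > 3 * residual_std:
        return f"ALERT: {current_temp_c:.1f}C vs expected {expected:.1f}C, schedule inspection"
    return "normal"

print(check_transformer(60, 78))   # runs hot for its load: likely anomalous
print(check_transformer(60, 55))   # within the normal band
```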
Financial and Communications Security (Defense): Financial institutions leverage AI to guard against fraud and instability. AI-driven transaction monitoring constantly evaluates risk: for example, Mastercard’s Decision Intelligence platform uses ML to score each payment in real time, drastically cutting fraud rates. Banks apply AI models to detect money laundering, insider trading or cyber intrusions by spotting deviations in trading or login behaviors. In communications, telecom operators utilize AI for network optimization and fault prediction – such as algorithms that anticipate failures in switching stations or reroute bandwidth during outages. AI is also used to analyze threat intelligence feeds and simulate attacks on critical networks, enabling proactive hardening of infrastructure.
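A toy version of real-time transaction scoring is sketched below; the actual models used by Mastercard and banks are proprietary and far more sophisticated. The features (amount, distance from home, recent transaction count), the synthetic data, and the hold threshold are illustrative assumptions.

```python
# Toy real-time payment risk scoring: train on labeled history, then score
# each incoming transaction and hold high-risk ones for review.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Features per transaction: [amount USD, km from home, transactions in last hour]
legit = np.column_stack([rng.exponential(40, 2000), rng.exponential(10, 2000), rng.poisson(1, 2000)])
fraud = np.column_stack([rng.exponential(400, 100), rng.exponential(2000, 100), rng.poisson(6, 100)])
X = np.vstack([legit, fraud])
y = np.array([0] * 2000 + [1] * 100)

scorer = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

def decide(txn, hold_threshold=0.5):
    """Approve, or hold for manual review, based on the model's fraud probability."""
    p_fraud = scorer.predict_proba([txn])[0, 1]
    return ("HOLD" if p_fraud >= hold_threshold else "APPROVE"), round(p_fraud, 3)

print(decide([25.0, 3.0, 1]))       # ordinary purchase near home
print(decide([900.0, 4500.0, 7]))   # large, distant, rapid-fire
```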
Offensive Threats to Infrastructure: On the offensive side, AI could be used to compromise strategic systems. Cyberattacks targeting power grids or financial markets might use AI to find novel vulnerabilities or execute complex ransomware schemes. Autonomous drones or robots could also physically sabotage facilities (e.g. destroying transformers or tapping fiber lines). The U.S. Department of Energy warned of emerging risks: for instance, adversaries might craft adversarial inputs (like spoofed sensor signals) to blind AI-based security cameras or inject false data to trick predictive models. Governments therefore must harden AI itself (defend models against poisoning and evasion) and ensure AI-based controls have human oversight. In practice today, known threats include periodic malware intrusions in grids and attempted breaches of bank networks; defenders respond with AI-enhanced monitoring and manual emergency protocols.
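To show why “hardening AI itself” matters, the sketch below runs a simple FGSM-style robustness probe against a toy linear model: a small, structured perturbation of the input can flip the model’s decision even though no single feature changes much. The data and model are synthetic assumptions; real sensor and vision models are nonlinear, but the red-team idea (probe your own model before an adversary does) is the same.

```python
# Toy adversarial-robustness probe: for a linear classifier, stepping each
# feature by epsilon against the sign of the weight vector shifts the logit
# by epsilon * sum|w|, enough to flip a borderline prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(0, 1, (500, 20))
y = (X @ rng.normal(0, 1, 20) > 0).astype(int)     # synthetic "sensor" labels
model = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm_probe(x, epsilon=0.25):
    """Perturb x by epsilon per feature in the direction that flips the score."""
    w = model.coef_[0]
    direction = -np.sign(w) if model.predict([x])[0] == 1 else np.sign(w)
    return x + epsilon * direction

# Probe the sample closest to the decision boundary.
idx = int(np.argmin(np.abs(model.decision_function(X))))
x = X[idx]
x_adv = fgsm_probe(x)
print("clean prediction:    ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
print("max per-feature change:", np.max(np.abs(x_adv - x)))
```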
Legal and Ethical Implications
International Governance and Treaties: The rapid weaponization of AI has outpaced lawmaking. There is currently no binding treaty specifically governing autonomous weapons, though the issue is under intense discussion. In December 2024 the UN General Assembly overwhelmingly adopted a resolution on Lethal Autonomous Weapons Systems (LAWS), calling for options to prohibit some classes of autonomous weapons while regulating others. This built on years of debate in the UN’s Convention on Certain Conventional Weapons (CCW) forum. Human-rights groups (e.g. Human Rights Watch and the “Stop Killer Robots” coalition) are pressing for a new international convention to ensure “meaningful human control” over any weapon using lethal AI. Some experts have even compared a future AI treaty to nuclear non-proliferation, proposing an arms-control framework for military AI. However, major powers remain divided: the U.S. has expressed interest in norms but not a ban, while countries like China and Russia have resisted earlier UN measures. Meanwhile, civilian AI regulations (e.g. the EU’s AI Act of 2024) explicitly exclude military AI, highlighting the governance gap. Notably, even leading AI companies have shifted stances: OpenAI in 2024 lifted its own restriction on military use of its models, underscoring that corporate and state policies are still adapting to these dual-use technologies.
Ethical and Legal Concerns: The prospect of machines autonomously making life-or-death decisions raises profound moral questions. Autonomous weapons could violate principles of distinction, proportionality and necessity under international law. Human Rights Watch argues that current autonomous weapons systems (AWS) cannot reliably assess context or intent, making their use inherently “arbitrary and unlawful” under the right to life. In law enforcement, a robot gun could not distinguish a peaceful protester from a threatening actor, undermining the right to assembly. Algorithmic biases add another worry: an AI trained on flawed data might disproportionately target marginalized groups, raising discrimination and human-dignity issues. Surveillance is similarly fraught: both military and police AI systems often require mass data collection (facial and voice biometrics, location tracking), risking violations of privacy. For example, China’s social-credit and surveillance apparatus uses AI to score citizens’ behavior, with few legal restraints. Even in democracies, the expansion of AI monitoring can erode civil liberties: U.S. agencies (DHS, FBI, local police) already use AI to scan social media for threats, and there are calls for stricter oversight. Finally, accountability gaps loom: if an autonomous system makes a fatal mistake, it is unclear who is responsible – the developer, the operator, or the state. Many ethicists argue that an AI should never decide to kill without a human in the loop, and that strong legal frameworks (international and national) are needed before fully autonomous weapons are deployed.
Regulatory Responses: In response to these threats, some steps are being taken. A few countries have issued national policies on autonomous weapons (calling for human control or reviews), and like-minded states have proposed voluntary “political declarations” on responsible AI use in defense. Regions are also extending civilian AI laws to the problem: the EU’s rules ban AI-based social scoring, emotion recognition, and predictive policing in most contexts, and require deepfakes to be clearly labeled. On the technical side, research into AI safety and explainability is growing, and open-source tools are being developed to detect deepfakes or adversarial inputs. Nonetheless, experts emphasize that current measures are only partial. Without a global agreement akin to nuclear arms control, the use of AI as a weapon will be governed piecemeal – by national defense policy, arms-control forums, and ethical guidelines – leaving many open questions about compliance, enforcement and the future “rules of the road.”
Summary Table
| Domain | AI Offensive Uses | AI Defensive Uses | Examples |
| --- | --- | --- | --- |
| Military | Autonomous/“fire-and-forget” weapons (loitering drones, missiles); AI-driven target recognition and precision strikes. | AI-enabled air/missile defense; autonomous counter-UAV systems; decision-support and planning AI. | Ukraine’s AI-guided FPV drones (reportedly 70–80% of combat casualties); US Project Maven image analysis; Israel’s AI targeter “Lavender”; US “Replicator” counter-drone initiative. |
| Cybersecurity | AI-assisted phishing; voice/video deepfakes for social engineering; automated hacking campaigns (vulnerability discovery, exploit writing). | ML anomaly detection; AI-driven intrusion detection (IDS/IPS); adaptive firewalls; automated incident response. | Chinese state hackers using an AI agent (Claude) for cyberespionage; deepfake CEO scam. Defense: JPMorgan’s AI-enabled security platform; MIT’s “adversarial AI” network tester. |
| Disinformation & Propaganda | Social media bot armies; AI-generated fake news and influencers; deepfake videos/audio; micro-targeted political ads. | AI-driven content moderation and fact-checking; deepfake-detection tools; regulated labeling (e.g. mandated “deepfake” labels). | AI-created Zelenska deepfake video (Ukraine); warnings of future “AI bot swarms” in elections; EU requirement that deepfakes be labeled; Meta’s removal of AI-manipulated media. |
| Critical Infrastructure | AI-enabled cyberattacks on power/telecom networks; automated trading/market manipulation; smart malware on industrial systems. | Predictive maintenance (avoiding failures); AI grid monitoring/anomaly detection; network self-healing; AI-based cyber defense. | AI improving power-grid resilience (NREL research on “AI for grid”); Mastercard’s AI fraud-prevention platform; IEA: AI enabling real-time threat detection in energy grids. |
Each of these domains sees both sides of AI: the same algorithms that let drones identify targets can also steer defensive interceptors, and the language models that generate fake news can help flag it. As AI spreads across the military, cyber, information and infrastructure realms, its dual-use nature makes regulation and ethical use ever more challenging. The ongoing Ukraine conflict already demonstrates both the offensive and defensive faces of the technology, and emerging trends (drone swarms, AI-driven influence campaigns) are being tested today. Policymakers and technologists are racing to shape the “rules of the game” – from UN talks on autonomous weapons to industry tools for watermarking deepfakes – but for now AI stands as both an accelerant of modern offense and a critical tool for defense.
Sources: Authoritative analyses and reports were used throughout (see citations). If information was unavailable in those sources, it is indicated.