Top AI & Tech News (Through March 2nd)
Perplexity Computer 🖥️ | Teens x AI 📱 | Flying Saucer 🛸

Hello everyone!
The war between the United States and Iran continues to unfold, with shifting alliances, cyber operations, and escalating strategic uncertainty shaping the global landscape. Modern conflict is now defined by intelligence systems, predictive analytics, autonomous platforms, and decision-support algorithms operating behind the scenes.
AI is no longer adjacent to warfare. It is increasingly embedded within it.
❓ Question for the Week
If AI can now influence battlefield intelligence, targeting systems, and national security decisions, who ultimately holds responsibility when machines shape matters of life and death?
This week’s most important AI story is about control.
🔍 This Week’s Big Idea: When AI Governance Meets Geopolitics
The public dispute between the Trump administration and Anthropic — followed by OpenAI stepping into a Pentagon deal — is a defining moment in the political economy of AI. At the center of the conflict was a philosophical line: Should AI companies retain restrictions preventing use in mass domestic surveillance and fully autonomous weapons? Or must frontier models be available for “all lawful purposes” when national security is invoked?
AI is becoming military infrastructure. It can accelerate intelligence analysis, simulate outcomes, prioritize targets, and compress decision cycles. But it also raises profound questions:
If AI systems recommend lethal action, where does accountability reside?
If safeguards slow deployment, are they protecting humanity or weakening national defense?
If companies define ethical red lines, do they serve democracy — or overstep it?
For Chief AI Officers, this moment signals a structural shift. AI governance is no longer just a compliance function. It is a geopolitical variable.
The apocaloptimist CAIO understands that AI can strengthen national resilience, cybersecurity, logistics, and strategic defense while simultaneously recognizing the existential risks of autonomy without oversight.
⭐ This Week’s Recommendation
Run a “Strategic Escalation Audit.”
For any AI system operating in security-sensitive, high-liability, or public-facing domains, ask:
If this system’s output directly influences critical decisions, who signs off?
If it fails, is accountability traceable?
If geopolitical pressure increases, do our safeguards hold?
If the answers are unclear, the system is not production-ready — no matter how powerful it is.
⚠️ Closing Question to Sit With
As AI becomes embedded in warfare, national security, and geopolitical power, will you treat governance as optional friction or as the architecture that determines whether AI becomes a force for stability or escalation?
Here are the stories for the week:
Trump Orders Ban on Anthropic as OpenAI Secures Pentagon Deal
Anthropic: DeepSeek, Moonshot, and MiniMax Ran Large-Scale “Distillation” Campaigns Against Claude
China Showcases Ton-Class ‘Flying Saucer’ eVTOL for Urban Logistics and Rescue
Perplexity Launches “Perplexity Computer,” a Multi-Model AI Workflow System
China’s Humanoid Robot Industry Is Winning the Early Market
Pew Research: How Teens Use and View AI

Trump Orders Ban on Anthropic as OpenAI Secures Pentagon Deal
President Trump has directed all federal agencies to cease using Anthropic’s AI products, and the Pentagon has designated the company a national security supply-chain risk, escalating a dispute over AI safeguards. The conflict centers on Anthropic’s refusal to remove restrictions preventing its model, Claude, from being used for mass domestic surveillance or fully autonomous weapons under a Defense Department contract worth up to $200 million.
Within hours of the announcement, OpenAI revealed it had secured a deal to provide AI systems for classified Pentagon networks, stating that its agreement includes similar red lines around domestic surveillance and requires human oversight of autonomous weapons. Anthropic said it plans to challenge the designation in court, calling the move legally unsound and warning it sets a dangerous precedent for AI governance and federal contracting. Source: NPR
💡 Why it matters (for the P&L):
This marks a turning point in the relationship between AI vendors and government clients. Defense contracts can unlock massive revenue, but policy disputes can quickly escalate into blacklist risk, regulatory exposure, and reputational volatility—especially for companies approaching IPO. For enterprises, vendor instability in politically sensitive sectors can create operational and continuity risk.
💡 What to do this week:
Conduct a vendor dependency review. Identify where your AI stack relies on providers exposed to geopolitical or regulatory conflict, and assess contingency plans. For high-stakes deployments, ensure contractual clarity around usage rights, policy limits, and termination scenarios before they become flashpoints.

Anthropic: DeepSeek, Moonshot, and MiniMax Ran Large-Scale “Distillation” Campaigns Against Claude
Anthropic says it detected industrial-scale efforts by three AI labs—DeepSeek, Moonshot, and MiniMax—to illicitly extract (“distill”) Claude’s capabilities using fraud at scale. According to Anthropic, the campaigns generated 16+ million Claude exchanges through roughly 24,000 fraudulent accounts, targeting high-value skills like agentic reasoning, tool use, and coding, often via proxy networks designed to evade detection.
Anthropic argues the stakes go beyond competitive copying: illicitly distilled models may strip safety safeguards and could be repurposed for surveillance, disinformation, or cyber operations, undermining export-control goals and accelerating capability spread without equivalent controls. The company says it is strengthening detection, access controls, intelligence sharing, and model/product countermeasures—while calling for coordinated action across labs, cloud providers, and policymakers. Source: Anthropic Announcements
💡 Why it matters (for the P&L):
Model theft is becoming a real cost center: higher security spend, higher fraud/abuse load, and faster commoditization of differentiated capabilities. For businesses building on frontier models, rising distillation pressure can also translate into tighter access controls, more friction in onboarding, and increased vendor risk—especially in regulated or cross-border environments.
💡 What to do this week:
Run a “Model Security & Leakage Audit.” Review how your teams expose model outputs (APIs, logs, support tickets, shared prompts), tighten authentication and rate-limits, and implement abuse monitoring for suspiciously repetitive, high-volume querying patterns. If you operate internationally, reassess reseller/proxy exposure and ensure contracts clearly address misuse, fraud response, and incident reporting.
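The abuse-monitoring step above can be sketched in code. Anthropic has not published how it detected the distillation campaigns, so this is only an illustrative sliding-window monitor for the pattern the article describes: suspiciously repetitive, high-volume querying from a single account. The `AbuseMonitor` class, all thresholds, and the hash-based repetition check are assumptions for the sketch, not any vendor’s actual implementation.

```python
from collections import defaultdict, deque
import time


class AbuseMonitor:
    """Flags accounts whose query volume or prompt repetition exceeds
    thresholds within a sliding time window. Thresholds are illustrative."""

    def __init__(self, window_seconds=3600, max_queries=500, max_repeat_ratio=0.8):
        self.window = window_seconds
        self.max_queries = max_queries
        self.max_repeat_ratio = max_repeat_ratio
        # account_id -> deque of (timestamp, prompt_hash), oldest first
        self.events = defaultdict(deque)

    def record(self, account_id, prompt, now=None):
        now = time.time() if now is None else now
        q = self.events[account_id]
        q.append((now, hash(prompt)))
        # Evict events that have aged out of the window
        while q and q[0][0] < now - self.window:
            q.popleft()

    def is_suspicious(self, account_id):
        q = self.events[account_id]
        if len(q) > self.max_queries:
            return True  # raw volume exceeds the ceiling
        if len(q) >= 20:
            unique = len({h for _, h in q})
            # High repeat ratio = many calls, few distinct prompts,
            # a common signature of scripted extraction
            if 1 - unique / len(q) >= self.max_repeat_ratio:
                return True
        return False
```

In production you would key on authenticated identity rather than raw account IDs, persist the counters, and feed flags into your fraud-response workflow rather than blocking outright.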

China Showcases Ton-Class ‘Flying Saucer’ eVTOL for Urban Logistics and Rescue
China has unveiled what it describes as the world’s first ducted, ton-class “flying saucer” eVTOL aircraft, featuring enclosed rotors and a maximum payload of 450 kg (992 lbs). Demonstrated in Wuhan, the aircraft is designed for low-altitude urban operations, including logistics and aerial rescue, and can reportedly take off within three seconds while operating close to buildings.
The showcase included multiple eVTOL models, including a hybrid tilt-rotor with a claimed 1,000 km (620-mile) range and an emergency-response “micro-intensive care unit” variant. The announcement aligns with China’s broader push into the “low-altitude economy,” backed by regulatory reforms and infrastructure expansion, as manufacturers race toward certification and commercialization in what industry leaders call a pivotal year for deployment. Source: Interesting Engineering
💡 Why it matters (for the P&L):
Urban air mobility is shifting from prototype to policy-backed commercialization, especially in China. Companies in logistics, emergency services, infrastructure, and advanced manufacturing should expect new competitive dynamics, regulatory frameworks, and supply chain opportunities as low-altitude aviation scales.
💡 What to do this week:
If you operate in logistics, smart cities, or infrastructure, assess how low-altitude transport could affect cost structures and last-mile delivery models over the next 3–5 years. Monitor certification timelines and regional pilot programs that could create early partnership or procurement opportunities.

Perplexity Launches “Perplexity Computer,” a Multi-Model AI Workflow System
Perplexity has introduced Perplexity Computer, a new AI system designed to orchestrate full workflows rather than simply answer prompts. Users describe an outcome, and the system decomposes it into tasks and subtasks, deploying specialized sub-agents to conduct research, generate documents, process data, call APIs, and coordinate outputs asynchronously. Each task runs in an isolated environment with access to browsers, filesystems, and connected services—effectively positioning the platform as a general-purpose digital worker.
Unlike single-model systems, Perplexity Computer relies on multi-model orchestration, assigning different frontier models to specific strengths such as reasoning, research, image generation, video creation, and long-context recall. Initially available to Perplexity Max subscribers, the launch signals a broader industry shift from chat interfaces toward persistent, autonomous AI systems capable of managing workflows over extended time horizons. Source: Perplexity
💡 Why it matters (for the P&L):
Competitive advantage is shifting from “who has the best model” to “who controls the best orchestration layer.” Multi-model workflow systems can unlock significant productivity gains, but they also introduce new cost exposure (token spend, API usage), governance complexity, and dependency risks. Enterprises that manage orchestration strategically will scale efficiency; those that don’t may face runaway automation costs and fragmented oversight.
💡 What to do this week:
Select one high-friction, multi-step workflow—such as market intelligence reporting, vendor risk review, or financial analysis—and map its task dependencies. Evaluate where orchestration-based AI could automate execution while maintaining human checkpoints. Define cost ceilings, model selection rules, logging standards, and escalation triggers before scaling deployment.
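The cost ceilings, model selection rules, logging standards, and escalation triggers above can be made concrete in a thin orchestration wrapper. Perplexity has not published its internals, so this is only a sketch of the pattern: the `Orchestrator` class, model names, and per-token prices are all invented for illustration.

```python
class BudgetExceeded(Exception):
    """Escalation trigger: raised before spend crosses the ceiling."""


class Orchestrator:
    """Illustrative orchestration layer: routes each task type to a model,
    tracks spend against a hard budget, and keeps an audit log.
    Model names and prices below are hypothetical."""

    MODEL_RULES = {"research": "model-a", "reasoning": "model-b", "drafting": "model-c"}
    COST_PER_1K_TOKENS = {"model-a": 0.01, "model-b": 0.03, "model-c": 0.002}

    def __init__(self, budget_usd):
        self.budget = budget_usd
        self.spent = 0.0
        self.log = []  # logging standard: (task_type, model, cost) per call

    def run_task(self, task_type, tokens, execute):
        model = self.MODEL_RULES[task_type]          # model selection rule
        cost = tokens / 1000 * self.COST_PER_1K_TOKENS[model]
        if self.spent + cost > self.budget:
            # Halt and escalate to a human instead of silently overspending
            raise BudgetExceeded(f"{task_type} on {model} would exceed ${self.budget:.2f}")
        self.spent += cost
        self.log.append((task_type, model, cost))
        return execute(model)
```

The human checkpoint lives in how `BudgetExceeded` is handled: the workflow pauses and a person decides whether to raise the ceiling, swap models, or stop.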

China’s Humanoid Robot Industry Is Winning the Early Market
China’s humanoid robotics sector is pulling ahead in early commercialization, driven by a powerful hardware supply chain, manufacturing scale, and state-backed industrial strategy. Companies like Unitree and Agibot are shipping significantly more units than U.S. rivals, with Chinese firms dominating 2025 shipment rankings. Industry analysts cite China’s EV-driven component ecosystem—batteries, sensors, motors—as a major advantage enabling rapid iteration and lower production costs.
While global humanoid shipments remain small (just over 13,000 units last year), projections suggest steep growth toward millions of units by 2035. Chinese firms are now shifting from demo-driven showcases to operational deployments in manufacturing, logistics, and retail. However, software autonomy remains immature, with most systems reliant on Nvidia chips and simulation-generated training data. Safety, regulation, and real-world reliability remain key hurdles. Source: TechCrunch
💡 Why it matters (for the P&L):
Humanoid robotics is moving from spectacle to industrial infrastructure. China’s speed-to-scale advantage could reshape global manufacturing, warehouse operations, and service automation economics. Enterprises that ignore embodied AI risk cost disadvantages as robotics-driven productivity accelerates—especially in labor-constrained sectors.
💡 What to do this week:
Identify one labor-intensive, repetitive workflow in operations or logistics and assess whether humanoid or advanced robotics pilots could reduce long-term labor dependency. Begin tracking robotics suppliers, chip dependencies, and regulatory trends—particularly if your supply chain touches Asia-Pacific manufacturing hubs.

Pew Research: How Teens Use and View AI
A new Pew Research Center report finds that 64% of U.S. teens (13–17) have used AI chatbots, and about three-in-ten use them daily. The top uses are finding information (57%) and help with schoolwork (54%), followed by fun/entertainment (47%). While most teens don’t use chatbots for personal support, 16% say they’ve used them for casual conversation and 12% say they’ve gotten emotional support or advice.
In school, one in ten teens say they do all or most of their schoolwork with chatbot help, and 59% believe students at their school use AI to cheat at least sometimes (including roughly a third who say it happens very often). Teens are also more optimistic about AI’s impact on their own lives (36% positive vs. 15% negative) than on society (31% positive vs. 26% negative). Awareness is high—over nine in ten have heard about AI chatbots—yet confidence varies, with about a quarter saying they’re very confident using them. Source: Pew Research Center
💡 Why it matters (for the P&L):
AI literacy is becoming a real productivity divider—and it’s starting early. Teen adoption signals the next workforce will arrive already using AI, but with uneven skill, trust, and ethics habits (including normalization of “AI-assisted cheating”). Organizations that set standards for AI use, verification, and disclosure will reduce downstream risk, rework, and reputational exposure—especially in education-adjacent and early-career talent pipelines.
💡 What to do this week:
Run a “Youth/Entry-Talent AI Use Baseline” in your org: update onboarding and internship policies to define what counts as acceptable AI assistance, what requires citation/disclosure, and what must be verified. Then build a short training module on prompting + verification + responsible use so new hires don’t bring “school AI habits” into high-stakes workflows.

Sponsored by World AI X | Eric Salveggio, Ghassan P. Kebbe, Jerry Pancini, Muttaz Alshahrani, Ravikiran Karanam
About The AI Citizen Hub - by World AI X
This isn’t just another AI newsletter; it’s an evolving journey into the future. When you subscribe, you're not simply receiving the best weekly dose of AI and tech news, trends, and breakthroughs—you're stepping into a living, breathing entity that grows with every edition. Each week, The AI Citizen evolves, pushing the boundaries of what a newsletter can be, with the ultimate goal of becoming an AI Citizen itself in our visionary World AI Nation.
By subscribing, you’re not just staying informed—you’re joining a movement. Leaders from all sectors are coming together to secure their place in the future. This is your chance to be part of that future, where the next era of leadership and innovation is being shaped.
Join us, and don’t just watch the future unfold—help create it.
For advertising inquiries, feedback, or suggestions, please reach out to us at [email protected].