Top AI & Tech News (Through November 9th)

Project Suncatcher ☀️ | $38B AWS–OpenAI Deal💸 | AI Twins 👤

Hello AI Citizens,

Here’s our focus this week: digital AI “twins” of people after they’ve passed away. These tools can comfort families, preserve stories, and teach future generations, but they can also cross lines if used without care. The biggest risks are a lack of clear consent, confusion about what’s real, and misuse of private data.

What CAIOs should do:

  • Consent first. Only build an AI twin with written permission (or estate approval) that names what data can be used and for how long.

  • Label clearly. Always show that the experience is AI-generated; avoid lifelike tricks that could mislead.

  • Limit the data. Use the minimum needed; protect it with strong security and audit trails.

  • Set boundaries. No medical, legal, or financial advice; add “memorial modes” with gentle tone and no ads.

  • Honor families. Offer easy controls: pause, delete, share access, and yearly reviews with the estate.

  • Independent checks. Run ethics reviews and stress-tests for harm, deepfake abuse, and cultural sensitivity.

Handled this way, AI twins can preserve memories with dignity—not replace people, but help us remember them well.

Here are the key headlines shaping the AI & tech landscape:

  • AWS and OpenAI Sign $38B Multi-Year Compute Deal

  • UNESCO Sets Global Neurotech Rules With AI Front and Center

  • Google Moonshot: “Project Suncatcher” Puts AI Compute in Space

  • Anthropic Projects $70B Revenue by 2028: B2B Push, Leaner Models, Big Margins

  • Suzanne Somers ‘AI Twin’ Debuts: Consent-Led Digital Legacy

  • OpenAI Pushes to Extend Chips Act Tax Credit to AI Data Centers

Let’s recap!

AWS and OpenAI Sign $38B Multi-Year Compute Deal

OpenAI and AWS struck a multi-year partnership to run and scale OpenAI’s core AI workloads on AWS, starting immediately. AWS will supply Amazon EC2 UltraServers with hundreds of thousands of NVIDIA GPUs (GB200/GB300) and the ability to burst to tens of millions of CPUs, with capacity targeted by end-2026 and room to expand into 2027 and beyond. The clusters, designed for both ChatGPT inference and next-gen model training, tap AWS’s large-scale, secure infrastructure (AWS has previously run clusters of over 500K chips) and follow earlier moves to offer OpenAI open-weight models on Amazon Bedrock. Source: Amazon News (Nov 3, 2025).

💡 What this means for you: expect tighter GPU markets short-term and more capacity later. If you’re scaling AI, lock in multi-year reservations with price-drop protections and priority-access SLAs, keep a second provider warm (multi-cloud/failover), and benchmark latency and egress costs before shifting workloads. Keep data safe with clear residency, encryption, and no-train clauses; design for portability (containers/ONNX) so you’re not stuck if roadmaps slip. FinOps it: tag all AI spend, set per-team budgets, and auto-scale down idle agentic workloads.
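The FinOps point above (tag all AI spend, set per-team budgets) can be sketched in a few lines. This is a minimal illustration with made-up teams, costs, and budgets, not a real billing-API integration:

```python
from collections import defaultdict

def flag_overspend(usage_records, team_budgets):
    """Aggregate tagged AI spend per team and flag teams over budget.

    usage_records: iterable of dicts with 'team' and 'cost_usd' keys
                   (stand-ins for tagged line items from a billing export)
    team_budgets:  dict mapping team name -> monthly budget in USD
    """
    spend = defaultdict(float)
    for rec in usage_records:
        spend[rec["team"]] += rec["cost_usd"]
    return {
        team: {"spend": spend[team], "budget": budget,
               "over_budget": spend[team] > budget}
        for team, budget in team_budgets.items()
    }

# Hypothetical tagged spend for two teams
records = [
    {"team": "search", "cost_usd": 1200.0},
    {"team": "search", "cost_usd": 900.0},
    {"team": "agents", "cost_usd": 300.0},
]
report = flag_overspend(records, {"search": 2000.0, "agents": 500.0})
print(report["search"])  # the search team has blown past its $2,000 budget
```

The useful part is the discipline, not the code: once every AI line item carries a team tag, over-budget checks and idle-workload scale-downs become simple queries instead of month-end surprises.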

UNESCO Sets Global Neurotech Rules With AI Front and Center

UNESCO adopted worldwide ethics standards for neurotechnology, explicitly shaped by recent AI advances that can decode brain and nervous-system signals and supercharge consumer neurotech (earbuds, AR glasses, wrist interfaces). The framework introduces a new data class—“neural data”—and pushes safeguards for “mental privacy” and “freedom of thought,” warning about AI-enabled risks from subliminal targeting to always-on biosignal capture. It lands amid a broader policy wave (WEF proposals, U.S. “Mind Act,” and state laws) and booming neurotech investment from Big Tech and startups. Source: The Guardian (Nov 6, 2025).

💡 For CAIOs (keep it simple & safe): treat neural data as sensitive-plus. Require explicit, revocable consent; default to on-device processing; minimize retention; ban advertising/profiling on neural signals; and add red lines (no dream/subliminal use, no covert monitoring). Run DPIAs, log model inferences on biosignals, and align with “high-risk” AI controls (human oversight, robustness tests, incident playbooks). Bake these guardrails into vendor contracts (no-train clauses, data residency, audit rights) before piloting any AI-neurotech.

Google Moonshot: “Project Suncatcher” Puts AI Compute in Space

Google Research unveiled Project Suncatcher—a concept to run ML workloads on solar-powered satellite constellations equipped with TPUs and linked by high-bandwidth optical lasers. By flying in near-constant sunlight and tight formations (hundreds of meters apart), the system aims to deliver data-center-like throughput while easing land/power constraints. Early lab demos hit 1.6 Tbps per link; TPU radiation tests suggest viability in low-Earth orbit; and a 2027 learning mission with Planet will trial prototypes. If launch costs keep falling, Google argues space compute could be cost-comparable (per kW/year) to ground data centers in the 2030s. Source: Google Research Blog (Nov 4, 2025).

💡 Treat space compute as “edge-of-Earth cloud.” CAIOs should (1) map workloads that love abundant power + parallelism (simulation, model training bursts), (2) plan for data gravity: compress, pre-train, or filter at source before downlink, (3) bake in reliability assumptions (latency swings, link drops) and always keep a ground fallback, (4) update vendor clauses now—export controls, data residency, incident access—even for experimental space pilots, and (5) start with small proofs (sat imagery ML, weather/energy sims) to measure $/result and carbon per training run versus terrestrial options.

Anthropic Projects $70B Revenue by 2028: B2B Push, Leaner Models, Big Margins

Anthropic is reportedly targeting up to $70B in revenue and $17B in cash flow by 2028, driven by rapid enterprise adoption. The company is on track for $9B ARR in 2025 and $20–26B in 2026, expanding partnerships (Microsoft 365/Copilot, Salesforce) and rolling Claude out to large workforces (Deloitte, Cognizant). Product moves include smaller, cheaper models (Sonnet 4.5, Haiku 4.5), Claude for Financial Services, and Enterprise Search to connect internal apps. Gross margins are projected to rise from negative 94% (2024) → 50% (2025) → 77% (2028), with potential future fundraising at a $300–$400B valuation (last at $170B). Source: TechCrunch (Nov 4, 2025).

💡 Stress-test vendor strategy. 1) Run TCO and latency benchmarks across Anthropic/OpenAI/others for your top 5 workloads (assistants, RAG, code, search, analytics). 2) Negotiate usage tiers, egress discounts, and SLOs now—growth curves can shift pricing power fast. 3) Prefer portable patterns (OpenAPI/Inference APIs, vector standards) and keep multi-model fallbacks ready. 4) Track model refresh cadence (quality vs. cost of Sonnet/Haiku) and set quality gates tied to business KPIs (CSAT, handle time, risk flags). 5) Lock data governance: PII handling, no-train clauses, logging retention, and sector addenda (e.g., financial services).
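Step 1 above, the TCO-and-latency benchmark, can be reduced to a small ranking harness. The model names, per-million-token prices, and latency SLO below are all invented placeholders; plug in your own measured numbers and contracted rates:

```python
def rank_models(workload_tokens, candidates, latency_slo_s=2.0):
    """Rank candidate models by estimated monthly cost, gating on latency.

    workload_tokens: (input_tokens, output_tokens) per month
    candidates: dicts with a hypothetical 'name', per-million-token prices
                ('in_price', 'out_price'), and a measured p95 latency
    """
    in_tok, out_tok = workload_tokens
    scored = []
    for m in candidates:
        monthly_cost = (in_tok / 1e6) * m["in_price"] + (out_tok / 1e6) * m["out_price"]
        scored.append({**m, "monthly_cost": round(monthly_cost, 2)})
    # Gate on the latency SLO first, then sort survivors by cost
    viable = [m for m in scored if m["p95_latency_s"] <= latency_slo_s]
    return sorted(viable, key=lambda m: m["monthly_cost"])

# Hypothetical candidates for one workload (50M in / 10M out tokens per month)
candidates = [
    {"name": "model-a", "in_price": 3.0, "out_price": 15.0, "p95_latency_s": 1.4},
    {"name": "model-b", "in_price": 1.0, "out_price": 5.0, "p95_latency_s": 2.6},
    {"name": "model-c", "in_price": 0.8, "out_price": 4.0, "p95_latency_s": 1.1},
]
best = rank_models((50_000_000, 10_000_000), candidates)
print(best[0]["name"])  # cheapest candidate that meets the latency gate
```

Note the design choice: latency is a hard gate, not a weighted score, because a cheap model that misses your SLO is not actually cheap once you count abandoned sessions. Run this per workload; the ranking rarely comes out the same for assistants, RAG, and code.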

Suzanne Somers ‘AI Twin’ Debuts: Consent-Led Digital Legacy

Two years after Suzanne Somers’ passing, her husband Alan Hamel unveiled an AI “Suzanne Twin” built from her 27 books and hundreds of interviews, designed to mirror her look, voice, and wellness advice. Hamel says Somers envisioned this decades ago (inspired by futurist Ray Kurzweil) and explicitly wanted a digital version to keep helping fans; a medical team verifies referenced health statements, and public access is planned via SuzanneSomers.com. The project contrasts with past, controversial “resurrections” because it’s consent-driven and rooted in Somers’ own words and wishes. Source: eWeek (Nov 4, 2025).

💡 Treat “AI afterlife” and synthetic personas as a governance topic, not a novelty. 1) Create a likeness & voice policy: require written consent, clear scope (where, how long), revocation rights, and estate approval. 2) Build provenance & disclosures: label synthetic content, log sources (books, interviews), and keep an audit trail. 3) Add risk checks: medical/legal review for advice, brand/reputation review, and cultural sensitivity review. 4) Negotiate platform guardrails: no training on generated likeness without permission; enforce takedowns for misuse. 5) Offer audience controls: let users opt into or out of AI persona interactions, and provide a feedback/report button.

OpenAI Pushes to Extend Chips Act Tax Credit to AI Data Centers

OpenAI sent a letter to the White House urging that the Chips Act’s 35% Advanced Manufacturing Investment Credit be expanded beyond fabs to include AI servers, data centers, and grid components. The company also asked for faster permits/environmental reviews and a strategic reserve of raw materials (copper, aluminum, rare earths) to speed U.S. AI build-outs. After a brief flap over “backstopping” loans, leadership clarified they’re not seeking government guarantees; Sam Altman added OpenAI expects >$20B ARR by end-2025 and has $1.4T in capital commitments over eight years. Source: TechCrunch (Nov 8, 2025).

💡 If credits widen to data centers, the total cost of AI compute drops, so pressure-test your build-vs-buy and multi-cloud plans now. Prioritize sites with power headroom and clear pathways for interconnection permits; line up long-term power (PPAs), and track material bottlenecks that can delay racks and transformers. Work with Finance on incentive stacking (tax credits, accelerated depreciation, state/local grants) and create a capex stage-gate for AI infra. Keep portability: design for model right-sizing and cloud/on-prem failover so policy shifts or vendor delays don’t stall your roadmap.

Congratulations to our September Cohort of the Chief AI Officer Program!

Sponsored by World AI X

Manju Mude (Cyber Trust & Risk Executive Board Member, The SAFE Alliance, USA)

Ommer Shamreez (Customer Success Manager, EE, United Kingdom)

Lubna Elmasri (Marketing and Communication Director, Riyadh Exhibitions Company, Riyadh, Saudi Arabia)

Bahaa Abou Ghoush (Solutions Architect - Business Development, Crystal Networks, UAE)

Nicole Oladuji (Chief AI Officer, Praktikertjänst, Sweden)

Thomas Grow (Growth Consultant, Principal Cofounder, ThinkRack)

Samer Yamak (Senior Director - Consulting Services, TAM Saudi Arabia)

Nadin Allahham (Chief Specialist - Strategic Planning, Government of Dubai Media Office)

Craig Sexton (CyberSecurity Architect, SouthState Bank, USA)

Ahmad El Chami (Chief Architect, Huawei Technologies, Saudi Arabia)

Shekhar Kachole (Chief Technology Officer, Independent Consultant, Netherlands)

Manasi Modi (Process & Policy Developer - Strategy & Excellence, Government of Dubai Media Office)

Shameer Sam (Executive Director, Hamad Medical Corporation, Qatar)

About The AI Citizen Hub - by World AI X

This isn’t just another AI newsletter; it’s an evolving journey into the future. When you subscribe, you're not simply receiving the best weekly dose of AI and tech news, trends, and breakthroughs—you're stepping into a living, breathing entity that grows with every edition. Each week, The AI Citizen evolves, pushing the boundaries of what a newsletter can be, with the ultimate goal of becoming an AI Citizen itself in our visionary World AI Nation.

By subscribing, you’re not just staying informed—you’re joining a movement. Leaders from all sectors are coming together to secure their place in the future. This is your chance to be part of that future, where the next era of leadership and innovation is being shaped.

Join us, and don’t just watch the future unfold—help create it.

For advertising inquiries, feedback, or suggestions, please reach out to us at [email protected].
