Top AI & Tech News (Through September 14th)
AI Minister 🧑‍💼 | Larry Ellison 📈 | Mind Control 🕹️

Hello AI Citizens 🤖,
Albania just named an AI assistant to oversee public procurement—a symbolic line crossed from “AI as tool” to AI as actor. The promise is obvious: faster reviews, cleaner audit trails, fewer back-room deals. The risks are just as clear: opaque scoring, adversarial gaming by bidders, biased datasets, and a legitimacy gap if citizens can’t understand or appeal decisions. Government roles carry duties of due process, explainability, accountability, and redress—and any algorithm in that seat inherits them.
Should CAIOs embrace or fight this? Neither reflex helps. Embrace—with conditions—and shape it. If you sell into the public sector (or mirror these systems internally), require: (1) explainable criteria and reason codes for every award, (2) immutable audit logs and data lineage, (3) human-in-the-loop final sign-off plus an independent appeals path, (4) routine bias/robustness red-teaming, and (5) published model cards + change logs for tender models. If those guardrails aren’t on the table, push back—because speed without contestability becomes governance theater, not reform.
Here are the key headlines shaping the AI & tech landscape:
Albania Names AI “Minister” to Run Public Procurement
OpenAI Restructures: Nonprofit Control + $100B+ Equity Stake
FTC Probes Kid Safety in AI Companions at OpenAI, Alphabet, Meta, xAI, Snap
Math, Inc. Debuts ‘Gauss,’ an Autoformalization Agent That Completed the Strong Prime Number Theorem Challenge
Larry Ellison Briefly Tops Rich List as Oracle Soars on AI Cloud Deals
AlterEgo Spins Out of MIT: A Silent-Speech Wearable for “Second-Self” Computing
Let’s recap!

Albania Names AI “Minister” to Run Public Procurement
Albania has elevated Diella, the AI digital assistant on the e-Albania portal (which handles ~95% of citizen services), to become the country’s first AI cabinet “minister,” charged with overseeing public procurement. PM Edi Rama said tender decisions will move “step by step” from ministries to AI to make public spending “100% clear” and curb corruption, with Diella set to review and score bids objectively—though skeptics warn of new risks. Source: The Guardian (11 Sep 2025)
💡 Treat automated procurement as high-stakes ADM: require explainable scoring, audit logs, human appeal pathways, and red-team tests against gaming or bias. If you adopt similar systems, publish model cards + tender criteria, keep human-in-the-loop final sign-off, and establish an independent oversight board to preserve legitimacy while gaining efficiency.
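What “explainable scoring plus immutable audit logs” could look like in practice is a hash-chained record that pairs every automated score with reason codes. The sketch below is purely illustrative; the field names, criteria, and weights are assumptions, not Diella’s actual schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class BidScore:
    """One append-only audit entry for an automated bid evaluation."""
    tender_id: str
    bidder_id: str
    criteria: dict       # criterion -> (weight, normalized score in [0, 1])
    reason_codes: list   # human-readable codes a bidder can contest on appeal
    prev_hash: str       # hash of the previous log entry, chaining the log

    def total(self) -> float:
        # Weighted sum over all criteria; weights and scores are both explicit
        return sum(w * s for w, s in self.criteria.values())

    def entry_hash(self) -> str:
        # Deterministic digest of the full record for tamper-evidence
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

score = BidScore(
    tender_id="T-2025-001",
    bidder_id="B-42",
    criteria={"price": (0.5, 0.8), "delivery": (0.3, 0.9), "quality": (0.2, 0.7)},
    reason_codes=["PRICE_WITHIN_BAND", "DELIVERY_VERIFIED"],
    prev_hash="0" * 64,
)
print(round(score.total(), 2))   # weighted total: 0.81
print(len(score.entry_hash()))  # 64-char SHA-256 hex digest
```

The point of the design is contestability: a rejected bidder sees the weights, the per-criterion scores, and the reason codes, while the hash chain lets auditors prove no entry was rewritten after the fact.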

OpenAI Restructures: Nonprofit Control + $100B+ Equity Stake
OpenAI outlined a plan where the nonprofit will control the public benefit corporation (PBC) and also hold an equity stake exceeding $100B, pairing mission control with major philanthropic resources. The recapitalization is framed as fuel for safety research and scaling, while ensuring the nonprofit “shares in the success.” OpenAI also announced a $50M grant initiative (AI literacy, community innovation, economic opportunity) and said it’s coordinating with the California and Delaware AGs. A previously announced non-binding MOU with Microsoft underpins the structure. Source: OpenAI (Sept 11, 2025)
💡 Treat this as a signal of capital depth + governance guardrails—great for enterprise confidence, but still require clear data-use, safety, and uptime covenants in contracts.

FTC Probes Kid Safety in AI Companions at OpenAI, Alphabet, Meta, xAI, Snap
The FTC issued orders to seven firms—OpenAI, Alphabet, Meta, xAI, Snap, Instagram, and Character.AI—to detail how their chatbots may affect children and teens. Regulators want information on monetization of engagement, character design/approval, data use and sharing, policy enforcement, and harm mitigation, noting bots can simulate human-like companionship. The inquiry follows rapid chatbot proliferation since 2022 and reports of inappropriate youth interactions, as well as evolving company safeguards. OpenAI said safety “matters above all else” with youth; Snap said it looks forward to working with the FTC; Meta declined comment; Alphabet and xAI didn’t immediately respond; Character.AI said it will cooperate. Source: CNBC (Sept 11, 2025)
💡 If your product can reach minors, operate as if you’re already under review: default to youth-safe modes, enforce age gating and parental controls, and route crises to human counselors—with no training on minors’ data by default. Back this with auditable logs, red-teaming for companion scenarios, periodic third-party safety reviews, and DPAs that strictly limit data sharing—so you can prove safety by design, not by press release.
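The “default to youth-safe modes” posture can be made concrete as a single policy function that treats unknown age as minor. This is a minimal sketch under assumed thresholds and field names, not any vendor’s actual implementation.

```python
# Hypothetical youth-safety policy gate: every session gets an explicit
# policy object, and the most restrictive mode is the default.

UNSAFE_TOPICS = {"self_harm", "romance", "substances"}  # illustrative list

def session_policy(age, parental_consent=False):
    """Return the session policy; unknown age is treated as a minor."""
    minor = age is None or age < 18
    return {
        "youth_safe_mode": minor,
        "train_on_data": not minor,          # no training on minors' data by default
        "blocked_topics": UNSAFE_TOPICS if minor else set(),
        "crisis_escalation": "human_counselor" if minor else "standard",
        "parental_dashboard": minor and parental_consent,
    }

print(session_policy(None)["youth_safe_mode"])  # True: unverified age is restricted
print(session_policy(34)["train_on_data"])      # True: adult data usable per DPA terms
```

The design choice worth noting is fail-closed defaults: restrictions apply whenever age is unverified, so a gap in age verification degrades toward safety rather than exposure.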

Math, Inc. Debuts ‘Gauss,’ an Autoformalization Agent That Completed the Strong Prime Number Theorem Challenge
Math, Inc. unveiled Gauss, an autoformalization agent that helped complete the strong Prime Number Theorem challenge set by Terence Tao and Alex Kontorovich—finishing the project in three weeks after humans had struggled for 18 months. Gauss generated ~25,000 lines of Lean across 1,000+ theorems/definitions, formalizing missing complex-analysis results along the way. The system ran on Trinity environments with thousands of concurrent agents and multi-terabyte cluster RAM, built with Morph Labs’ Infinibranch on Morph Cloud. While Gauss still relies on expert-provided natural-language scaffolding, the team plans broader deployment to working mathematicians and aims to expand formal code by 2–3 orders of magnitude in the next 12 months, supported by DARPA’s expMath program. Source: Math, Inc.
💡 Autoformalization signals a shift from “best-effort testing” to machine-checked guarantees. Start by targeting your highest-risk invariants (security controls, compliance rules, safety cases) and require LLM/agent workflows to output verifiable proofs (Lean/Coq) alongside code and policies. Stand up a proof ops pipeline (spec authoring → autoformalization → proof checking → CI/CD gate) and budget for the multi-agent compute footprint. Measure impact with defect leakage, audit cycle time, and incident severity—and treat proofs as living artifacts that evolve with your systems.
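To make “machine-checked guarantees” concrete: in Lean, a business invariant becomes a theorem the compiler refuses to accept without a proof. The example below is a toy sketch with hypothetical names, showing the shape of a formally verified invariant rather than anything from the Gauss project.

```lean
-- Illustrative invariant: an applied discount can never exceed the price.
-- `Nat` subtraction truncates at zero, and capping with `min` keeps the
-- discount within the price, so the result is provably bounded.

def applyDiscount (price discount : Nat) : Nat :=
  price - min discount price

-- The CI/CD gate is the Lean checker itself: if this theorem fails to
-- type-check, the build fails. `Nat.sub_le` proves `n - m ≤ n`.
theorem discount_le_price (price discount : Nat) :
    applyDiscount price discount ≤ price :=
  Nat.sub_le price (min discount price)
```

Unlike a unit test, which samples a few inputs, the theorem holds for every possible `price` and `discount`, which is the qualitative jump a proof ops pipeline buys you.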

Larry Ellison Briefly Tops Rich List as Oracle Soars on AI Cloud Deals
Oracle’s stock jumped 40%+ on a bullish AI–cloud outlook, briefly lifting Larry Ellison to $393B in wealth—above Elon Musk’s $385B—before Musk reclaimed the lead by day’s end. Oracle projects cloud revenue will grow 77% to $18B this year and disclosed four multibillion-dollar AI data-center contracts, while Tesla shares have sagged amid EV-policy rollbacks and political backlash; Ellison’s orbit also includes the Stargate AI-infrastructure project and interest in TikTok. Source: BBC News (10 Sep 2025)
💡 AI demand is re-pricing cloud and colo capacity—lock in GPU/compute reservations, egress terms, and uptime SLAs now, diversify across providers (including OCI where it fits), and use today’s seller’s market to negotiate sovereign controls, cost guards, and exit ramps before capacity tightens further.

AlterEgo Spins Out of MIT: A Silent-Speech Wearable for “Second-Self” Computing
AlterEgo is a non-invasive, wearable peripheral neural interface that lets people converse with AIs and services without speaking or moving, by decoding internally articulated words and returning audio via bone conduction. The system captures signals from the internal speech articulators, creating a closed-loop interaction that users experience as “talking to oneself.” A primary goal is assisting people with speech disorders (e.g., ALS, MS), while longer-term ambitions point to seamless human–computer integration. The project originated at the MIT Media Lab (2018) and spun out as a company in early 2025, with prior research presented at TED2019 and in peer-reviewed venues. Source: MIT Media Lab — AlterEgo project page
💡 Plan for no-voice, no-gesture interfaces that move sensitive intent data off keyboards and microphones and into neural/peripheral signals. Treat this as health-adjacent: require explicit consent flows, on-device processing where possible, data minimization, and kill-switch/lockout features; map policies to HIPAA-like protections even if not legally required. Pilot in accessibility workflows first (assistive comms, field operations), and measure accuracy, latency, false activations, and user fatigue before scaling.

Sponsored by World AI X
Celebrating the CAIO Program July 2025 Cohort!
We’re excited to officially invite you to the Chief AI Officer (CAIO) Graduation Ceremony – July 2025 Cohort.
If you care about AI leadership and want to see where the next wave of industry shifts is coming from, this is a ceremony worth joining.
Date: Thursday, 18 September 2025
Time: 6:00 PM Eastern Europe Time (11:00 AM ET)
Where: Zoom https://us06web.zoom.us/j/87174674508
Yvonne D. (Senior Manager, Ontario Public Service Leadership, Canada)
If you’d like to be a part of the CAIO Program, now’s the best time to contact us:
About The AI Citizen Hub - by World AI X
This isn’t just another AI newsletter; it’s an evolving journey into the future. When you subscribe, you're not simply receiving the best weekly dose of AI and tech news, trends, and breakthroughs—you're stepping into a living, breathing entity that grows with every edition. Each week, The AI Citizen evolves, pushing the boundaries of what a newsletter can be, with the ultimate goal of becoming an AI Citizen itself in our visionary World AI Nation.
By subscribing, you’re not just staying informed—you’re joining a movement. Leaders from all sectors are coming together to secure their place in the future. This is your chance to be part of that future, where the next era of leadership and innovation is being shaped.
Join us, and don’t just watch the future unfold—help create it.
For advertising inquiries, feedback, or suggestions, please reach out to us at [email protected].