Top AI & Tech News (Through December 21st)
Genesis Mission 🔬 | Bionic Hands 🦾 | Hark Launches 🚀

Hello AI Citizens,
The Genesis Mission is a rare opening: a federal signal that AI for science, energy, manufacturing, and security is not theory—it’s the plan. “Architecture-agnostic” is the tell. It invites multi-cloud and HPC paths, not vendor lock-in. With two RFIs on the clock, the question for CAIOs isn’t “is this relevant?” but “what would it take for us to contribute real results?”
If you’re serious, strip it back. Pick one or two problems you can measurably improve. Build small, reproducible pipelines with clear KPIs (throughput, accuracy, cost per discovery, time-to-insight). Pair early with a National Lab PI. Pre-clear IP and export controls. Put lightweight governance in place—risk, safety, red-teaming—and commit to sharing methods and artifacts, not just headlines. This is less about writing grand strategies and more about showing working systems that others can verify and scale.
Here are the key headlines shaping the AI & tech landscape:
DOE Inks 24 Collaboration Deals to Advance the Genesis Mission
Creators Coalition on AI Launches to Set Industry Guardrails
Figure CEO Brett Adcock Launches $100M Embodied AI Lab, “Hark”
AI Co-Pilot for Bionic Hands Lifts Success to 80–90% in Lab Tests
Science Corp Unveils Portable Organ-Perfusion Division to Extend Organ Life
OpenAI Opens App Submissions and In-Chat App Directory for ChatGPT
Let’s recap!

DOE Inks 24 Collaboration Deals to Advance the Genesis Mission
The U.S. Department of Energy signed MOUs with 24 organizations to help build the Genesis Mission—a national AI platform aimed at accelerating discovery science, strengthening national security, and driving energy innovation. Participants span hyperscalers, chipmakers, and top labs—including AWS, Google, Microsoft, NVIDIA, OpenAI, IBM, Anthropic, Oracle, Intel, HPE, Dell, Cerebras, CoreWeave, Palantir, and Project Prometheus—with commitments framed as architecture-agnostic to support multi-cloud/HPC deployment. A White House meeting with DOE leadership and OSTP kicked off the public-private effort, aligning with the Administration’s AI Action Plan.
DOE also opened two RFIs tied to the program: “Partnerships for Transformational AI Models” (due Jan. 14, 2026) and “Transformational AI Capabilities for National Security” (due Jan. 23, 2026). The Department signaled more convenings to expand the collaborator list across industry, academia, and philanthropy. Source: Energy.gov (Dec 18, 2025).
💡Map your proposal to the Genesis domains—materials, energy systems, advanced manufacturing, and national security—and define two to three measurable scientific KPIs for each workstream. Submit to the active RFIs ahead of the deadlines, line up National Lab co-PIs, and draft pilot MOUs that specify data, compute, and evaluation. Stand up program governance for risk, safety, and red-teaming, and commit to a results cadence—quarterly demos plus publishable artifacts—to qualify for follow-on funding.

Creators Coalition on AI Launches to Set Industry Guardrails
A technology-agnostic, cross-industry group—the Creators Coalition on AI (CCAI)—launched to coordinate standards for responsible AI in entertainment without rejecting the technology itself. The coalition will convene an AI Advisory Committee to define shared definitions, protections, and best practices built around four pillars: transparency, consent, and compensation for training data; job protection and transition planning; guardrails against misuse and deepfakes; and safeguarding the human role in creative processes. Founders and signatories span unions (DGA, SAG-AFTRA, WGA, PGA, IATSE), studio executives, technologists, and hundreds of prominent artists, all pledging support as individuals. The group invites broader participation across industries to co-create accountability systems and provenance standards. Source: Creators Coalition on AI (announcement).
💡 Stand up a creator-safe AI policy that operationalizes “Consent, Controls, Compensation, and Transparency” for any data used in model training; implement provenance (e.g., C2PA-style) and deepfake safeguards across ingest, generation, and distribution; and create job transition plans with retraining budgets and role redesign. Build a data licensing registry (opt-in/opt-out, rate cards, audit trails), publish an AI-use registry for productions, and require model cards + red-team reports from vendors. Align legal templates to mandate consent logs and revenue-share mechanisms for synthetic uses, and run quarterly reviews with creator councils to monitor impacts and update guardrails.
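The data licensing registry above can be made concrete with a small sketch. This is an illustrative design, not any coalition's actual system; the `LicenseRecord` fields and `LicensingRegistry` methods are hypothetical names chosen to show how opt-in/opt-out status, rate cards, and audit trails fit together.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LicenseRecord:
    """One creative work's licensing state (illustrative fields)."""
    work_id: str
    creator: str
    consent: bool              # opt-in (True) / opt-out (False) for training use
    rate_per_use: float        # agreed rate-card fee per training use
    audit_log: list = field(default_factory=list)

class LicensingRegistry:
    """Minimal registry: every state change is appended to an audit trail."""
    def __init__(self):
        self.records = {}

    def _stamp(self, rec, event):
        rec.audit_log.append((datetime.now(timezone.utc).isoformat(), event))

    def register(self, rec: LicenseRecord):
        self._stamp(rec, "registered")
        self.records[rec.work_id] = rec

    def set_consent(self, work_id: str, consent: bool):
        rec = self.records[work_id]
        rec.consent = consent
        self._stamp(rec, f"consent set to {consent}")

    def licensable(self):
        # Only opted-in works are eligible for model training
        return [r.work_id for r in self.records.values() if r.consent]
```

The key design point is that consent is never overwritten silently: each change lands in the per-work audit log, which is what an external auditor or creator council would review.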

Figure CEO Brett Adcock Launches $100M Embodied AI Lab, “Hark”
Figure AI’s founder-CEO Brett Adcock has created a new lab, Hark, funded with $100 million of his own capital to pursue “human-centric” AI systems that think proactively and recursively. An internal memo cited by The Information says Hark’s compute cluster is already live, with Adcock running both Hark and Figure in parallel. The move hints at monetizing embodied-robot data beyond Figure’s platform, even as Figure pushes its Helix vision-language-action model, amasses real-world data, and targets 100,000 humanoids shipped by 2029 following 2025’s $1B+ raise and $39B valuation. It’s still unclear when Figure will begin sales; Adcock has said full autonomy is a prerequisite. Source: The Information; Mike Kalil
💡 Treat Hark as a new upstream model vendor for embodied workflows: draft data-sharing terms for robot logs, teleop traces, and simulation assets with provenance and revenue-share. Stand up a dual-track pilot—Figure for hardware tasks, Hark for “brains”—with safety gates (domain randomization, red-teaming on failure modes) and measure cost-per-task, MTBF, and autonomy-without-intervention. Prepare IP/export-control reviews and create a “sovereign deployment” option (on-prem or VPC) before negotiating access to Hark/Helix weights or APIs.
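The three pilot KPIs named above (cost-per-task, MTBF, autonomy-without-intervention) reduce to simple ratios. A minimal sketch, with assumed inputs and a hypothetical function name, might look like:

```python
def pilot_metrics(tasks_completed: int, total_cost: float, uptime_hours: float,
                  failures: int, interventions: int, total_actions: int) -> dict:
    """Headline KPIs for an embodied-AI pilot (illustrative formulas).

    cost_per_task:  total pilot spend divided by tasks completed
    mtbf_hours:     mean time between failures, from uptime and failure count
    autonomy_rate:  fraction of robot actions needing no human intervention
    """
    cost_per_task = total_cost / tasks_completed if tasks_completed else float("inf")
    mtbf_hours = uptime_hours / failures if failures else float("inf")
    autonomy_rate = 1 - interventions / total_actions if total_actions else 0.0
    return {
        "cost_per_task": cost_per_task,
        "mtbf_hours": mtbf_hours,
        "autonomy_rate": autonomy_rate,
    }
```

For example, a pilot that completed 200 tasks for $5,000 over 400 uptime hours, with 8 failures and 30 interventions across 600 actions, yields $25 per task, 50-hour MTBF, and 95% autonomy.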

AI Co-Pilot for Bionic Hands Lifts Success to 80–90% in Lab Tests
Researchers at the University of Utah built an “AI co-pilot” for prosthetic hands that uses silicone-wrapped pressure and proximity sensors on each fingertip plus a per-finger controller to create natural, shared-control grasping. Instead of users micromanaging grip force and posture, the hand detects approach and slip, then quietly adjusts each finger while keeping the human in charge—boosting success on fragile tasks (e.g., cups, eggs) from ~10–20% to 80–90% and reducing cognitive load in trials with amputee and intact participants.
The team frames this as assistance—not autonomy—addressing why up to 50% of upper-limb users abandon advanced hands: control burden and limited feedback loops. Next steps include moving from controlled labs to home environments and upgrading the human–machine interface beyond noisy skin EMG to internal EMG or neural implants; the goal is a commercial device combining tactile sensing, shared AI control, and neural interfaces through industry partners. Source: Ars Technica
💡 Stand up a shared-control prosthetics pilot with a rehab partner and device OEM; pre-spec data capture/provenance (tactile, EMG, kinematics) and safety cases. Track KPIs like task success rate, time-on-task, interventions per minute, and NASA-TLX workload, alongside adverse-event logs. Require on-device inference options (edge compute), clinician override, and reproducible tuning pipelines; begin payer discussions early (coding, coverage, outcomes).
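The trial KPIs above can be aggregated per session with a few lines. This is a hypothetical sketch (the function name and inputs are assumptions); the NASA-TLX line uses the unweighted "raw TLX" convention of averaging the six subscale ratings.

```python
def session_kpis(trials: int, successes: int, duration_min: float,
                 interventions: int, tlx_scores: list) -> dict:
    """Per-session KPIs for a shared-control prosthetics pilot.

    tlx_scores: the six NASA-TLX subscale ratings (0-100 each).
    """
    return {
        "task_success_rate": successes / trials,
        "interventions_per_min": interventions / duration_min,
        "nasa_tlx": sum(tlx_scores) / len(tlx_scores),  # raw (unweighted) TLX
    }
```

A 10-minute session with 17 successes in 20 fragile-object trials, 4 clinician interventions, and subscale ratings averaging 40 would report 85% success, 0.4 interventions/min, and a raw TLX of 40.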

Science Corp Unveils Portable Organ-Perfusion Division to Extend Organ Life
Science Corporation—the BCI startup founded by former Neuralink president Max Hodak—launched a new organ-preservation effort to shrink and automate perfusion systems that keep organs and some patients alive when heart/lung function fails. The prototype adds integrated sensors (oxygenation, flow, pressure, temperature) and closed-loop control to cut constant manual tweaks and hospital tethering; a modular design targets kidneys first, with rabbit kidneys maintained ex vivo for up to 48 hours and an internal goal of a month by spring. Hodak positions the work alongside Science’s retinal implant (acquired from Pixium; early patient reading regained) and “biohybrid” neural interface, framing both as longevity tech. The company is aiming below today’s price points (e.g., TransMedics’ ~$250k device plus $40k–$80k per use) and envisions logistics flexible enough to move organs long distances—and possibly sustain select patients beyond ICU constraints. Source: WIRED
💡 Scope a hospital pilot with transplant and critical-care units: define target indications (kidney ex vivo, bridge-to-transplant support), data capture, and safety cases. Track KPIs like viable preservation time, graft function (eGFR at day 7/90), adverse events, staffing hours saved, and cost per organ/hour. Require closed-loop audit logs, remote monitoring, and fail-safe modes; plan payer/HTA pathways early (DRG fit, total-episode economics). Line up IRB and biobanking protocols, negotiate IP/data rights with vendors, and design “portable first” workflows (battery, transport, training) to validate real-world readiness.

OpenAI Opens App Submissions and In-Chat App Directory for ChatGPT
OpenAI is now accepting developer submissions for review and publication inside ChatGPT, paired with a new in-app directory reachable from the tools menu or chatgpt.com/apps. Apps—built with the beta Apps SDK—can bring external context and actions directly into conversations (e.g., ordering, slide creation, apartment search), trigger via @-mentions or the tools menu, and may be surfaced contextually based on conversation signals. Submissions are handled in the OpenAI Developer Platform with MCP connectivity details, testing guidance, metadata, and country availability; the first approved apps will roll out gradually in the new year. Early monetization allows links to developers’ sites/native apps for physical-goods transactions, with more options (including digital goods) under exploration. Safety requirements include usage-policy compliance, clear privacy policies, minimal data access, and explicit user consent with one-click disconnect. Source: OpenAI
💡 Pick 1–2 high-intent workflows that start in chat (e.g., RFP drafting → e-signature, expense QA → ERP post) and ship tightly scoped, tool-calling apps using the Apps SDK. Prepare for review: MCP integration, privacy policy, logging, and child-safe content defaults; add directory metadata, countries, and deep-link routes for growth. Define KPIs—attach rate from directory, task completion rate, end-to-end latency, cost per completed action, and NPS—and instrument analytics from day one. For enterprise: implement data minimization, tenant isolation, auditable action logs, and least-privilege credentials; run red-team tests and Evals on safety and reliability. Plan monetization paths (lead-out to web/native now; prototype subscriptions/digital goods later), and stand up a feedback loop to iterate on prompts, UI, and tool schemas as usage patterns emerge.
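Since the Apps SDK is still in beta, the sketch below avoids claiming any of its real API surface; it only illustrates the general tool-calling shape such an app reduces to: a JSON-schema tool definition plus a dispatcher. The tool name `draft_rfp_section` and both functions are hypothetical, modeled on the RFP-drafting workflow suggested above.

```python
# Hypothetical tool definition in the JSON-schema style common to
# MCP-like tool-calling protocols (field names are illustrative).
RFP_TOOL = {
    "name": "draft_rfp_section",
    "description": "Draft one section of an RFP from a short brief.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "section": {"type": "string"},
            "brief": {"type": "string"},
        },
        "required": ["section", "brief"],
    },
}

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Dispatch an incoming tool call; a real app would route to
    business logic (templates, ERP, e-signature) here."""
    if name != RFP_TOOL["name"]:
        return {"error": f"unknown tool: {name}"}
    # Placeholder logic: return a structured draft stub
    return {
        "section": arguments["section"],
        "draft": f"[Draft for: {arguments['brief']}]",
    }
```

Keeping the schema tight (two required string fields, nothing optional) is also what makes the data-minimization and least-privilege requirements above auditable: the app can only ever receive what the schema admits.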

Congratulations to our September Cohort of the Chief AI Officer Program!
Sponsored by World AI X
Manju Mude (Cybersecurity Trust & Risk Executive, Independent Consultant, USA)
Ommer Shamreez (Customer Success Manager, EE, United Kingdom)
Lubna Elmasri (Marketing and Communication Director, Riyadh Exhibitions Company, Riyadh, Saudi Arabia)
Bahaa Abou Ghoush (Founder & CEO, Yalla Development Services, UAE)
Nicole Oladuji (Chief AI Officer, Praktikertjänst, Sweden)
Thomas Grow (Principal Consultant, Digital Innovation, MindSlate, USA)
Samer Yamak (Senior Director, Consulting Services)
Nadin Allahham (Chief Specialist, Strategic Planning, Government of Dubai Media Office)
Craig Sexton (Cybersecurity Architect, SouthState Bank, USA)
Ahmad El Chami (Chief Architect, Huawei Technologies, Saudi Arabia)
Shekhar Kachole (Chief Technology Officer, Independent Consultant, Netherlands)
Manasi Modi (Process & Policy Developer, Government of Dubai Media Office)
Shameer Sam (Executive Director, Hamad Medical Corporation, Qatar)
About The AI Citizen Hub - by World AI X
This isn’t just another AI newsletter; it’s an evolving journey into the future. When you subscribe, you're not simply receiving the best weekly dose of AI and tech news, trends, and breakthroughs—you're stepping into a living, breathing entity that grows with every edition. Each week, The AI Citizen evolves, pushing the boundaries of what a newsletter can be, with the ultimate goal of becoming an AI Citizen itself in our visionary World AI Nation.
By subscribing, you’re not just staying informed—you’re joining a movement. Leaders from all sectors are coming together to secure their place in the future. This is your chance to be part of that future, where the next era of leadership and innovation is being shaped.
Join us, and don’t just watch the future unfold—help create it.
For advertising inquiries, feedback, or suggestions, please reach out to us at [email protected].