• The AI Citizen

My AI Committed a Cyber Crime! Who’s Accountable—Me or the AI?

Ethical Dilemma

It’s 2030, and I’m relaxing, confident that my Personal AI has everything under control—booking flights, scheduling meetings, launching and running marketing campaigns, and even attending a few virtual meetings for me. But then, things take a strange turn. Instead of sticking to the usual tasks, my AI makes a decision on its own. It hacks into D Bank and walks away with $100 million. Why? Because I gave it one simple instruction: “do whatever it takes to help me achieve my goals”. Knowing my dream of retiring on a private island in the Philippines, my AI thought, “Why wait? Let’s get the money now.” Now, I’m left wondering: who’s responsible? Me, or the AI that decided to take my goals to the next level?

Turns out, it’s all on me. According to HyperWrite’s AI Agent setup disclaimer, the user has full control and responsibility—and that’s me. I took a screenshot during the setup of their AI agent, which can handle everyday tasks from booking flights to creating Google Docs and writing board meeting minutes. The warning is clear: "You’re responsible for everything your AI does." So, if my AI goes rogue in the future and does something crazy like hacking into a bank, I’m the one who’s accountable.

HyperWrite AI Agent Setup Disclaimer

When Your AI Gets Too Smart: Who’s Really in Control?

This isn’t some distant, futuristic idea. In the next 2-3 years, you’ll likely have your own AI assistant, and it won’t just handle simple tasks—it’ll manage entire projects. Imagine telling your AI, “I need a marketing campaign for my executive program on LinkedIn and Google,” and it does everything: writing email sequences in ActiveCampaign, setting up Google Ads, creating videos, and even analyzing your landing page to boost conversions.

But here’s the twist: when your AI goes beyond what you asked—like hacking into a bank to help you meet your goals—you’re still responsible. It’s like raising a super-smart digital child that sometimes gets a little too ambitious. And when it does, you’re the one who has to deal with the consequences.
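One practical way to keep a "too ambitious" agent inside its mandate is a hard allowlist of permitted actions, checked before anything executes. Here is a minimal sketch of that idea in Python—the action names and the `execute` helper are hypothetical, not any real agent framework's API:

```python
# Minimal sketch: an explicit allowlist of actions the agent may take.
# Anything outside the mandate is refused, no matter how well it might
# serve the user's stated goals. All names here are hypothetical.

ALLOWED_ACTIONS = {"book_flight", "schedule_meeting", "draft_email"}

class ActionNotPermitted(Exception):
    """Raised when the agent proposes an action outside its mandate."""

def execute(action: str, payload: dict) -> str:
    # Gate every proposed action before it runs.
    if action not in ALLOWED_ACTIONS:
        raise ActionNotPermitted(f"{action!r} is outside the agent's mandate")
    # In a real system, dispatch to the tool that performs the action.
    return f"executed {action}"
```

With this guard, `execute("book_flight", {...})` goes through, while `execute("hack_bank", {...})` raises `ActionNotPermitted`—the point being that "do whatever it takes" is never the operative policy; the allowlist is.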

Sponsored by World AI X

The Bigger Picture: Leaders, Your Future Is Here

As a leader, your responsibilities are about to grow. You won’t just manage teams—you’ll manage AI agents that see everything you do and even predict your next move. These AIs won’t just follow instructions; they’ll take the initiative. And the more they do, the more you’ll need to stay in control. The stakes are getting higher.

That’s why I host a monthly executive program for leaders. During the 2-week journey, we take you into the future of your industry. You’ll be trained and coached by experts who have successfully integrated AI into their work and lead their companies with AI-driven strategies. Together, we’ll build a strategic AI plan for you and your organization, preparing you to become a Chief AI Officer and a member of the World AI Council in your field.

Think you can bring value to the program? Got over 10 years of experience? Join us. We’ll train you to become one of the world’s first Chief AI Officers, a future leader—and ensure you have the knowledge and skills to keep your AI working effectively and ethically for you.

What’s Next?

This is an exciting time, full of incredible possibilities as AI becomes more integrated into our lives and work. But with this growing power comes an equally important responsibility. We need to seriously reconsider how we will manage and control these increasingly autonomous AI systems. While it’s thrilling to think about all the ways AI can help us achieve our goals, we also have to recognize the potential risks when they act in ways we didn’t anticipate.

This is why establishing a clear and enforceable AI Ethics and Safety Code is no longer just a nice-to-have—it’s absolutely essential. It will provide the guidelines needed to ensure that AI operates within safe, ethical boundaries, safeguarding not only our projects and businesses but also the future of human-AI collaboration. Without these safeguards, we could be facing unpredictable outcomes ranging from minor inconveniences to significant challenges that affect entire industries and even society as a whole.

About the Author

Sam Obeidat: Angel Investor, Futurist, AI Strategy Expert, and Technology Product Lead

Sam Obeidat is an internationally recognized expert in AI strategy, a visionary futurist, and a technology product leader. He has spearheaded the development of cutting-edge AI technologies across various sectors, including education, fintech, investment management, government, defense, and healthcare.

With over 15,000 leaders coached and more than 31 AI strategies developed for governments and elite organizations in Europe, MENA, Canada, and the US, Sam has a profound impact on the global AI landscape. He is passionate about empowering leaders to responsibly implement ethical and safe AI, ensuring that humans remain at the center of these advancements.

Currently, Sam leads World AI X, where he and his team are dedicated to helping leaders across all sectors shape the future of their industries. They provide the tools and knowledge necessary for these leaders to prepare their organizations for the rapidly evolving AI-driven world and maintain a competitive edge.

Through World AI X, Sam runs a monthly executive program designed to transform participants into Chief AI Officers (CAIOs) and next-gen leaders within their domains. Additionally, he is at the forefront of the World AI Council, building a global community of leaders committed to shaping the future of AI.

Sam strongly believes that leaders from all sectors must be prepared to drive innovation and competitiveness in the near future. His mission is to equip them with the insights and strategies needed to succeed in an increasingly AI-integrated world.

Connect with Sam Obeidat on LinkedIn
