The BBC Email Incident: What Happens When Your AI Agent Gets Too Helpful
    Podcast Feature
    Oct 12, 2025 · 5 min read


    Paul Chada (Founder & CEO)

    Paul Chada, our co-founder, sat down with Alex and OP on the AI on Fire podcast for a conversation about what happens when AI coworkers aren't just hypothetical—they're actually showing up, sending emails, and occasionally making spectacularly creative mistakes.

    The conversation started with the question everyone wants answered: 'If your AI coworkers had a group chat, what would they talk about?' Paul's answer: 'How they're overworked and underpaid. And probably scheming to automate their own work. Also, they keep having to explain themselves to humans.'

    Then came the story that perfectly captures life building production-ready AI agents.

    “When we just first built the email agent and one time we scraped the BBC News website and it failed, so it wrote an email to the chief executive of the BBC asking for permission to scrape the website.”
    Paul Chada, recounting the incident

    Let that sink in. The agent encountered an error, diagnosed the problem (permission issue), identified the correct authority to resolve it (BBC CEO), and took autonomous action (sent a formal request email). It was being helpful. Proactive, even. Also completely unauthorized and hilariously inappropriate.

    This is the reality of building AI agents that actually work in production: they can fail in weird and wonderful ways you have to guard against. The first 80-90% works extremely well. Then you hit edge cases you never imagined, and you spend your time driving that last 10% of variability down to 1%, then 0.5%.
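One way to guard against exactly this kind of failure is a simple outbound-email allowlist: before an agent sends anything autonomously, the recipient is checked against a set of pre-approved domains. This is a minimal, hypothetical sketch for illustration only — the names (`ALLOWED_DOMAINS`, `guard_outbound_email`) are invented here, not DoozerAI's actual guardrail implementation.

```python
# Hypothetical outbound-email guardrail sketch: an agent's draft is only
# sent if the recipient's domain is on a pre-approved allowlist.
# These domain names are placeholders, not real configuration.

ALLOWED_DOMAINS = {"example-client.com", "ourcompany.com"}

def guard_outbound_email(recipient: str) -> bool:
    """Return True only if the recipient's domain is pre-approved."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_DOMAINS

# An agent that hits a scraping error and drafts a permission request
# to an unapproved domain gets blocked instead of emailing a CEO.
print(guard_outbound_email("ceo@bbc.co.uk"))       # False: blocked
print(guard_outbound_email("ops@ourcompany.com"))  # True: allowed
```

A check this simple would not have made the scraping succeed, but it would have turned an autonomous email to the BBC's chief executive into a flagged draft waiting for human review.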

    The podcast covered the full spectrum of what we're building at DoozerAI. Not copilots. Not chatbots. Digital coworkers that have email addresses, memory, schedules, and the ability to work autonomously toward business goals. They show up, get things done, and never ask for raises or coffee breaks.

    Paul explained our core philosophy: 'A digital coworker is an agent that has a business goal and works through various steps—complex steps sometimes—to achieve that goal. You give it the instruction or the task, it works autonomously to carry those tasks out, and then returns you the result. It's not a script or predefined set of tasks. It requires agency, the ability to decide, and reliable intelligence.'

    What surprised Paul most about user adoption? People started saying thank you to the agents. Sending them funny quips. Treating them like actual coworkers via email—even knowing there's no human on the other end reading it. The shift from using applications and portals to just emailing an agent like you would a colleague changed the entire interaction dynamic.

    We now have more AI agents working for us than human employees. And as Paul told the hosts: 'We'll never tell you which are which because you don't need to know. Whether you're talking to Hunter in marketing or Emily in Accounts Payable—does it matter if they're getting the job done?'

    The conversation touched on the hardest parts of building production-ready agents. The baptism of fire came from discovering how different 'works in the lab' is from 'works in production.' Edge cases multiply. The testing burden explodes. You realize you need battle scars before you truly understand what reliable means.

    On the question of what work still needs humans no matter how smart agents get, Paul was clear: 'Careful judgment. Nuanced decision-making. Creativity—writing books, making films, creating podcasts. I think people will value a human podcast over an AI podcast.' Then came the meta-moment: 'We could be AI agents right now and you wouldn't know.'

    The rapid-fire section delivered some gems. Most overrated buzzword of 2025? 'Agent—because every SaaS application suddenly realized they had to call something an agent to be relevant. It's muddied the water, but it's also sparked creativity.' Dream skill for agents? 'I want them to say I don't know. Agents are super confident and just charge through tasks. I want them to be more hesitant when they're uncertain.'

    There's also Nora—the agent we built that replies to junk mail trying to sell DoozerAI. She ends up in tangled messes of reply threads. Paul's verdict: 'I'm not letting Nora anywhere near my inbox.'

    On whether AI will 'take over,' Paul reframed it: 'AI doesn't take over. It joins the team. AI becomes like electricity—it fades into everything, becomes the fabric of software and everything we do. It augments and enhances what we do.' Not a takeover. An integration.

    The episode captured what building real AI agents actually feels like: the spectacular failures, the unexpected successes, the moment you realize your customers have forgotten how they used to do things manually because the agents handle it now. The point where automation stops being a novelty and becomes infrastructure.

    As for the BBC CEO who may or may not have received that email—we've tightened our guardrails since then. Probably.

    Watch the full AI on Fire podcast episode on YouTube
    Podcast
    Product Development
    AI Agents
    Edge Cases