When theCUBE Research Asked: 'What's the Real ROI of Agentic AI?'
    podcast feature
Oct 13, 2025 · 6 min read

Paul Chada (Founder & CEO)

    Last month, I sat down with Scott Hebner from theCUBE Research for their 'Next Frontiers of AI' podcast. The premise: enterprise AI is at a turning point, and nobody's asking the hard questions about what's actually working in production.

    Scott and his team surveyed 625 pre-qualified business and tech AI professionals across 13 industries, asking 61 questions about their AI maturity, investment plans, and real-world results. The findings landed right where we've been living at DoozerAI: strong investment momentum, growing ambition, and a massive trust gap that's limiting ROI to isolated pockets of automation.

    “A lot of executives are saying they're deploying agents because the board is asking, 'What are you doing with AI?' But the real work—the part that drives ROI—is getting the agents to be reliable and accepted by the people who use them.”
    Paul Chada, on the podcast

    The research revealed something we see every day: 61% of companies are either deploying or planning to deploy AI agents within the next 18 months. That's not a pilot number—that's a market shift. But here's the catch: investment is outpacing application, and both are outpacing trust.

Investment Maturity: 3.7/5
Use Case Maturity: 3.1/5
Trust Score: 2.4/5

    Let's break that down. Companies are spending aggressively on AI reasoning and decision intelligence (investment score: 3.7 out of 5). They're deploying use cases across knowledge work, from analysis and reporting to process optimization (use case maturity: 3.1). But trust? It's sitting at 2.4—the lowest score across all measured dimensions.

    Only 49% of enterprise leaders are highly confident that AI agents can make accurate, trustworthy decisions. That number drops to 47% for explainability and 44% for autonomous action. Translation: people trust AI to execute tasks, but not yet to make decisions.

    As I told Scott: 'If the agent isn't dependable, it becomes a shiny object that's quickly cast aside.' We've seen it happen. Companies deploy agents, get excited about the potential, then abandon them when trust doesn't materialize. The agents sit unused while humans go back to doing things the old way.

    The research identified a critical shift happening right now: from automation to decision intelligence. Scott called it the move from 'doing things faster' to 'making better decisions.' Seventy-three percent of companies are planning strategic investments in AI reasoning over the next 18 months. Not content generation. Not basic automation. Reasoning.

    “Every company is now asking how to build reasoning into their workflows. It's not about automating the easy stuff anymore; it's about giving your teams judgment and decision support at scale.”
    Paul Chada

    This parallels what we built the 1,000 interns framework around: the idea that AI capacity isn't just about speed—it's about judgment. What would you do if you had 1,000 smart, tireless digital workers who could help you make better decisions, not just execute faster tasks?

    Here's where the conversation got interesting. Scott framed generative AI and LLMs as the 'gateway to AI'—essentially the browser of the AI era. Just like browsers were the starting point for the internet but not the end game, GenAI is the foundation for something bigger: agentic AI with actual reasoning capabilities.

    The research showed that 62% of companies now see AI agents as a key part of decision-making, not just automation. That's the frontier moving upward—from procedural tasks to knowledge work that requires domain-specific judgment.

    I shared our field experience: 'There's a big difference between using AI to do work faster and using it to help people think better. The best deployments we're seeing are the ones where agents help humans see what they might have missed—and then make the right call with confidence.'

    The trust gap isn't just technical—it's cultural. Deploying agents isn't like installing software. It's more like hiring employees. Your team needs to believe their digital counterparts will behave consistently, explain their reasoning, and improve over time. That's why we run agents in parallel with human teams until confidence in decision quality is established.

    “Organizations are just wanting to run it in parallel right now. As they see it make the correct decision repeatedly, trust builds naturally, and that's when autonomy becomes acceptable.”
    Paul Chada
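If you're wondering what "running in parallel" looks like mechanically, here's a minimal sketch of that shadow-mode pattern. Everything in it is illustrative rather than pulled from the research or any specific product: the `ShadowRun` class, the 95% agreement bar, and the 200-decision minimum are assumptions you'd tune to your own risk tolerance. The agent's recommendation gets logged next to the human's actual decision, and autonomy only enters the conversation once observed agreement clears your bar.

```python
from dataclasses import dataclass

@dataclass
class ShadowRun:
    """Illustrative shadow-mode tracker (hypothetical names and thresholds).

    The agent recommends, a human still decides, and we log how often the
    two agree before proposing any autonomy for the agent.
    """
    autonomy_threshold: float = 0.95  # required agreement rate (assumption)
    min_samples: int = 200            # decisions to observe first (assumption)
    agreements: int = 0
    total: int = 0

    def record(self, agent_decision: str, human_decision: str) -> None:
        # Compare the agent's shadow recommendation to the human's actual call.
        self.total += 1
        if agent_decision == human_decision:
            self.agreements += 1

    @property
    def agreement_rate(self) -> float:
        return self.agreements / self.total if self.total else 0.0

    def ready_for_autonomy(self) -> bool:
        # Only propose autonomy after enough evidence has accumulated.
        return (self.total >= self.min_samples
                and self.agreement_rate >= self.autonomy_threshold)


# Usage: every workflow decision passes through both the agent and the human.
run = ShadowRun()
run.record(agent_decision="approve", human_decision="approve")
run.record(agent_decision="escalate", human_decision="approve")
print(f"Agreement so far: {run.agreement_rate:.0%}, "
      f"autonomous: {run.ready_for_autonomy()}")
```

The point of the pattern isn't the bookkeeping; it's that trust is earned from evidence the team can see, which is exactly the dynamic the quote above describes.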

    Scott summed it up perfectly: 'Trust is emerging as the currency of innovation. No trust, no ROI.' And he's right. You can have the best technology, the biggest budget, and the most ambitious roadmap. But if your people don't trust the agents to make sound decisions, agentic AI will remain a parallel experiment rather than a full participant in the workforce.

    The research made it clear: the next two years will define the winners—those who build trusted AI ecosystems that scale decision intelligence—and the laggards who remain trapped in experimentation. And if you fall too far behind, you may never catch up.

    Watch the full conversation on YouTube to hear the complete discussion about where agentic AI is headed and what it takes to realize actual ROI in production environments. The research is comprehensive, the insights are specific, and the implications are immediate.

    Watch the full theCUBE Research discussion on YouTube
Research · Decision Intelligence · Trust · Industry Analysis