
    User experience in the age of agentic delegation

    Source: TechCabal


Around 70% of new roles emerging across the global job market now expect some exposure to agentic systems. Not “AI awareness”, not prompt writing, but actual experience delegating work to autonomous agents, and knowing when not to.

The landscape of user experience is undergoing its most dramatic transformation since the internet. We’re shifting from a world where humans click buttons to one where AI agents act on our behalf, making decisions, executing tasks, and orchestrating workflows autonomously. This isn’t just another interface update; it’s a fundamental reimagining of how humans and technology interact.

    From direct manipulation to intelligent delegation

    Traditional UX design was built on a simple premise: create intuitive interfaces that guide users through predetermined pathways. Every button, every menu, every screen transition was carefully crafted to help humans accomplish specific tasks. The user was always in control, making every decision.

    Agentic experience (AX) design flips this model entirely. Instead of navigating interfaces, users now express intent like “find me a suitable gift for my colleague under $50” and AI agents handle the execution. They search, compare options, make recommendations, and can even complete transactions while keeping users informed and in control.

Forbes research shows that 99% of enterprise developers are either exploring or actively developing AI agents, and projections suggest one billion AI agents will be operational by 2026. This isn’t distant speculation; it’s happening now.

    The core principles of agentic design

    Intent over interface

There is a fundamental shift from designing screens to designing for outcomes. Rather than asking “How do we make this feature discoverable?”, designers must now ask “How do we ensure our agents understand what users truly want, and when to act?”

    This requires moving beyond traditional user journey mapping to intent-system mapping. Designers must understand the capabilities of each agent, how they coordinate, and how user goals are translated into agent actions.
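Intent-system mapping can be made concrete as a capability registry. The sketch below is a hypothetical, minimal example (the agent names and intents are invented for illustration): each agent declares which user intents it handles and whether it must pause for confirmation, and a router translates a user goal into an agent action.

```python
from dataclasses import dataclass, field

@dataclass
class AgentCapability:
    """One agent's declared capability, mapped out by the designer."""
    agent: str
    handles: set = field(default_factory=set)   # intents this agent can act on
    needs_confirmation: bool = False            # pause for user approval first?

# Hypothetical capability map for a gift-shopping assistant
CAPABILITIES = [
    AgentCapability("search_agent", {"find_product", "compare_prices"}),
    AgentCapability("checkout_agent", {"complete_purchase"}, needs_confirmation=True),
]

def route_intent(intent: str):
    """Translate a user goal into the agent responsible for it."""
    for cap in CAPABILITIES:
        if intent in cap.handles:
            return cap.agent, cap.needs_confirmation
    return None, True  # unknown intent: escalate to a human by default

print(route_intent("find_product"))       # low-stakes search runs autonomously
print(route_intent("complete_purchase"))  # spending money requires confirmation
```

Note the default in the last line of the router: anything the map does not cover escalates rather than guesses, which is the design stance the IBM and Klarna stories later in this piece argue for.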

    Transparency through context

    Agentic systems must balance two seemingly contradictory needs: operating invisibly in the background while maintaining complete transparency. The best agents work seamlessly behind the scenes but provide clear visibility into their actions, reasoning, and decision-making processes.

    Users need to understand not just what agents do, but why they do it. For example, when an AI tool recommends a product or takes an action, it must explain its reasoning in human terms, communicate confidence levels, and acknowledge limitations.
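One way to make that requirement enforceable is to structure every agent decision so it cannot be surfaced without its reasoning, confidence, and caveats. This is a hedged sketch, not a prescribed schema; the field names and the example recommendation are invented:

```python
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str           # what the agent did or recommends
    reasoning: str        # the "why", in human terms
    confidence: float     # 0.0 - 1.0
    limitations: list     # acknowledged gaps in the agent's knowledge

def present(decision: AgentDecision) -> str:
    """Render a decision so the user sees what, why, and how sure."""
    lines = [
        f"Action: {decision.action}",
        f"Why: {decision.reasoning}",
        f"Confidence: {decision.confidence:.0%}",
    ]
    if decision.limitations:
        lines.append("Caveats: " + "; ".join(decision.limitations))
    return "\n".join(lines)

d = AgentDecision(
    action="Recommend the $45 desk organiser",
    reasoning="Fits the 'under $50' budget and the colleague's listed interests",
    confidence=0.72,
    limitations=["No purchase history available for this colleague"],
)
print(present(d))
```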

    Proactive problem-solving

    Unlike traditional systems that wait for commands, agentic AI takes initiative. Intelligent agents resolve issues before they escalate, automatically processing refunds for delayed shipments, rescheduling cancelled appointments, or flagging potential problems before they impact users.
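The shipment example above reduces to an event-driven rule: act when a condition is breached, before the customer has to ask. A toy sketch, with an invented order record and a placeholder where a real system would call refund and notification APIs:

```python
from datetime import date

def check_shipment(order: dict, today: date) -> str:
    """Proactive rule: act on a delay before the customer complains."""
    if today > order["promised_by"] and not order["delivered"]:
        # A real system would call refund / notification APIs here
        return f"refund_issued:{order['id']}"
    return "no_action"

order = {"id": "A100", "promised_by": date(2025, 6, 1), "delivered": False}
print(check_shipment(order, date(2025, 6, 3)))  # two days late: refund issued
```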

Research by CMSwire predicts that by 2029, agentic AI will autonomously resolve 80% of common customer service interactions without human intervention. This proactive capability fundamentally changes the relationship between users and technology.

    The critical role of evaluation

Here’s where many organizations stumble spectacularly: without proper evaluation frameworks, agentic delegation becomes reckless automation.

IBM learned this lesson the hard way in 2023, when the tech giant replaced approximately 8,000 HR workers with an AI assistant called AskHR, designed to handle everything from holiday requests to payroll queries. Initially, the system seemed successful, handling millions of interactions; 94% were routine queries (password resets, policy lookups, and basic inquiries) that the AI resolved efficiently.

However, the remaining 6% of queries exposed catastrophic failures. These involved sensitive workplace issues, ethical dilemmas, and emotionally charged conversations requiring empathy, nuance, and subjective judgment. When employees faced harassment complaints, mental health crises, or complex benefits disputes, the AI floundered. This seemingly small 6% caused massive service disruptions, damaged employee morale, and created dangerous resolution delays. IBM was forced to rehire staff to handle these complex, human-centric scenarios, learning that the most critical interactions are often the least frequent ones.

Swedish fintech company Klarna made a similar mistake. After bragging that AI could do the work of 700 employees and slashing 22% of its workforce in 2024, the company began rehiring humans by mid-2025 when customer satisfaction plummeted. CEO Sebastian Siemiatkowski admitted, “From a brand perspective, it’s so critical that you are clear to your customer that there will always be a human if you want.”

Research from The Chronicle Journal shows that over 55% of organizations that executed AI-driven layoffs now regret the decision. The common thread is their failure to properly evaluate what their agents could actually do versus what they claimed they could do.

    Effective evaluation requires:

    • Continuous monitoring of agent performance across diverse scenarios
    • Clear metrics beyond simple task completion rates
    • Human oversight for complex, sensitive, or high-stakes decisions
    • Feedback loops that capture when agents fail and why
    • Gradual implementation with pilot testing before full deployment
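The requirements above can be sketched as a minimal evaluation harness. This is an illustrative toy, not a production framework: the scenario format, the `sensitive` flag, and the stand-in agent are all invented, and the agent deliberately mirrors the IBM pattern of handling routine queries while failing sensitive ones.

```python
from collections import Counter

def evaluate(agent, scenarios):
    """Score an agent across diverse scenarios, logging *why* it fails,
    not just how often it succeeds."""
    outcomes, failures = Counter(), []
    for s in scenarios:
        result = agent(s["input"])
        if result == s["expected"]:
            outcomes["pass"] += 1
        else:
            outcomes["fail"] += 1
            failures.append({"input": s["input"], "got": result,
                             "expected": s["expected"], "tag": s.get("tag")})
        if s.get("sensitive"):      # high-stakes cases always get a human look
            outcomes["needs_human_review"] += 1
    return outcomes, failures

# Toy agent that only handles routine queries, like the 94% in the IBM story
agent = lambda q: "resolved" if q == "password_reset" else "unknown"
scenarios = [
    {"input": "password_reset", "expected": "resolved"},
    {"input": "harassment_complaint", "expected": "escalate",
     "sensitive": True, "tag": "sensitive"},
]
outcomes, failures = evaluate(agent, scenarios)
print(outcomes)  # one pass, one fail, one case flagged for human review
```

A harness like this, run before deployment and continuously afterwards, is what turns the checklist into a feedback loop: the `failures` log captures the "when and why" of each miss, and the sensitivity flag keeps humans in the loop for the 6%.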

    Building trust through control

    For agentic delegation to succeed, users must maintain ultimate control. This means designing clear mechanisms for overriding decisions, adjusting preferences, and opting out of automation entirely when needed.

    The best agentic designs offer flexible control that scales with user comfort. Early interactions might require frequent confirmation and explanation, while established relationships can support more autonomous action, similar to how trust develops in human relationships.
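Trust that scales with track record can be modelled very simply. The sketch below is one hypothetical policy (the threshold and the success-ratio trust score are assumptions, not a standard): the agent requires confirmation for every action until its approval history crosses a threshold, mirroring how autonomy might grow with demonstrated reliability.

```python
class TrustScaledAgent:
    """Autonomy grows with a running trust score; early actions need confirmation."""
    def __init__(self, threshold: float = 0.8):
        self.successes = 0
        self.total = 0
        self.threshold = threshold

    @property
    def trust(self) -> float:
        # Share of past actions the user approved; 0.0 with no history
        return self.successes / self.total if self.total else 0.0

    def requires_confirmation(self) -> bool:
        return self.trust < self.threshold

    def record(self, approved: bool):
        self.total += 1
        if approved:
            self.successes += 1

agent = TrustScaledAgent()
print(agent.requires_confirmation())  # True: no track record yet
for _ in range(5):
    agent.record(approved=True)
print(agent.requires_confirmation())  # False once trust clears the threshold
```

A rejected action lowers the ratio and can push the agent back into confirmation mode, which is the opt-out and override mechanism the paragraph above calls for.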

Agents must also demonstrate emotional intelligence, especially in sensitive domains. In healthcare, agents should not just diagnose but communicate, presenting clear evidence and acknowledging the uncertainty behind their conclusions. In finance, agents should be able to explain their reasoning when making investment recommendations.

    The path forward

The age of agentic delegation is here, bringing both unprecedented opportunities and significant risks. Success requires more than implementing powerful AI; it demands thoughtful design that maintains human agency, establishes trust, and includes rigorous evaluation frameworks.

Organizations that rush to replace humans with AI without proper evaluation will join IBM and Klarna in the cautionary tale category. But those that thoughtfully design agentic experiences, balancing automation with oversight, delegation with control, and efficiency with empathy, will define the future of human-computer interaction.

    Key takeaways

Intent-driven design replaces interface-centric thinking: AI agents interpret and act on user goals rather than explicit commands

Transparency and explainability are non-negotiable: users must understand what agents do, why they do it, and how confident the system is in its decisions

Rigorous evaluation prevents costly mistakes: test agents across diverse scenarios and maintain human oversight for complex or sensitive decisions

Gradual implementation beats wholesale replacement: companies that fired staff en masse for AI are now scrambling to rehire them

Human control remains essential: even highly autonomous agents need clear mechanisms for user intervention, preference adjustment, and opt-out options

Emotional intelligence matters: agents handling sensitive interactions must recognize context, demonstrate empathy, and know when to escalate to humans

Trust builds over time: successful agentic systems start with high touch and gradually increase autonomy as they prove reliability and users gain confidence