AI is growing up, and so is customer experience

Last week in Berlin, I joined an AI Breakfast hosted by MUUUH! (Parloa’s longest-standing services partner) during CCW Berlin. About 45 leaders from more than 20 companies, sitting down early in the morning to talk candidly about where AI is actually headed in customer experience. Those are my favorite conversations — the ones where we move past hype and start talking about what’s working and, honestly, what’s still hard.
What stayed with me, more than any single prediction, was how quickly the center of gravity has shifted. A couple of years ago, we were still debating whether conversational AI was viable at scale. Now the discussion is about how to architect systems where AI continuously improves AI, and what that means for companies trying to serve customers in real time.
From rule-based to GenAI-native
When Parloa first started, conversational systems were largely rule-based. Structured flows, predefined paths. That era taught us a lot about design discipline, and much of what we learned still applies.
Stefan Ostwald, Parloa’s technical founder and now head of R&D, talked with Ben Ellerman, Managing Director of MUUUH!, about something that I think is easy to miss: we’re no longer just building agents that respond. We’re building systems that evaluate, simulate, and optimize. One agent can analyze the performance of another. Real customer conversations feed back into the system. Even implicit company policies (the unwritten guardrails that live in people’s heads rather than any documentation) can be surfaced and reinforced through AI training loops.
For a long time, AI helped humans make the core decisions. What Stefan’s team is delivering is a model where AI handles more of the primary work, with humans supervising and shaping outcomes.
It’s a quiet shift, but it changes the operating model entirely.
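To make the feedback-loop idea concrete, here is a deliberately toy sketch of one agent evaluating another, with low-scoring conversations collected for retraining. Every name here is invented for illustration; this is not Parloa’s implementation, and a real evaluator would use an LLM-based rubric rather than string checks.

```python
# Illustrative sketch of "AI improving AI": one agent answers, a second agent
# scores the answer, and weak conversations are fed back for review/retraining.
# All function names are hypothetical, not a real product API.

def support_agent(question: str) -> str:
    """Toy responder; stands in for a production conversational agent."""
    return f"Thanks for asking about {question!r}. Here is what I found."

def evaluator_agent(question: str, answer: str) -> float:
    """Toy judge: rewards answers that mention the topic and are well-formed."""
    score = 0.0
    if question.split()[0].lower() in answer.lower():
        score += 0.5  # answer references the customer's topic
    if answer.endswith("."):
        score += 0.5  # answer is a complete sentence
    return score

def feedback_loop(questions: list[str], threshold: float = 0.75) -> list[str]:
    """Return the questions whose answers scored below threshold."""
    needs_review = []
    for q in questions:
        a = support_agent(q)
        if evaluator_agent(q, a) < threshold:
            needs_review.append(q)
    return needs_review
```

The interesting part is the closed loop: the evaluator’s output becomes training signal for the responder, which is how implicit policies can be surfaced and reinforced over time.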
When agents represent both sides
One of the most interesting parts of the discussion centered on what some are calling “customer proxy agents.”
Imagine this: You’re buying a car. Instead of scheduling a test drive yourself, you ask your personal voice AI agent to handle it. It knows your preferences, your calendar, your constraints. It reaches out, compares options, coordinates availability, and comes back with recommendations.
On the other side of that interaction is the company’s agent. Their system is designed to represent the organization’s policies, product knowledge, service standards, and brand voice.
When those two systems meet, the conversation isn’t just automation layered onto a website. It’s two representatives interacting with each other — no humans necessary.
That requires a big mindset shift for companies deploying agents. It’s fundamentally different from deploying a typical bot.
If you think in terms of a bot, you’re usually solving for a narrow problem: deflect tickets, answer FAQs, automate a defined workflow. The bot sits at the edge of the experience.
But when agents begin representing both the customer and the enterprise, you have to think so much bigger. The system isn’t just retrieving information. It’s interpreting intent, applying policy, making decisions within guardrails, and coordinating across multiple systems.
That requires orchestration.
APIs and structured system calls still matter, but real customer conversations rarely start with perfectly structured requests. They begin with ambiguity (“I’m thinking about changing something,” or “What’s my best option here?”).
A conversational agent becomes the connective layer between that ambiguity and the structured systems behind it. It translates intent into action. It reconciles business rules and ensures that what’s said aligns with how the company operates.
Composability is a strategic choice
Another theme that came up during the breakfast was composability. Large enterprises don’t operate in clean environments. They’ve made investments, they have existing vendors they trust, and they have internal standards and governance requirements that don’t disappear because a new platform came along.
As a result, we’re seeing real demand for modularity, or the ability to bring in conversational orchestration, analytics, and integration layers without dismantling what’s already working.
That’s where professional services partners like MUUUH! come in. They know how to translate platform capability into what an enterprise actually needs to deploy, and they’ve built accelerators and integrations that help customers move faster without taking on unnecessary risk. When the technology is still evolving as fast as it is, that kind of partner is genuinely essential.
MUUUH! is Parloa’s longest-standing professional services partner, and in the last two years we recognized them as our Top implementation partner during our annual WAVE Soundcheck partner conference. Using our AI Agent Management Platform (AMP), MUUUH! takes a genuinely holistic approach to voice AI agents where strategy, use case design, integration, and optimization are all connected. They’ve delivered hundreds of projects across industries like energy, retail, telecom, finance, and logistics, working with brands such as HSE (where they helped automate over 3 million customer calls annually), SwissLife, OBI, and Deutsche Glasfaser to translate ambition into scalable systems. Our customers have achieved automation rates as high as 90% with measurable improvements in efficiency and customer satisfaction.
The cultural shift: Germany and the U.S.
As Parloa has grown from a small Berlin team to a global organization of more than 400 people, we’ve had a front-row seat to different approaches to innovation.
German companies tend to bring a strong bias toward precision. Decisions are carefully evaluated, systems are designed with rigor, and there’s a deep awareness of risk. That all makes sense when you’re operating in highly regulated environments or running mission-critical infrastructure.
In the United States, companies often lean toward speed. Run a pilot. Learn quickly. Iterate. Let the results guide the next move.
Both mindsets are valuable.
AI isn’t a playground technology anymore. It’s increasingly being deployed in environments where mistakes have real consequences: global customer operations, regulated industries, and complex service workflows.
It’s easy to launch an AI assistant for something simple. Joe’s Pizza can probably figure that out in an afternoon. But many of the companies we work with operate at a completely different level of complexity. They serve customers across multiple regions, integrate with large technology ecosystems, and manage interactions where accuracy and compliance really matter.
In those settings, the precision we were built on becomes a strength. Combined with the U.S. tendency toward fast, iterative innovation, we’re building AI systems that can move quickly while still meeting the standards enterprises require.
Every interface becomes conversational
Pull back far enough and the direction becomes clearer. Chat, voice, web, and mobile are channels, but customers don’t experience them as separate systems. They experience a need.
They want to change a flight, resolve a billing issue, or compare product options. The channel is simply the doorway they happen to walk through to get their need met.
What’s evolving is the interaction layer itself.
When conversational AI is designed multimodally, it can surface structured data, trigger transactions, present visual elements, and adapt in real time, all within a single, continuous interaction. A conversation might filter products on a website, retrieve account history, and initiate a workflow without the customer consciously switching contexts.
The experience feels cohesive because the conversation becomes the throughline.
Over time, that makes traditional channel boundaries less relevant. The customer doesn’t have to think about whether they’re “in chat” or “on the web.” They’re just interacting.
I left the breakfast thinking less about any specific technology and more about the underlying architecture of what we’re building. It’s pretty clear that building well in this next period will require systems that can learn continuously, represent brands faithfully, and handle a world where AI is increasingly talking to AI.
That’s the work ahead of us, and the people who were in that breakfast room are already doing it.
Book a demo