
AI.
Everywhere.
Bots, agents.
Talking or directing?
The customer chooses. Clicks away.
No review. No complaint. And the customer vanishes.
The short answer: no. At least not when AI handles the conversation itself.
Research agency Y.digital and market research firm Markteffect publish the National Voice Monitor every year. The seventh edition (2026, more than 1,000 respondents) shows that around 12% of questions to a chatbot or telephone voice assistant are answered correctly. Poorly functioning bots now top the list of greatest frustrations in customer service. The preference for human contact has not declined in 2026, despite all the progress in AI. The phone, at 77%, is still in the top 3 of favourite contact channels. Email follows (59%), then live chat (37%) and WhatsApp (29%). Chatbots come in last with 3%.
Poorly functioning bots now top the list of greatest frustrations in customer service.
Research agency Newcom tells the story from the other side. The AI Monitor 2026 (March 2026, with 2,504 respondents) shows that 7.2 million Dutch people aged between 18 and 65 use AI in their daily lives. In the workplace, 4.6 million people use it. AI has, in the space of a few years, broken through from something new to something ordinary. Two things are therefore true at the same time. AI is everywhere, and people have become good at it. And at the same time, the Dutch want to speak to a human as soon as something matters.
AI is moving too fast, with too little control upfront.
Three kinds of danger.
The first is legal. Lawyer Arnoud Engelfriet wrote on his blog Ius Mentis in April 2026 that a chatbot which makes things up (in technical jargon: “hallucinates”) amounts to misleading customers. A chatbot that says you have no telephone customer service when you actually do? That means a fine. The underlying rule is simple. As a business, you yourself are responsible for what your systems say, whether that is a human or a bot.
The second is privacy. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens), the Netherlands’ privacy supervisor, again warned in January 2026 against sharing personal data through AI chatbots. The reason: several data breaches, including at a GP service and a telecoms company. In March 2026, the sixth edition of the Algorithm and AI Risk Report appeared, in which the regulator set its risk indicator to red. The message: things are moving too fast, with too little control upfront.
The third is compliance. From 2 August 2026, new rules apply under the European AI Act. Customers must know they are talking to an AI system and not a human. Under Article 99 of the AI Act, fines for so-called high-risk systems run up to 15 million euros or 3% of global annual turnover, whichever is higher. For prohibited AI practices, the maximum reaches 35 million or 7%. For SMEs, in practice, that means: a single misstep can mean the end of the business.
An American cloud means that the agreements with your customers in effect fall under American law.
Yes, under conditions.
The Dutch Data Protection Authority published a comprehensive vision document in February 2026 titled “Moving Forward Responsibly” (Verantwoord vooruit). Three requirements stand out. You must map the risks upfront. You must keep control over your own data. And your staff must understand how the system works. The place where your data is stored matters. The moment customer data is processed or stored outside Europe, the legal risks rise quickly. An American cloud means that the agreements with your customers in effect fall under American law. Many businesses only realise this when it is already too late.
AI at the front of the conversation is rejected.
The Voice Monitor shows a clear pattern. People are negative about AI in places where nuance counts. AI that asks why you are calling and then transfers you to a member of staff dropped from 59% positive in 2025 to 52% in 2026. AI that recognises you before a member of staff speaks to you dropped from 55% to 49%. On one point the appreciation stays high. AI that produces a summary for the staff member after the conversation scores 40% positive, 38% neutral and 22% negative.
The message is clear. AI in the background, in service of the staff member and the record, is accepted. AI at the front of the conversation is rejected. That distinction is no coincidence. Sometimes no answer is also an answer. A human can stay silent, ring back, let something sink in for a day, ask follow-up questions. A human notices when it goes quiet on the other end. AI fills the silence with an answer, and the wrong answer is more expensive than no answer.
That human has the overview, because everything comes in at one place.
Behind the scenes, not in the conversation.
RippleCom is the all-in-one around the business phone number. Calls, SMS and WhatsApp come together on one number, recorded and findable. AI does there exactly what the Voice Monitor shows that customers accept. Sorting conversations by topic. Writing summaries for the staff member. Suggesting plans. Organising records. The conversation itself is for a human. The customer speaks with someone who can look back over the entire history on that phone number. That human has the overview, because everything comes in at one place.
That solves three problems at once. It is GDPR-compliant, because data sits on European servers and the communication runs through Meta’s official WhatsApp Business channel. There is no fuss about who said what on behalf of the company, because a human replies and everything is recorded properly. And there is no risk of a chatbot promising something in your name that cannot be done, with a lawsuit afterwards.
The figures match that choice. Orthodontic practice OrthoEuregio reduced no-shows by 75%. WhatsApp messages are read by an average of 98% of recipients. The municipality of Borne reached 20% more residents in a sustainability project. None of these results came about through a bot pretending to be a human. They came about because a human, through the right channel, asked the right question at the right moment.
AI belongs in the background. Not at the table.
RippleCom uses AI exactly where it works: classifying, summarising, structuring, recording, making traceable.
Conversation stays with the human. One phone number.
Calls, SMS, WhatsApp. One place.
So it lands.
Humans answer.
Always.
Manageable.
Want to know how much you are losing each month? ➡️ Communication Scan