AI washing is real. So is the shift. Let's be honest about both.
As co-CEO of Boldare, I navigate two distinct conversations about AI regularly. One happens with clients trying to understand what AI means for their business. The other is internal, within a company that builds digital products and has spent the past two years incorporating AI into its own operations. That dual vantage point is why I’ve been closely watching a pattern that’s distorting both conversations simultaneously.
That pattern is called AI washing. And despite the coverage it’s received, I think something important is still missing from the discussion.

What AI washing actually is – and isn’t
AI washing rarely looks like a blatant lie. More often, it’s a matter of emphasis. A company deploys a chatbot and characterizes its entire customer strategy as “AI-powered.” A cost-cutting initiative gets relabeled as an “AI transformation.” An organization speaks confidently about autonomous systems while human employees quietly manage quality control and risk behind the scenes.
You don’t need to say anything technically false to engage in washing. You just need to spotlight the most favorable slice of reality and omit the rest.
The data supports this. Research suggests that roughly 40% of European AI startups in 2019 used virtually no AI whatsoever. A study from RWTH Aachen found that 78% of organizations report their AI purchases fall short of promised capabilities. Regulators are beginning to respond: in 2024, the SEC fined two investment advisory firms – Delphia and Global Predictions – $400,000 for misrepresenting their AI capabilities. These were the first penalties of their kind, and almost certainly not the last.
The form of AI washing I find most troubling, however, is when “AI” is invoked as a moral cover for decisions that are fundamentally human. AI doesn’t lay people off. People do – boards, owners, executives. SAP’s announcement of 10,000 job cuts in 2025, framed as a pivot to AI and cloud, illustrates the point: observers have noted that the pace of layoffs is outrunning the company’s actual AI deployment. Technology may be reshaping the nature of work – and increasingly it is – but it isn’t some neutral external force acting upon companies. Using it as a justification rather than a context obscures accountability in a way we should all challenge.
What I see from the inside
I also need to say this clearly: the shift is real, and it’s already happening.
At Boldare, our teams’ ways of working have changed considerably over the past two years. Across nearly every team, AI agents now function as active contributors in daily operations – handling tasks that once consumed a full-time employee’s time: drafting, research support, code review, documentation, and preliminary analysis. Real tools, embedded in real workflows, with measurable results.
Because of this, we will likely bring on fewer new employees going forward than we otherwise would have. I think that deserves to be said directly, without softening.
But that’s not the complete picture. New types of work are emerging. New skills are gaining value. New roles are being built around capabilities that simply didn’t exist three years ago. The honest answer is that this is complicated – and complexity calls for precise language, not a polished press release.
We also develop tools for clients that allow them to operate with leaner, more focused teams. So through the products we create, some positions will shift or disappear. I’m not comfortable portraying this as purely good news, and I’m equally uncomfortable calling it a catastrophe. Both framings sidestep the harder work of genuine thinking.
What honest AI adoption looks like in practice
When we began integrating AI into our own teams, we didn’t lead with talk of transformation. We started with a question: where are people spending time on work that doesn’t actually require their judgment? That was our starting point – not because it was exciting, but because it was measurable. We could compare before and after. We could identify where a tool added value and where a human still needed to intervene.
That specificity is what I look for now when business leaders approach us for guidance.
Not “we want to be AI-driven.” The more useful question is: what specific decision or workflow do you want to improve, and how will you know whether it worked? That question distinguishes implementations that build value over time from those that stall after the initial pilot. The Klarna example is instructive: their AI assistant produced genuine savings, but when customer satisfaction fell, they ended up rehiring much of the workforce they’d reduced. The narrative ran ahead of the technology, and the market responded accordingly.
In practice, moving forward thoughtfully requires three things.
Be specific about where AI is actually being used. Not “our platform is AI-enabled” – but which component, doing what, replacing or supporting which process. Teams that understand exactly what the technology is doing are better equipped to use it effectively, identify failures early, and scale responsibly.
Be transparent about the human layer that remains. In most implementations we build and use internally, there’s still a person reviewing outputs, managing exceptions, and making judgment calls. That’s not a shortcoming of the AI. It’s sound system design. The organizations generating real value today are those who’ve identified the right handoff points between humans and AI – not those who’ve tried to remove people from the equation entirely.
Measure what genuinely changes. Not sentiment or adoption figures – but the actual outcome you set out to improve. Faster turnaround? Fewer revision cycles? A smaller team needed to run a specific function? Choose one metric and track it. That evidence builds internal trust and justifies the next investment.
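To make the before/after comparison concrete, here is a minimal sketch of tracking a single metric across an AI rollout. The workflow name and all figures are hypothetical, invented for illustration – not Boldare’s actual measurements:

```python
from statistics import mean

# Hypothetical turnaround times (hours per ticket) for one specific
# workflow, sampled before and after adding an AI-assisted step.
# All numbers are illustrative, not real data.
before = [12.0, 9.5, 14.0, 11.0, 13.5]
after = [7.0, 8.5, 6.0, 9.0, 7.5]

def pct_change(baseline: list[float], current: list[float]) -> float:
    """Relative change of the mean, in percent (negative = faster)."""
    return 100.0 * (mean(current) - mean(baseline)) / mean(baseline)

print(f"mean before: {mean(before):.1f} h")   # 12.0 h
print(f"mean after:  {mean(after):.1f} h")    # 7.6 h
print(f"change:      {pct_change(before, after):+.1f}%")  # -36.7%
```

The point is not the arithmetic but the discipline: one workflow, one metric, measured the same way before and after, so the result can justify (or veto) the next investment.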
Where I land
I’m genuinely skeptical of AI washing – not as a performance, but because I’ve witnessed what inflated claims do to trust, both internally and with clients. Once that trust erodes, it becomes much harder to build the organizational appetite for the real work ahead.
But I’m equally unwilling to dismiss the transformation that’s underway. The labor market is shifting. The skills that matter are shifting. The tools woven into daily work are shifting. This is happening regardless of press releases and earnings calls.
The leaders I find most credible are the ones who can tell you precisely where AI works in their organization and where it doesn’t – without hedging in either direction. That’s the standard I hold myself to, and it’s the standard worth holding each other to as well.