From idea to success - testing strategies in customer-centric product development

We’ve all had a eureka moment, an idea that – in the moment, at least – seems like it could be a truly great product. But the reality is, not all ideas are worth implementing. In fact, it’s very easy to create a product that nobody wants. How can you avoid this particular disaster scenario? At our recent event, “Finding Product-Market Fit: Berlin,” Boldare’s product designer, Kateryna Kaida, provided the answer to this exact question. The key lies in smart experimentation.

First, a reality check

However awesome that initial product idea may be, let’s remind ourselves of some hard facts:

  • 90% of startups fail (Source: Startup Genome)
  • 35% of startup failures occur due to a lack of market need (Source: CB Insights)
  • 80% of features in the average software product are rarely or never used (Source: Pendo)

Faced with these statistics, it’s tempting to give up before we even begin! But why does this happen? It’s because we make assumptions.

Product assumptions

When we have a great idea and consider building a digital product, we tend to make assumptions. We assume that there is a need for the product, that we will be able to build it successfully, and that people will want and use it.

While it’s normal to make assumptions, it’s important to acknowledge that assumptions carry risk.

There is a possibility that our assumptions may not be true, resulting in the development of an unsuccessful product.

For new products, risks tend to fall into one of four categories:

  • Desirability - does the market really want it?
  • Feasibility - can we build it?
  • Viability - should we build it, i.e. can we build a business around it?
  • Usability - can users figure out how to use it?

We mitigate these risks by carrying out experiments to test our assumptions.

An example assumption:

People will be willing to publicly post pictures of their private spaces, such as bedrooms and bathrooms, inviting complete strangers to sleep in their homes.

This is one of the initial assumptions that lay behind the Airbnb platform. The main risk here is in the ‘desirability’ category: will people (i.e. the market) really want to do this?

Having set up the initial website in 2007, Airbnb’s founders had an early success on their hands; the next step was to achieve product-market fit. What they noticed was that most users (or ‘hosts’) were posting poor quality images of their accommodation – not very enticing.

They assumed that hosts with professional photos of their places would get more bookings. They then proceeded to test that hypothesis with an experiment. They contacted a number of hosts, offering to take professional-standard photos (for free). The result? Those hosts who used the better quality images received two or three times more bookings than those who did not.

In this example, they were using the concierge test methodology to check their assumption. In this kind of experiment, a service or additional value is delivered manually to a selected group of customers – here, professional photography for hosts – mimicking the intended experience to confirm its impact before investing significant resources in full-scale development.

Thanks to their experiment, the Airbnb founders had confirmed something useful about their product which would enhance the experience of those using it – the UX.

Practical tips for running product experiments

First, let’s look at what we want from an experiment: evidence, either that our assumption is correct, or that it isn’t. But not all evidence is equal.

Weak evidence:

  • Is based on opinions
  • Asks: what do people say?
  • Comes from a lab setting
  • Tends to require only a small investment

Whereas strong(er) evidence:

  • Is based on facts
  • Asks: what do people do?
  • Comes from a real-world setting
  • Tends to require a larger investment

Clearly, it is essential to design and conduct experiments that provide stronger evidence for our assumptions. However, experiments resulting in weak evidence are not necessarily bad or to be completely avoided. They are often faster and more cost-effective to execute, and they can provide valuable insights that guide future experiments.

In other words, a recommended strategy is to combine experiments, starting with those that yield weaker evidence to validate the experiment’s direction, and then progressing towards experiments that generate stronger evidence. For instance, beginning with user interviews, followed by search trends analysis, online ads, a simple landing page, an email campaign, and ultimately pre-sale and concierge testing.

Two more top tips for product assumption experiments and tests

Tip #1 is to visualize your assumptions - for clarity. After all, you need to know what they are before you can test them. Fortunately, there are many tools available to help you unearth possible product assumptions, including:

  • Assumption mapping
  • Value proposition canvas
  • Lean canvas
  • Business model canvas
  • Customer journey mapping

The key is to identify high-risk assumptions (those which are critical to the potential product’s success but for which you currently have little or no evidence) and focus on those.

Tip #2 is to set clear success criteria for each experiment. In other words, set out beforehand the metrics or results that will tell you your assumption is sufficiently proven to justify moving forward with product development.

Having success criteria for your experiment helps to:

  • Show the reality behind your assumption, enabling you to course-correct if necessary.
  • Objectively evaluate the results rather than relying on opinions and individual interpretations.
  • Define further action: i.e. do you persevere, pivot, or kill the idea?

What happens if your experiment fails to meet its success criteria? Ideally, you run further experiments in your sequence to confirm it’s a no-go; if it is, accept that your eureka moment was more aspirational than practical, and kill the idea.

Pitfalls when testing ideas

The problem is, we get attached to our ideas and that can lead us off track. The following are the three main pitfalls to be aware of when running product experiments:

  • DON’T prioritize experiments that we feel excited about – INSTEAD prioritize experiments to test the riskiest assumptions.
  • DON’T get too hung up on proper scientific methodology – INSTEAD generate just enough evidence to make a product decision. (Of course, this is not to say that scientific rigor is bad, just that in the product space, time and resources are also factors.)
  • DON’T fail to act upon experiment results – INSTEAD persevere, pivot or kill the idea based on the evidence you have gathered.

Product assumptions and experiments

That first idea for a new product or feature is always exciting. But “exciting” doesn’t always equate to good business sense, or to a product that users will want. So, we need to identify and test the assumptions that lie behind the excitement.

To summarize: begin with low-cost, rapid experiments that yield enough directional evidence to guide further tests, then move on to experiments that provide the strongest evidence possible within your constraints. The overall objective is to minimize uncertainty as much as possible before investing resources into implementing your idea and building the product.

That’s why we experiment.

You can also access the complete presentation through our recorded sessions on YouTube.