Building the MVP iteratively – a practical example

Some time ago, I discussed why an MVP should be built iteratively. In this post, I’ll illustrate the benefits of such an approach with a concrete, step-by-step example.

A product vision

I’m doing a lot of quick research on various Agile-related topics, and the results I google are often not as relevant as I’d like.

I’m interested in articles only, not workshop registration pages, book reviews, consultancy, conferences or tools. I prefer a certain article length: long enough to contain in-depth information, but short enough to be read in about 10-15 minutes. I want articles that are up to date and focused on a single narrow topic rather than a general, broad one, and there are big bonus points for articles from established authors or organizations, or ones acknowledged by the Agile community.

So let’s play with the following product idea: a specialized web search engine for finding high quality Agile articles.

A feature list brainstorming

It’s typical for a list of must-have and nice-to-have features for a new product to grow almost infinitely. What can we think of for our search engine?

  • A search mechanism. It should be able to find and rank articles by various criteria (length, focus on a single topic, acknowledged source etc.).
  • A search form. It should have autocomplete and provide advanced search options (length, age, contains images etc.).
  • A list of search results. It should contain both basic and additional information (title, excerpt, URL, date, author, popularity, relevancy, image thumbnail, reading time etc.), it should be sortable by various criteria (date, relevancy, popularity etc.) and it should be paginated.
  • Buttons for sharing articles via social media.
  • Ability to “star” and bookmark articles. A user should be able to see a list of his starred and bookmarked articles (requires user registration and login mechanism). Articles starred by many users should be ranked higher in search results. Articles bookmarked by a given user should be excluded from his search results.
  • A box with promoted (paid) search results.
  • A box with curated search results (staff favorites).
  • A header with branding elements (logo, tagline etc.), a link to a page with the user’s starred articles and cross-links to our other products and services.
  • A footer with links to additional pages (about, contact etc.).
  • A sidebar with contextual ads.
  • A paid version with no ads (requires various forms of payment: bank transfer, credit card etc.).

I could go on and on (and stakeholders on a real project typically do), but let’s cut the list at this point. It’s already way too long. Let’s now consider how we can validate whether our product concept is compelling to real users at all.

A truly minimal approach

The ideal approach is to implement none of the above features at all. We could start with a Kickstarter campaign or with a concierge approach, e.g. sending an email with a hand-prepared list of articles to a very small group of users. But let’s assume that we’re already past this step and want to build the first, minimal version of the actual product. How do we approach it?

A non-iterative MVP

First, let’s see how we could guess the feature set for an MVP if we didn’t plan to iterate. The question we could ask ourselves is: What minimal combination of features can validate our vision and be perceived by end users as viable?

This question quickly cuts off all the non-core features like paid and curated results, starring, bookmarking, sharing and so on. So what’s still on the list? And in what order should we implement these features?

Because we won’t build our MVP iteratively – we plan to assess and release the whole product all at once when it’s done – the ordering of features is mostly irrelevant. The easiest and most intuitive order seems to be the “natural” one: first a general page layout, then its contents in the order the user sees and uses them.

So let’s see what the feature set, and the order in which we’ll build these features, could look like:

A header

We want users to start recognizing our brand, so we’ll implement a very basic header with a logo and tagline. Following the “natural” order of features, a general page layout with a header is the first thing we’ll implement.

A search form

The most obvious feature that must be in the MVP is a search form. To be able to search for a given phrase we must be able to provide this phrase somehow. At first, the simplest possible form with a single input field will suffice. Following the “natural” order, this will be the next feature to implement, as it is the first feature a customer uses on our site.

A search mechanism

Submitting a search form leads to performing a search, so the search mechanism is intuitively the next feature to implement. But how should it work in the first, minimal version? A full-blown search engine, crawling the whole Internet and ranking articles based on many fancy criteria, is definitely too complex and too expensive just to validate our concept. On the other hand, if the search engine returns too few or low-quality results, it won’t really be viable and will discourage users.

So where’s the sweet spot? Until we implement it and see the results, it’s hard to guess. On one hand, we could crawl just a few hand-picked, popular Agile sites. This would guarantee high article quality, but at the risk of returning too few results. On the other hand, we could crawl the whole Internet, but with a very simple algorithm – just a keyword and article length match. This would guarantee a big enough result set, but at the risk of low article quality.

To be safe, we choose something in between: crawling the whole Internet, with a slightly more complex, contextual search based on the article content, but without ranking by popularity, source relevancy, focus on a single topic etc. This solution is not as cheap as we’d like, but at least there is a satisfactory chance it’d be viable.
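
To make this more concrete, here is a minimal sketch of what such an in-between search could look like: plain term matching against the article text plus a reading-time filter, with no popularity or source ranking. All names and thresholds below (including the assumed reading speed of 200 words per minute) are illustrative guesses, not the actual implementation.

    def reading_time_minutes(text, words_per_minute=200):
        # Assumes an average reading speed of ~200 words per minute.
        return len(text.split()) / words_per_minute

    def relevance(article, query):
        # Simple contextual relevance: how often the query terms occur in the body.
        words = article["text"].lower().split()
        return sum(words.count(term) for term in query.lower().split())

    def search(articles, query, min_minutes=5, max_minutes=15):
        # 'articles' is assumed to be a list of dicts with "title", "url" and "text".
        readable = [a for a in articles
                    if min_minutes <= reading_time_minutes(a["text"]) <= max_minutes]
        ranked = sorted(readable, key=lambda a: relevance(a, query), reverse=True)
        return [a for a in ranked if relevance(a, query) > 0]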

A list of search results

This is another feature that surely must be in the MVP. As the search results are generated at the end of the process (first a user provides a search phrase, then a search is executed, then results are displayed to the user), following the “natural” order this will be the last feature we’ll implement.

The question is, how should it be built? At first, we may display only the most basic article data: title, excerpt and URL, as they are the easiest and cheapest to extract. We’ll also drop the sorting options for now and always sort results by relevancy.

It also seems intuitive that we should limit the number of results displayed on a single screen, and implement some kind of pagination or infinite scrolling. For starters, we decide on pagination, as it seems simpler to implement. We guess the optimal number of items displayed on a single screen to be 10 (we want to display excerpts, so items will probably be relatively big – thus the low number of results per screen).
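
For reference, the pagination we have in mind is nothing more than slicing the result list into fixed-size chunks; a quick sketch with our guessed page size of 10 (the number itself being exactly that, a guess) could look like this:

    PER_PAGE = 10  # our guess, to be validated later

    def page_of(results, page, per_page=PER_PAGE):
        # Pages are numbered from 1; returns the slice shown on that screen.
        start = (page - 1) * per_page
        return results[start:start + per_page]

    def page_count(results, per_page=PER_PAGE):
        # Ceiling division: e.g. 25 results with 10 per page -> 3 pages.
        return -(-len(results) // per_page)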

Having completed all the features, we deploy our MVP to production to finally see if our guesses were right.

An iterative MVP

Now let’s see how an iterative approach could impact the way we build a product.

First of all, it changes the question we ask ourselves. There’s no longer any point in asking (guessing) what feature set will be viable, as we’ll assess this every iteration. The question we’ll be asking now is: What’s the most valuable feature to build next? We’ll build this feature and evaluate if the product is already viable; if it’s not viable yet, we’ll ask again what’s the most valuable feature right now; if it is viable, we have our MVP and we can release it.

This has a big impact on the order in which we build features. The “natural” order is no longer suitable, as it doesn’t support building the most valuable feature first. So let’s see what the features and their order could be when we focus on the value:

A list of search results

The most important feature is, hands down, a list of search results. The results are what a user visits a search engine for.

It may seem counterintuitive to build a list of results first. Such a list is definitely not viable without a search form to enter a search phrase; it may even seem unreviewable without one. However, this is not true. We may generate search results e.g. by manually manipulating parameters in the URL. Such a solution is of course not viable for delivery to real users, but it is perfectly fine for review by our stakeholders or for usability tests.

When iterating, we don’t have to implement all the elements that make the result list viable from the start. So at first we don’t need pagination, and we don’t have to guess the correct number of items per page – we may just display a single page with an arbitrarily chosen number of items.

To see how the list looks, we don’t even need to connect a real search mechanism – we may test the list layout with a dummy mechanism returning a fixed list of articles.
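
As a rough illustration, such a review-only setup could be as small as a single web view that ignores any real searching and just renders fixture data, with stakeholders tweaking the URL parameters by hand. Flask, the route, the results.html template and the fixture articles below are assumptions made for the sake of the sketch, not the actual stack:

    from flask import Flask, request, render_template

    app = Flask(__name__)

    # Fixed fixture data, just enough to review the layout of the result list.
    DUMMY_RESULTS = [
        {"title": "Sample article on retrospectives", "url": "https://example.com/retros",
         "excerpt": "A placeholder excerpt used only for layout review..."},
        {"title": "Sample article on story splitting", "url": "https://example.com/splitting",
         "excerpt": "Another placeholder excerpt..."},
    ]

    @app.route("/search")
    def search_results():
        # Reviewers "search" by editing the URL by hand, e.g. /search?count=1
        count = int(request.args.get("count", len(DUMMY_RESULTS)))
        return render_template("results.html", results=DUMMY_RESULTS[:count])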

A list of search results – feedback after review

We review our result list and get interesting feedback: excerpts don’t provide enough information to be useful. Therefore, we remove them and leave only the titles. Seeing the final layout of the result list with titles only, we can easily decide on the optimal number of results per page: 50, instead of our initial guess of 10.

For now, we still don’t build pagination, because it’s related not only to the result list, but also to the search mechanism, which is not yet implemented.

A search mechanism

What’s the next most valuable feature? We have our initial search results list in place, so now it’s time to return real search results. When working non-iteratively, we were afraid of returning too few results, so we decided to make the search broader, just in case. When working iteratively, we can broaden the search scope gradually, so it’s not a problem to start with a very narrow one – we’ll be crawling only a single, popular Agile blog.

An added benefit of searching only a known, popular site is that it’ll automatically guarantee high quality of articles, so we don’t have to implement a fancy algorithm to calculate relevancy.
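
A first cut of that narrow crawl could be as simple as walking one blog’s feed and storing the plain text of each article for the search index. The feed URL and the libraries used below (feedparser, requests, BeautifulSoup) are assumptions made for illustration:

    import feedparser
    import requests
    from bs4 import BeautifulSoup

    BLOG_FEEDS = [
        "https://example-agile-blog.com/feed/",  # a single hand-picked blog to start with;
        # more feeds get appended here in later iterations
    ]

    def crawl():
        index = []
        for feed_url in BLOG_FEEDS:
            for entry in feedparser.parse(feed_url).entries:
                html = requests.get(entry.link, timeout=10).text
                text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
                index.append({"title": entry.title, "url": entry.link, "text": text})
        return index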

A search mechanism – feedback after review

The review clearly shows that, for most search phrases, crawling only a single blog returns too few results. We improve this iteratively: adding a second hand-picked blog, reviewing, adding a third blog, reviewing, and so on. After a couple of such quick iterations, we settle on 12 top Agile blogs. This guarantees enough results and is still far simpler and cheaper to implement than crawling the whole Internet.

An interesting side effect of all these reviews is the observation that the most interesting and relevant articles are typically the first 20-30 ones. Because our result list layout accommodates up to 50 articles on a single screen, it means that for now we don’t have to implement pagination at all – we can safely cut the result list at 50 items, even if it is longer, without much harm to the quality of the results.

A search form

The next most valuable feature at this point is a search form. We already have a working search, now it’s time to provide a way for customers to use it.

Our initial guess about the search form was already quite basic, so there’s no room to simplify anything here. We just implement a single-input form for providing a search phrase.
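
For completeness, a sketch of that single-input form, served from the same kind of tiny web app assumed earlier (the markup and route names are, again, purely illustrative):

    from flask import Flask

    app = Flask(__name__)

    # The simplest possible search form: one text field submitting the phrase
    # to the /search route. All names here are assumptions, not a real design.
    SEARCH_FORM = """
    <form action="/search" method="get">
      <input type="text" name="q" placeholder="Search Agile articles...">
      <button type="submit">Search</button>
    </form>
    """

    @app.route("/")
    def home():
        return SEARCH_FORM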

A search form – feedback after review

Although we’ve implemented the simplest search form possible, its review still reveals an interesting insight. One of the stakeholders noticed that the form looks a bit plain, and because it consists of only a single field, there’s a lot of space on the screen for visual decoration. Hence the idea to include the branding elements directly in the search form, instead of in a separate header.

With the header becoming obsolete, at this point we assess that we have all the features necessary for the first release – our MVP is complete! However, the review revealed one more interesting observation:

A footer

One of the more legalese-savvy stakeholders noticed during the review that we don’t have a privacy policy or terms of use anywhere – and they are legally required in some markets. As a result, we decided to implement a very simple footer with links to these two documents.

Now our MVP is really complete, and we deploy it to production.

Summary

As you can see, the iterative approach resulted in a more minimal feature set, let us fine-tune our search algorithm better, gave us more confidence that the product is really viable, and even let us discover an important feature we would have missed if we were just guessing.

Of course, this is an artificial example, but similar positive effects of iterative development also happen on real projects – quite often on a much bigger scale!

What are your experiences with iterative MVP development, or iterative development in general? What benefits did you notice? Please drop a comment and share your thoughts!
