One of the most important things I learned while working with startups is that choosing the right feature set can be difficult. This is especially true when you don't have a complete picture of your 'ideal' customer. Even established companies struggle to prevent the dreaded feature creep. Defining what is necessary for the launch of a product or feature is extremely hard, and without something like Minimum Viable Product, you're left lost in a sea of potential requirements.
For those not familiar with the MVP framework, it's a simplified way of thinking about the first version of a product (or feature). It was made famous by Lean Startup gurus Steve Blank and Eric Ries. The fundamental idea is that startups have a hard time defining what should go into the first version of a product. Too many entrepreneurs struggle with wanting to build the perfect product right out of the gate. This delays launches, jeopardizes go-to-market strategies, and, more often than not, leads to products that are way off base from what customers want. Minimum Viable Product ties development and design back to the iterative testing methodology that I'm so keen on. You push out an initial version of your product and gather extensive customer feedback. Then you iterate on that product until you gather momentum and market share.
What has always baffled me about the MVP framework is that it is traditionally limited to just startups. What about this iterative approach lends itself solely to entrepreneurial ventures? That's why I've started using this methodology to spec out larger feature requests in our Development pipeline. Outlining the minimum criteria for success was something that we had never done before within my organization. What we ended up finding was a fantastic way to view the large projects we seek to undertake. This is what I've learned so far.
Why we started approaching feature requests through an MVP lens
One of the reasons that this framework works so well for my company is that we are all familiar with the Lean Startup methodology. That allowed us to hit the ground running, which was great as our backlog is ever-growing. We began this process by outlining the larger epics that fit into our roadmap. What were the 3-6 month tasks that had a) no previously outlined minimum criteria for success and b) a spot on our roadmap for 2018 and early 2019? By identifying these epics, we found that we had four to six major undertakings that could be handled in the next six months. However, we had no direction on which one should take priority.
What I initially discovered was that many of these feature requests were poorly outlined, and there were a number of assumptions that we had left untested. We could most likely take on one of these epics and land on something that some or most of our customers would like. It would certainly be better than what was present in the platform at the time we started. However, we couldn't be certain which solution would get us the most impact for the least amount of effort. Nor could we be sure how much work any of these epics would end up being, as the definitions were still a bit squishy. So we decided to have some initial meetings around one of these feature requests: a potential complete revamp of our reporting engine.
I spent two weeks conducting customer interviews with our current user base to see how they utilize our reports. I found a lot of amazing pieces of information about what people did and did not like about our current reporting capabilities. I explored extended use cases for reports with other departments that aren't a part of our core user group. I also spoke to frustrated customers who further validated that we had a major issue on our hands. So the next thing I did was schedule a meeting with our leadership team to discuss my findings and talk about a solution. That was a mistake.
The initial meeting was unproductive, to say the least. There was no clear direction on what exactly would be included in the scope of the project. Were we talking about a full rewrite or just a tune-up of the underlying structure? Could we add new features to the existing reports structure to solve the problems we wanted to address? Was it just better to completely scrap reports and start over? What about the extended use cases and features for power users? Were those even possible using the current engine? I remember leaving that meeting with no real direction, feeling like I had wasted several days' worth of my time.
How we started using the MVP framework
That is when I decided to begin formulating a plan for thinking about this project. How could I get a small group of people in a room and accurately discuss the scope of a reports revamp? I wanted a meeting where we wouldn't need to debate what was currently possible in our existing product, one in which I could simply define the what without having to discuss the how. I chose to fall back on the MVP framework since the scope of the project could be massive, and there were still a lot of things we didn't know. The way that I achieved this was simple to begin with.
We started with three columns outlined on a whiteboard: Needs, Questions, and Wants. These were the only places that features from this meeting could fit into. Either we knew that we needed them to be a part of an MVP, or we knew that we didn't. Anything that was a lingering question which kept us from moving forward went into the middle under Questions. This was a living document (extremely important), and our first meeting was just to get items outlined initially.
We started by listing out every feature that we could possibly want within this epic. Everything was placed on Post-It notes and discussed with the group. I tried to steer people away from assigning importance to their features at first: just describe the feature in as much detail as possible. We weren't trying to place anything in a set spot just yet, just define the potential items for the project. We broke larger features into smaller requirements and discussed additional features that could arise if we undertook some of the necessary portions of this epic. Any items that took more than a few minutes to discuss were put into the Questions column. That way, customer feedback could drive those decisions, rather than what we assumed a customer would want.
Once we felt that we had captured the breadth of the requests, we began placing the Post-Its in the appropriate columns. Would this task be necessary for a first pass at the reporting revamp? If so, then why? We tried to focus on only what was truly important to have in a report within our software. Any other items could be addressed at a later time, even if the first pass didn't initially address every major issue our customers were facing. There were a lot of features outlined after this first pass, but only a handful ended up in the Needs column. Everything else was listed as a lingering Question or placed under Wants.
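If it helps to make the triage mechanics concrete, the board above can be modeled as three simple lists. This is just an illustrative sketch: the column names come from the process described here, but the example feature items and the helper functions are hypothetical, not features from our actual backlog.

```python
# A minimal sketch of the Needs / Questions / Wants board.
# Column names come from the process above; the items and helpers
# are hypothetical illustrations.

board = {"Needs": [], "Questions": [], "Wants": []}

def add_item(board, column, feature):
    """Place a feature card in one of the three columns."""
    if column not in board:
        raise ValueError(f"Unknown column: {column}")
    board[column].append(feature)

def move_item(board, feature, source, target):
    """Re-triage a card, e.g. once customer feedback answers a Question."""
    board[source].remove(feature)
    board[target].append(feature)

# First pass: capture every candidate feature, then triage.
add_item(board, "Needs", "Export report as CSV")
add_item(board, "Questions", "Do customers need scheduled email reports?")
add_item(board, "Wants", "Custom chart color themes")

# Later, customer interviews answer the open question,
# so the card moves out of Questions.
move_item(board, "Do customers need scheduled email reports?",
          "Questions", "Wants")
```

The point of the living-document rule is visible here: cards never get deleted, they only move between columns as assumptions get validated or invalidated.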
How we defined Needs and Wants
It's important to note that within this framework, the MVP that the Development team builds doesn't have to be devoid of the items within the Wants column. The distinction here is to outline what is the minimum criteria for a first pass at the product. If it ONLY did these things, it would be a great first step in the right direction. Anything in the Wants column is still fair game for the Development team to tackle during the first buildout. However, they don't have to feel obligated to do so. There were several benchmarks that were outlined in the Wants column which could feasibly be met in the first version of the feature. If they weren't though, that was OK.
The way we went about prioritizing these requirements for the epic was to come to an understanding early on as to what we wanted to achieve. We weren't looking to completely determine what was going to be a part of the feature in the long term. We just wanted to agree on what would get us the most use within a 1-2 month Sprint. Outside of that, I would test the first version extensively with customers and gather much-needed feedback. Were our choices off base? What other features and requirements did this bring up with customers now that the reporting capabilities were more robust? These were the questions we couldn't answer yet, and why we needed an initial version of the product to test out.
It took me entirely too long to accurately convey this to my team, but once I did, it took a lot of the pressure off of things. We weren't setting anything in stone as to what the product would be over the next year, just what we wanted out of the first version. That helped people feel more comfortable setting items off to the side as Wants instead of Needs. People are often heavily invested in getting certain features into a product, and this framework helps bring those features to the surface and give them a place. It's essential to make sure that everyone is heard. It's also important that everyone understands that just because a feature is placed in the Wants column doesn't mean that it's unimportant.
How has the Minimum Viable Product mindset helped us create better product requirements?
Overall, the process of drafting and outlining these requirements has helped us immensely with creating strong product requirements. I have now converted an entire conference room wall over to outlining the major epics that we could possibly undertake in the future. As we begin discussions about an MVP, we can easily see all of our projects in one place. This helps us notice interconnectivity more easily, and it allows us to add follow-on Wants and Needs as time allows.
I can't say that our process is ironclad just yet, but the results have been extremely positive. We have outlined three of our epics so far and have identified two hidden projects that could arise from the changes we make in these epics. It has also given me a list of items to validate with customers as I discuss features with them throughout the week. I frequently find myself drifting to our product wall when someone says something that sparks a question in my mind. This has given me a clear picture of the assumptions that we have no validation for, allowing me to more effectively ask probing questions around these items.
My approach to integrating the MVP methodology into our product discussions is nowhere near perfect yet. However, it has saved us countless Dev Team meetings around what we could build, and has since allowed us to intelligently discuss what we will build. Hopefully the user testing that comes out of these first two epics will allow us to refine this process even more. If you are doing something similar with your team, I'd love to hear about it. Or if you try this out yourself, let me know!