The Minimum Viable Model Framework: A 5-Step Guide for Successfully Building AI Products

By Bruno Dagnino / 2024-11-25

Introduction

As an AI entrepreneur and product builder with over a decade of experience, I've seen many triumphs and pitfalls firsthand. From computer vision to natural language processing, and from geospatial data to generative AI, I've learned what works and what doesn't. One overarching theme behind many failed efforts is overcomplication. It's easy to get caught up in building complex tech, only to find that your AI product never sees the light of day, is significantly delayed, or fails to deliver real value to users.

If you've been burned by this, you know the pain I'm talking about. And if you haven't, wouldn't you like to avoid it? In this post, I'll introduce the concept of a Minimum Viable Model (MVM) and share a simple 5-step framework to help you efficiently develop and deploy your first AI product version, so you can iterate and improve from there. By following this approach, you can save time and resources and avoid common pitfalls in AI product development.

The Minimum Viable Model (MVM) Concept

In the same spirit as a Minimum Viable Product (MVP), a Minimum Viable Model (MVM) is all about releasing fast, learning, and iterating. An MVM is the simplest version of an AI model that can be deployed to start gathering real-world data and user feedback. The goal is to start simple, test assumptions, and iterate based on feedback, rather than trying to build a perfect model from the outset.

The 5-Step Framework for Building an MVM

  1. Develop your intuition and test it
    • Gain a sense of what's possible and the effort required. This is not about scoping out every possible approach or reading all the literature on the topic. It's simply about being able to answer the question: "Can AI do this, yes or no? And if yes, how big an effort will it take to get it working: small, medium, or large?"
    • If you're new to the area, start by watching YouTube videos, reading posts, taking courses, or getting advice from a consultant. You don't need to read 100 papers to build your intuition!
    • Test and experiment to get feedback on your intuition and gather evidence. Create simple experiments or prototypes to validate your assumptions and determine the feasibility of your concept. You might not even need to write any code for this. For example, if you want to test data extraction with an LLM, you can start by trying it out in any of the available chat interfaces.
  2. Build and evaluate a v0
    • Get to your "first data point in the plot." It's not about nailing performance from the get-go, but about developing a framework to systematically measure how good your solution is.
    • Start simple. Put together a simple Eval or Test Dataset, the necessary code to run a first version of the model on them, and a simple metric that gives a high-level sense of the solution's performance.
    • Once you have these, you can put your first data point on the plot. On the Y-axis, plot performance; on the X-axis, plot the different versions/iterations. After this step, you'll have only one version, but as you iterate over time, you can add new versions and get a sense of whether you're improving and by how much.
  3. Iterate or deploy
    • If the first version's performance is good enough, go ahead and deploy it. Most likely, though, it won't be, and you'll need to iterate until you get there. This is where the framework we built in step #2 comes in handy.
    • Avoid overthinking what "good enough" is. Aim for a general range and try it out; for most solutions, if it feels close, you're ready to deploy.
  4. Incorporate real-world use cases
    • One of the best things about deploying an MVM early is that you can feed the system with the data types and use cases your users are testing. Even when things fail, you can fold that feedback and those use cases into your eval or test datasets to improve your models. Just as you can get user feedback from a scrappy MVP, you can get "data feedback" from an MVM.
    • To do this, you need to monitor your deployment. Set up minimum monitoring, whether manual or automated, to access the data you need to iterate.
  5. Know when to stop
    • Unlike with products, where there are always more features to add, with models there may come a moment when the right call is to stop iterating and trying to improve the model. That moment depends on the size of your company, the stage of product development you're in, and whether you have the data you need. Before every new model iteration, always double-check: is another iteration the right thing to do?
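To make steps #2 and #4 concrete, here is a minimal sketch of the eval loop in Python. Everything in it is a hypothetical placeholder, not part of the framework itself: the invoice-total extraction task, the `extract_total_v0` regex heuristic, and the example data are all made up to show the shape of the idea: a small eval dataset, one high-level metric, one score per version, and a failed real-world case fed back into the eval set.

```python
# Step 2 sketch: a tiny eval dataset, a v0 model, one metric, and a
# per-version score log -- your "first data point in the plot".
import re

# Hypothetical eval dataset: (input, expected output) pairs. Start small.
EVAL_SET = [
    ("Invoice total: $120.50", "120.50"),
    ("Amount due is $89.99 by Friday", "89.99"),
    ("Total: 1,050.00 USD", "1050.00"),
]

def extract_total_v0(text: str) -> str:
    """v0 model: a naive regex for the first dollar-like amount."""
    match = re.search(r"\$?([\d,]+\.\d{2})", text)
    return match.group(1).replace(",", "") if match else ""

def accuracy(model, eval_set) -> float:
    """Single high-level metric: exact-match accuracy over the eval set."""
    hits = sum(model(x) == y for x, y in eval_set)
    return hits / len(eval_set)

# One data point per version; over time, plot performance (Y) vs. version (X).
scores = {"v0": accuracy(extract_total_v0, EVAL_SET)}

# Step 4 sketch: a deployed model fails on a real-world input, so that case
# goes into the eval set and the model is re-measured against it.
EVAL_SET.append(("Total due: 99 dollars", "99.00"))  # hypothetical failure
scores["v0+feedback"] = accuracy(extract_total_v0, EVAL_SET)
print(scores)
```

The design choice worth copying is not the regex but the separation: the eval set and metric stay fixed while model versions change, so every new version lands as a comparable point on the same plot.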

Conclusion

Building successful AI products doesn't have to be complicated. By adopting the Minimum Viable Model approach and adhering to this simple 5-step framework, you can efficiently develop and deploy your AI product while minimizing the risk of overcomplicating things. Remember, the key is to keep things simple at every stage until you achieve initial results, then iterate from there. In upcoming posts, we'll dive deeper into each of these 5 steps, providing you with the insights and tools needed to build AI products that deliver real value to your users. Stay tuned for more on how to develop intuition, build and evaluate a v0, iterate or deploy, incorporate real-world use cases, and know when to stop. With the MVM approach as your guide, you'll be well on your way to creating AI products that drive results.


Work with us

At Limai Consulting we are starting our consulting journey. As such, over the next month we will be offering free consulting calls to help advise companies on how to approach their AI product development and strategy. If you are curious how the Minimum Viable Model framework applies to you, or if, more generally, you'd like to work with us or learn more, please book a free call to chat about it!