There is a version of "build fast" that kills startups. It's the version where a founder, energised by an idea and anxious to show progress, skips the uncomfortable questions, piles on features that feel important before any user has validated them, ships something that technically works, and six months later is sitting in front of investors with a product that real users won't pay for — built on a codebase so tangled that adding anything new costs three times what it should.

The startup graveyard is full of this version.

Then there's the version of "build fast" that works. It's not faster in the sense of cutting corners. It's faster in the sense of being ruthlessly focused — doing the right things in the right order, building only what needs to exist, and making architectural decisions that won't become obstacles the moment the product finds its first users and needs to grow.

This post is about that version. It's our actual process for taking a founder from a validated idea to a working product in six weeks — and why the speed comes from discipline, not shortcuts.

💡 The numbers context: 90% of startups fail. The leading cause — accounting for 42% of failures — is building a product for which there was no market need. The second leading cause is running out of money, usually because too much of it was spent building the wrong thing. An MVP done right is the most direct answer to both problems at once.

What an MVP Actually Means — and What It Doesn't

Before the process, a clarification — because "MVP" is one of the most misused terms in the startup world, and the misunderstanding has real consequences.

An MVP is not a rough, buggy version of your full product. It is not an excuse to ship something broken and call it intentional. It is not a prototype with the sharp edges left in. Those approaches produce poor first impressions that damage your brand with the early users who matter most, and they generate unreliable feedback because users are reacting to the roughness rather than the core value proposition.

An MVP is the smallest complete product that can validate your most important assumption with real users. It does one thing well — the specific thing that is the core of your value proposition — and does it in a way that feels deliberate and considered, even if many surrounding features are absent.

Studies show that roughly 64% of features in software products are rarely or never used. An MVP forces you to find the 20% of features that deliver 80% of the value — and to build only those. The constraint is the point.

The other thing an MVP is not: a throwaway. We build MVPs with the architecture to grow. Not over-engineered for a scale that doesn't exist yet, but not built in ways that will require a full rewrite the moment the product gets traction. Technical debt that's acceptable pre-revenue becomes a serious liability post-revenue. The architecture decisions made in week 2 of an MVP build have implications for the next two years of development.

Before Week One: The Conversation Most Teams Skip

Our six-week clock doesn't start when we open a code editor. It starts when we have enough clarity to build the right thing. Getting to that clarity takes focused work upfront — and it's the part of the process that most teams either skip entirely or compress into a 30-minute call.

We run a structured discovery session with every founder before scoping an MVP. The questions aren't about features. They're about assumptions.

What is the single most important thing this product needs to prove? Not "prove it works" — prove what, specifically? That users will pay for this? That they'll use it more than once? That it solves the problem better than the current alternative? The answer to this question determines what the MVP must include. Everything else is optional.

Who is the specific first user? Not "SMEs" or "marketing teams" — one specific type of person, in a specific context, with a specific problem. The more precise the target user, the more useful the feedback from the MVP. A product that tries to serve everyone in the first version serves no one well enough to learn from.

What does the user currently do instead? Understanding the existing behaviour — the spreadsheet, the manual process, the workaround — is essential for designing the MVP. The MVP doesn't need to be perfect; it needs to be better than what the user currently does. That bar is often more achievable than founders assume.

What would make this MVP a success? A specific, measurable definition — not "positive feedback" or "users like it." A defined number of sign-ups, a retention rate above a threshold, a conversion from free to paid, a specific piece of qualitative feedback that validates the core assumption. Without a success definition upfront, you can't know whether the MVP has done its job.

This conversation typically takes two to three hours. The output is a one-page brief: the core assumption being tested, the target user, the success definition, and the feature boundary — what's in the MVP and, equally important, what's explicitly not in it.

The Six-Week Process

Week 1 — Architecture and Design

With the brief in hand, week one is about making the decisions that everything else depends on. We don't start writing application code in week one. We make the decisions that prevent us from having to rewrite it in week six.

Tech stack selection. We select the stack based on the product's requirements, not on what's fashionable. The core criteria: a stack the team knows deeply (no learning curves mid-build), a stack that can scale beyond the MVP without a rewrite, and a stack with strong ecosystem support for the integrations the product will need. For most web products, a Python or Node.js backend with a relational database and a clean API layer covers 90% of requirements at MVP stage.

Architecture decisions. We design for what the product needs to be in 18 months, not just what it needs to do in week six. This doesn't mean building microservices and distributed systems for a product with zero users. It means making sensible choices — a clean separation between the API layer and the frontend, a database schema that accommodates the obvious next set of features, authentication that doesn't need to be rebuilt from scratch when you add teams or permissions. A modular monolith approach is usually right: build fast today, with the door open for complexity later.
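As a minimal sketch of what that separation can look like, here is a hypothetical Python example (the names, `User` model, and flow are illustrative, not a real codebase): domain logic lives in plain modules with no framework dependencies, and the API layer is a thin adapter on top that could be swapped without touching the rules underneath.

```python
# Illustrative sketch only: module boundaries for a modular monolith.
# The names here are hypothetical, not our actual codebase.

from dataclasses import dataclass, replace as dc_replace


# --- domain layer: no web framework imports allowed here ---

@dataclass(frozen=True)
class User:
    id: int
    email: str
    activated: bool = False


def activate_user(user: User) -> User:
    """Core business rule, testable without HTTP."""
    return dc_replace(user, activated=True)


# --- API layer: a thin adapter from request shape to domain call ---

def handle_activate(request: dict, store: dict) -> dict:
    user = store[request["user_id"]]
    store[request["user_id"]] = activate_user(user)
    return {"status": "ok", "user_id": user.id}


# Swapping the web framework later only touches the adapter layer.
store = {1: User(id=1, email="founder@example.com")}
print(handle_activate({"user_id": 1}, store))  # {'status': 'ok', 'user_id': 1}
```

The design choice this illustrates: because `activate_user` knows nothing about HTTP, it can be unit-tested directly, and the adapter can later be replaced by any framework without a rewrite of the domain logic.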

Design and user flows. We design the core user flows — the two or three paths a user will take through the product — before building them. This catches UX problems when they're cheap to fix (on a whiteboard or in a design file) rather than expensive to fix (after they've been built). The design for an MVP should be simple and clear, not minimal and confusing.

Weeks 2–4 — Core Build

Three weeks of focused development on the features defined in the brief. Nothing else.

The discipline required here is underestimated by most founders who haven't been through an MVP build before. Good ideas will surface during the build — features that seem obviously valuable, edge cases that would be elegant to handle, integrations that would be really useful. Every one of them goes into a backlog. None of them gets built during weeks 2–4.

Scope creep is the single most common cause of MVP delays. Each addition feels small. Collectively they push a 6-week build to 12, and a 12-week build to "we're still not ready." The brief from the discovery session is the contract. Features not in the brief do not get built.

What does get built is built properly:

  • Core user flows built end-to-end — not half-finished screens or placeholder logic. Each core flow works completely by the end of week 4
  • Automated tests for critical paths — not full test coverage, but tests for the paths that matter most. An MVP that breaks on the main user journey loses users permanently
  • Basic monitoring from day one — error logging, uptime monitoring, basic analytics. You can't improve what you can't measure
  • Deployment infrastructure set up in week 2, not week 6 — so every day of development ends with working code in a real environment, not a local machine
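A critical-path test can be very small. The Python sketch below is hypothetical: `sign_up` and `create_project` are stand-ins for a real product's core flow. The shape is what matters: one end-to-end test over the journey that must never break.

```python
# Hypothetical critical-path test. The two functions stand in for a
# real product's sign-up and first core action.

def sign_up(db: dict, email: str) -> str:
    if "@" not in email:
        raise ValueError("invalid email")
    db[email] = {"projects": []}
    return email


def create_project(db: dict, email: str, name: str) -> str:
    db[email]["projects"].append(name)
    return name


def test_core_flow():
    # One test, end-to-end, over the main user journey.
    db = {}
    user = sign_up(db, "founder@example.com")
    create_project(db, user, "first-project")
    assert db[user]["projects"] == ["first-project"]


test_core_flow()
print("core flow ok")
```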

Week 5 — Integration and Testing

Week five is where the product becomes whole. Individual features that were built and tested in isolation get connected, tested together, and subjected to the scenarios that real users will actually create — including the ones developers don't think of because they know too much about how the product works.

We run the product through structured testing against the core user flows defined in week one. We look for the friction points — places where the UI is unclear, where an error message is unhelpful, where a flow requires more steps than it should. These get fixed before launch, not after.

Performance testing happens this week too. Not load testing for thousands of concurrent users — the MVP won't have thousands of concurrent users. But basic checks: does the product load quickly? Are there any obviously slow queries? Does it behave well on a mobile connection? First impressions are made in seconds, and a slow product with a good core idea loses to a fast product with a good core idea every time.
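Those basic checks need very little tooling. A rough sketch of a timing guard in Python, assuming an illustrative 200ms budget and a stand-in `list_projects` query (both are examples, not fixed thresholds):

```python
# Hypothetical smoke check, not load testing: time each core operation
# against a simple budget. The 200ms figure is an assumed example.
import time

SLOW_BUDGET_MS = 200


def timed(fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    status = "SLOW" if elapsed_ms > SLOW_BUDGET_MS else "ok"
    print(f"{status}: {fn.__name__} took {elapsed_ms:.1f}ms")
    return result


def list_projects():
    # Stand-in for a real query; a slow one would be flagged above.
    return ["alpha", "beta"]


timed(list_projects)
```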

Week 6 — Launch Preparation and Handover

The last week is about making the product ready for real users, not just functional in a test environment.

Production infrastructure is reviewed and hardened. Backups are configured. Security basics are in place — authentication, input validation, dependency vulnerability scanning. For products handling personal data, data handling compliance is reviewed and the appropriate notices and controls are in place. This matters both ethically and practically: GDPR compliance is not optional for products targeting EU users, and getting it wrong at launch is significantly more damaging than taking the time to get it right.
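Input validation at MVP stage mostly means rejecting malformed data at the boundary, before it reaches business logic or the database. A minimal sketch, assuming example policies (a simple email shape check and a 12-character password minimum, both illustrative):

```python
# Hypothetical boundary validation. The specific rules are example
# policies, not recommendations for every product.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def validate_signup(payload: dict) -> dict:
    errors = {}
    email = payload.get("email", "").strip().lower()
    if not EMAIL_RE.match(email):
        errors["email"] = "invalid email address"
    password = payload.get("password", "")
    if len(password) < 12:
        errors["password"] = "must be at least 12 characters"
    if errors:
        # Reject before anything touches the database.
        raise ValueError(errors)
    return {"email": email, "password": password}


print(validate_signup({"email": " Founder@Example.com ", "password": "correct-horse-battery"}))
# {'email': 'founder@example.com', 'password': 'correct-horse-battery'}
```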

Documentation is written — not extensive internal documentation, but the essential guides: how to deploy the product, how to monitor it, how to diagnose the most common problems. Founders who receive a handover without documentation are one developer departure away from being unable to maintain their own product.

Analytics are verified to be tracking the metrics that map to the success definition from the discovery session. If the success metric is activation rate, the activation event is being tracked. If it's return visits, session tracking is in place. Data from the first users should be immediately usable, not require a retroactive instrumentation project after launch.
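One way to verify that mapping is to compute the success metric directly from the events being tracked. A hypothetical Python sketch, with illustrative event names (here, `created_first_project` plays the role of the activation event):

```python
# Hypothetical sketch: the success metric from discovery maps directly
# to tracked events. Event names are illustrative.
events = []


def track(user_id: str, name: str) -> None:
    events.append({"user": user_id, "event": name})


# Emitted explicitly inside the relevant flows during the build:
track("u1", "signed_up")
track("u1", "created_first_project")  # the activation event in this example
track("u2", "signed_up")


def activation_rate() -> float:
    signed_up = {e["user"] for e in events if e["event"] == "signed_up"}
    activated = {e["user"] for e in events if e["event"] == "created_first_project"}
    return len(activated) / len(signed_up) if signed_up else 0.0


print(activation_rate())  # 0.5
```

If this number can't be computed from the events already flowing at launch, the instrumentation is incomplete.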

The Technical Debt Question

Technical debt deserves its own section because the advice around it is frequently wrong in both directions.

One camp says: ignore technical debt in the MVP, move fast, clean it up later. The problem with this advice is that "later" rarely arrives. Post-launch, the pressure to add features is intense and the incentive to clean up working code is low. Debt that was acceptable in the MVP compounds — every feature added to a poor foundation is harder to build, creates more potential for bugs, and makes the next hire slower to onboard. Teams spend more time firefighting than building. The product that was supposed to be temporary becomes the foundation of the company.

The other camp says: build it properly from the start, don't compromise on code quality. The problem here is that "properly" often becomes "over-engineered for a scale that doesn't exist." Microservices for a 50-user MVP. Complex caching for a product that barely hits a database. Abstractions that add indirection without adding value. This slows the build, delays the learning, and burns runway before you know whether the product is worth scaling.

The right position: build simply but build well. Simple architecture that's clean and coherent is maintainable. Complex architecture that's clean and coherent is premature. Messy architecture of any complexity is a liability. The goal is code that a new developer can read and understand within a day — not code that required a senior engineer to write, but code that doesn't require one to maintain.

What Six Weeks Actually Requires

Six weeks from brief to working product is achievable for most MVPs — but it requires conditions that not every project has from the start.

A locked scope. The six-week timeline is based on a specific feature set. Scope changes mid-build extend the timeline proportionally. A founder who adds three features in week three doesn't have a 6-week MVP — they have a 9-week one.

Rapid decision-making. Builds stall when decisions about design, copy, integrations, or requirements take days to resolve. We set a 24-hour turnaround expectation on decision requests. Founders who are responsive move faster; founders who need a week to decide on button copy extend their own timelines.

A clear first user. The most efficient builds are ones where we can test features with a real target user during the build, not after it. A founder who has five potential early users willing to look at work-in-progress screens every fortnight gets better feedback, faster, than one who is building in isolation for six weeks and launching into the dark.

Trust in the process. The hardest part of an MVP build for most founders is restraint — resisting the impulse to add features that feel important before users have validated that they are. The founders who get the most out of a focused MVP build are the ones who genuinely commit to the scope, launch, learn, and then decide what to build next based on evidence rather than instinct.

⚠️ The feature trap: Studies show that around 64% of features in software products are rarely or never used. In an MVP, every feature you add is another week of development, another thing that can break, another source of user confusion, and another reason your launch date slips. The question to ask about every proposed feature is not "would this be useful?" Almost everything is useful to someone. The question is "does this feature test our core assumption?" If the answer is no, it goes in the backlog.

After Launch: The Part Most Posts Don't Cover

Launching the MVP is not the end of the process. It's the beginning of the useful part.

The first two weeks after launch are the most information-dense period in the product's life. Users who encounter the product for the first time, without your assumptions and context, will interact with it in ways you didn't anticipate. Some features you thought were important will go unused. Some things you built quickly will turn out to be the parts users care about most. The gap between what you assumed users would do and what they actually do is the most valuable data you have.

The success definition from the discovery session becomes the lens for evaluating this data. Not "do users seem to like it?" — that question is too vague to act on. "Are users completing the core activation flow?" "Are they returning after the first session?" "What are they doing immediately before they drop off?" These questions have answers, and those answers should directly shape what gets built next.

The MVP is not done when it launches. It's done when it has answered the question it was built to answer — and by then, you'll know exactly what to build next, because users will have shown you.

The Takeaway

Six weeks from a clear brief to a working product is achievable. But the speed isn't the point — the point is that six focused, disciplined weeks of building the right thing beats six months of building everything and hoping the right thing is in there somewhere.

The founders who build the most successful products aren't the ones who built the most features. They're the ones who stayed close to their users, validated their assumptions early, and made architecture decisions that let them move quickly as the product grew — rather than progressively slower as the technical debt accumulated.

If you're sitting on an idea that's ready to become a product, or you're evaluating whether your current MVP approach is going to get you where you need to go, that's a conversation worth having early.

Talk to us about building your MVP →