Why hypothesis-driven development is key to DevOps


Opensource.com

The definition of DevOps offered by Donovan Brown is "The union of people, process, and products to enable continuous delivery of value to our customers." It accentuates the importance of continuous delivery of value. Let's discuss how experimentation is at the heart of modern development practices.


Reflecting on the past

Before we get into hypothesis-driven development, let's quickly review how we deliver value using waterfall, agile, deployment rings, and feature flags.

In the days of waterfall, we had predictable and process-driven delivery. However, we only delivered value towards the end of the development lifecycle, often failing late as the solution drifted from the original requirements, or our killer features were outdated by the time we finally shipped.


Here, we have one release X and eight features, which are all deployed and exposed to the patiently waiting user. We are continuously delivering value—but with a typical release cadence of six months to two years, the value of the features declines as the world continues to move on. It worked well enough when there was time to plan and a lower expectation to react to more immediate needs.

The introduction of agile allowed us to create and respond to change so we could continuously deliver working software, sense, learn, and respond.


Now, we have three releases: X.1, X.2, and X.3. After the X.1 release, we improved feature 3 based on feedback and re-deployed it in release X.3. This is a simple example of delivering features more often, focused on working software, and responding to user feedback. We are on the path of continuous delivery, focused on our key stakeholders: our users.

Using deployment rings and/or feature flags, we can decouple release deployment and feature exposure, down to the individual user, to control the exposure—the blast radius—of features. We can conduct experiments; progressively expose, test, enable, and hide features; fine-tune releases; and continuously pivot on learnings and feedback.

When we add feature flags to the previous workflow, we can toggle features to be ON (enabled and exposed) or OFF (hidden).


Here, the feature flags for features 2, 4, and 8 are OFF, so the user is exposed to fewer features. Those three features have been deployed but are not (yet) exposed. We can fine-tune the features (value) of each release after deploying to production.
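A minimal sketch of this in Python (the flag store and feature names are illustrative; real systems use a flag service or config store):

```python
# Hypothetical in-memory feature-flag store: all eight features are
# deployed, but only the flags set to True are exposed to users.
FLAGS = {
    "feature_1": True,
    "feature_2": False,  # deployed but hidden
    "feature_3": True,
    "feature_4": False,  # deployed but hidden
    "feature_5": True,
    "feature_6": True,
    "feature_7": True,
    "feature_8": False,  # deployed but hidden
}

def exposed_features(flags):
    """Return the names of the features the user can actually see."""
    return [name for name, enabled in sorted(flags.items()) if enabled]

print(exposed_features(FLAGS))
# All eight features are in the release, but only five are exposed.
```

Flipping a single boolean changes exposure without a redeployment, which is the whole point of the decoupling.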

Ring-based deployment limits the impact (blast) on users while we gradually deploy and evaluate one or more features through observation. Rings allow us to deploy features progressively and have multiple releases (v1, v1.1, and v1.2) running in parallel.

Ring-based deployment

Exposing features in the canary and early-adopter rings enables us to evaluate features without the risk of an all-or-nothing big-bang deployment.

Feature flags decouple release deployment and feature exposure. You "flip the flag" to expose a new feature, perform an emergency rollback by resetting the flag, use rules to hide features, and allow users to toggle preview features.

Toggling feature flags on/off

When you combine deployment rings and feature flags, you can progressively deploy a release through rings and use feature flags to fine-tune the deployed release.
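One common way to combine the two is to bucket each user deterministically into a ring and switch flags on per ring. A sketch, with made-up ring sizes and feature names:

```python
import hashlib

RINGS = ["canary", "early-adopter", "broad"]

def ring_for(user_id: str) -> str:
    """Deterministically bucket a user into a ring (stable across sessions)."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    if bucket < 5:
        return "canary"         # first ~5% of users
    if bucket < 25:
        return "early-adopter"  # next ~20%
    return "broad"              # everyone else

# Per-feature rollout: the rings in which each flag is ON (names made up).
ROLLOUT = {
    "new-theme-picker": {"canary", "early-adopter"},
    "one-click-booking": {"canary"},
}

def is_exposed(feature: str, user_id: str) -> bool:
    """A feature is exposed only if the user's ring is in its rollout set."""
    return ring_for(user_id) in ROLLOUT.get(feature, set())
```

Hashing the user ID keeps ring membership stable, so a user does not flicker between variants across sessions.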

See Deploying new releases: Feature flags or rings, What's the cost of feature flags, and Breaking down walls between people, process, and products for discussions on feature flags, deployment rings, and related topics.

Adding hypothesis-driven development to the mix

Hypothesis-driven development is based on a series of experiments to validate or disprove a hypothesis in a complex problem domain where we have unknown-unknowns. We want to find viable ideas or fail fast. Instead of developing a monolithic solution and performing a big-bang release, we iterate through hypotheses, evaluating how features perform and, most importantly, how and if customers use them.

Template: We believe {customer/business segment} wants {product/feature/service} because {value proposition}.

Example: We believe that users want to be able to select different themes because it will result in improved user satisfaction. We expect 50% or more of users to select a non-default theme and to see a 5% increase in user engagement.

Every experiment must be based on a hypothesis, have a measurable conclusion, and contribute to feature and overall product learning. For each experiment, consider these steps:

  • Observe your user
  • Define a hypothesis and an experiment to assess the hypothesis
  • Define clear success criteria (e.g., a 5% increase in user engagement)
  • Run the experiment
  • Evaluate the results and either accept or reject the hypothesis
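The steps above can be sketched as a small helper that evaluates an experiment against its success criterion (the names and the 5% threshold mirror the theme example and are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str
    metric: str
    success_threshold: float  # e.g. 0.05 for "a 5% increase"

    def evaluate(self, baseline: float, observed: float) -> str:
        """Accept or reject the hypothesis against the success criterion."""
        if baseline == 0:
            raise ValueError("baseline must be non-zero")
        lift = (observed - baseline) / baseline
        return "accept" if lift >= self.success_threshold else "reject"

exp = Experiment(
    hypothesis="Users want selectable themes",
    metric="user engagement",
    success_threshold=0.05,  # the 5% increase from the template above
)
print(exp.evaluate(baseline=1000, observed=1080))  # 8% lift -> "accept"
```

Stating the threshold in the experiment record before running it keeps the evaluation honest: the success criterion cannot be moved after the results arrive.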

Let's have another look at our sample release with eight hypothetical features.


When we deploy each feature, we can observe user behavior and feedback, and prove or disprove the hypothesis that motivated the deployment. As you can see, the experiment fails for features 2 and 6, allowing us to fail-fast and remove them from the solution. We do not want to carry waste that is not delivering value or delighting our users! The experiment for feature 3 is inconclusive, so we adapt the feature, repeat the experiment, and perform A/B testing in Release X.2. Based on observations, we identify the variant feature 3.2 as the winner and re-deploy in release X.3. We only expose the features that passed the experiment and satisfy the users.

Hypothesis-driven development lights up progressive exposure

When we combine hypothesis-driven development with progressive exposure strategies, we can vertically slice our solution, incrementally delivering on our long-term vision. With each slice, we progressively expose experiments, enable features that delight our users and hide those that did not make the cut.

But there is more. When we embrace hypothesis-driven development, we can learn how technology works together, or not, and what our customers need and want. We also complement the test-driven development (TDD) principle: TDD encourages us to write the test first (hypothesis), then confirm our features are correct (experiment), and succeed or fail the test (evaluate). It is all about quality and delighting our users, as outlined in principles 1, 3, and 7 of the Agile Manifesto:

  • Our highest priority is to satisfy the customers through early and continuous delivery of value.
  • Deliver software often, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Working software is the primary measure of progress.

More importantly, we introduce a new mindset that breaks down the walls between development, business, and operations to view, design, develop, deliver, and observe our solution in an iterative series of experiments, adopting features based on scientific analysis, user behavior, and feedback in production. We can evolve our solutions in thin slices through observation and learning in production, a luxury that other engineering disciplines, such as aerospace or civil engineering, can only dream of.

The good news is that hypothesis-driven development supports the empirical process theory and its three pillars: Transparency, Inspection, and Adaptation.


But there is more. Based on lean principles, we must pivot or persevere after we measure and inspect the feedback. Using feature toggles in conjunction with hypothesis-driven development, we get the best of both worlds, as well as the ability to use A/B testing to make decisions on feedback, such as likes/dislikes and value/waste.

Hypothesis-driven development:

  • Is about a series of experiments to confirm or disprove a hypothesis. Identify value!
  • Delivers a measurable conclusion and enables continued learning.
  • Enables continuous feedback from the key stakeholder—the user—to understand the unknown-unknowns!
  • Enables us to understand the evolving landscape into which we progressively expose value.

Progressive exposure:

  • Is not an excuse to hide non-production-ready code. Always ship quality!
  • Is about deploying a release of features through rings in production. Limit blast radius!
  • Is about enabling or disabling features in production. Fine-tune release values!
  • Relies on circuit breakers to protect the infrastructure from implications of progressive exposure. Observe, sense, act!

What have you learned about progressive exposure strategies and hypothesis-driven development? We look forward to your candid feedback.


How to Implement Hypothesis-Driven Development

Think back to high school science class. Our teachers gave us a framework for learning: an experimental approach based on the best available evidence at hand. We were asked to make observations about the world around us, then attempt to form an explanation or hypothesis to explain what we had observed. We then tested this hypothesis by predicting an outcome, based on our theory, that would be achieved in a controlled experiment. If the outcome was achieved, we had evidence supporting our theory.

We could then apply this learning to inform and test other hypotheses by constructing more sophisticated experiments, and tuning, evolving or abandoning any hypothesis as we made further observations from the results we achieved.

Experimentation is the foundation of the scientific method, which is a systematic means of exploring the world around us. Although some experiments take place in laboratories, it is possible to perform an experiment anywhere, at any time, even in software development.

Practicing Hypothesis-Driven Development means thinking about the development of new ideas, products and services – even organizational change – as a series of experiments to determine whether an expected outcome will be achieved. The process is iterated upon until a desirable outcome is obtained or the idea is determined to be not viable.

We need to change our mindset to view our proposed solution to a problem statement as a hypothesis, especially in new product or service development – the market we are targeting, how a business model will work, how code will execute and even how the customer will use it.

We do not do projects anymore, only experiments. Customer discovery and Lean Startup strategies are designed to test assumptions about customers. Quality Assurance is testing system behavior against defined specifications. The experimental principle also applies in Test-Driven Development – we write the test first, then use the test to validate that our code is correct, and succeed if the code passes the test. Ultimately, product or service development is a process to test a hypothesis about system behaviour in the environment or market it is developed for.

The key outcome of an experimental approach is measurable evidence and learning.

Learning is the information we have gained from conducting the experiment. Did what we expect to occur actually happen? If not, what did and how does that inform what we should do next?

In order to learn, we need to use the scientific method for investigating phenomena, acquiring new knowledge, and correcting and integrating previous knowledge back into our thinking.

As the software development industry continues to mature, we now have an opportunity to leverage improved capabilities such as Continuous Design and Delivery to maximize our potential to learn quickly what works and what does not. By taking an experimental approach to information discovery, we can more rapidly test our solutions against the problems we have identified in the products or services we are attempting to build, with the goal of optimizing how effectively we solve the right problems rather than simply becoming a feature factory that continually builds solutions.

The steps of the scientific method are to:

  • Make observations
  • Formulate a hypothesis
  • Design an experiment to test the hypothesis
  • State the indicators to evaluate if the experiment has succeeded
  • Conduct the experiment
  • Evaluate the results of the experiment
  • Accept or reject the hypothesis
  • If necessary, make and test a new hypothesis

Using an experimentation approach to software development

We need to challenge the concept of having fixed requirements for a product or service. Requirements are valuable when teams execute a well-known or well-understood phase of an initiative and can leverage well-understood practices to achieve the outcome. However, when you are in an exploratory, complex and uncertain phase, you need hypotheses.

Handing teams a set of business requirements reinforces an order-taking approach and mindset that is flawed.

Business does the thinking and ‘knows’ what is right. The purpose of the development team is to implement what they are told. But when operating in an area of uncertainty and complexity, all the members of the development team should be encouraged to think and share insights on the problem and potential solutions. A team simply taking orders from a business owner is not utilizing the full potential, experience and competency that a cross-functional multi-disciplined team offers.

Framing hypotheses

The traditional user story framework is focused on capturing requirements for what we want to build and for whom, to enable the user to receive a specific benefit from the system.

As A… <role>

I Want… <goal/desire>

So That… <receive benefit>

Behaviour Driven Development (BDD) and Feature Injection aim to improve the original framework by supporting communication and collaboration between developers, testers and non-technical participants in a software project.

In Order To… <receive benefit>

As A… <role>

I Want… <goal/desire>

When viewing work as an experiment, the traditional story framework is insufficient. As in our high school science experiment, we need to define the steps we will take to achieve the desired outcome. We then need to state the specific indicators (or signals) we expect to observe that provide evidence that our hypothesis is valid. These need to be stated before conducting the test to reduce biased interpretations of the results. 

If we observe signals that indicate our hypothesis is correct, we can be more confident that we are on the right path and can alter the user story framework to reflect this.

Therefore, a user story structure to support Hypothesis-Driven Development would be:


We believe < this capability >

What functionality will we develop to test our hypothesis? By defining a ‘test’ capability of the product or service we are attempting to build, we identify the functionality and hypothesis we want to test.

Will result in < this outcome >

What is the expected outcome of our experiment? What is the specific result we expect to achieve by building the ‘test’ capability?

We will know we have succeeded when < we see a measurable signal >

What signals will indicate that the capability we have built is effective? What key metrics (qualitative or quantitative) will we measure to provide evidence that our experiment has succeeded and to give us enough confidence to move to the next stage?

The threshold you use for statistical significance will depend on your understanding of the business and the context you are operating within. Not every company has the user sample size of Amazon or Google to run statistically significant experiments in a short period of time. Limits and controls need to be defined by your organization to determine acceptable evidence thresholds that will allow the team to advance to the next step.

For example, if you are building a rocket ship, you may want your experiments to have a high threshold for statistical significance. If you are deciding between two different flows intended to help increase user sign-up, you may be happy to tolerate a lower significance threshold.

The final step is to clearly and visibly state any assumptions made about our hypothesis, to create a feedback loop for the team to provide further input, debate and understanding of the circumstances under which we are performing the test. Are the assumptions valid, and do they make sense from a technical and business perspective?

Hypotheses, when aligned to your MVP, can provide a testing mechanism for your product or service vision. They can test the most uncertain areas of your product or service in order to gain information and improve confidence.

Examples of Hypothesis-Driven Development user stories are:

Business story

We Believe That increasing the size of hotel images on the booking page

Will Result In improved customer engagement and conversion

We Will Know We Have Succeeded When we see a 5% increase in customers who review hotel images and then proceed to book within 48 hours.
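The business story above becomes more useful when the success signal is machine-checkable. A sketch that captures the three clauses and the 5% criterion as data (the field names and numbers are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class HypothesisStory:
    we_believe: str
    will_result_in: str
    succeeded_when: str
    required_lift: float  # e.g. 0.05 for the 5% increase above

    def signal_met(self, baseline: float, observed: float) -> bool:
        """True when the relative lift meets the stated success signal."""
        return (observed - baseline) / baseline >= self.required_lift

story = HypothesisStory(
    we_believe="increasing the size of hotel images on the booking page",
    will_result_in="improved customer engagement and conversion",
    succeeded_when="a 5% increase in image viewers booking within 48 hours",
    required_lift=0.05,
)
# 20% of image viewers booked within 48 hours before; 21.2% after (6% lift).
print(story.signal_met(baseline=0.20, observed=0.212))  # True
```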

It is imperative to have effective monitoring and evaluation tools in place when using an experimental approach to software development, in order to measure the impact of our efforts and provide a feedback loop to the team. Otherwise, we are essentially blind to the outcomes of our efforts.

In agile software development we define working software as the primary measure of progress.

By combining Continuous Delivery and Hypothesis-Driven Development we can now define working software and validated learning as the primary measures of progress.

Ideally we should not say we are done until we have measured the value of what is being delivered – in other words, gathered data to validate our hypothesis.

One way to gather data is to perform A/B testing to test a hypothesis and measure the change in customer behaviour. Alternative options include customer surveys, paper prototypes, and user and/or guerrilla testing.

One example of a company we have worked with that uses Hypothesis-Driven Development is lastminute.com. The team formulated a hypothesis that customers are only willing to pay a maximum price for a hotel based on the time of day they book. Tom Klein, CEO and President of Sabre Holdings, shared the story of how they improved conversion by 400% within a week.

Combining practices such as Hypothesis-Driven Development and Continuous Delivery accelerates experimentation and amplifies validated learning. This gives us the opportunity to accelerate the rate at which we innovate while relentlessly reducing cost, leaving our competitors in the dust. Ideally, we can achieve one-piece flow: atomic changes that enable us to identify causal relationships between the changes we make to our products and services and their impact on key metrics.

As Kent Beck said, “Test-Driven Development is a great excuse to think about the problem before you think about the solution”. Hypothesis-Driven Development is a great opportunity to test what you think the problem is, before you work on the solution.



How to apply hypothesis-driven development

The Statsig team

Ever wondered how to streamline your software development process to align more closely with actual user needs and business goals?

Hypothesis-Driven Development (HDD) could be the answer, blending the rigor of the scientific method with the creativity of engineering. This approach not only accelerates development but also enhances the precision and relevance of the features you deploy.

HDD isn't just a fancy term; it's a structured methodology that transforms guesswork in product development into an evidence-based strategy. By focusing on hypotheses, you can make clearer decisions and avoid the common pitfalls of assumption-based approaches.

Here's how you can apply this method to boost your team's efficiency and product success.

Introduction to hypothesis-driven development

Hypothesis-Driven Development (HDD) applies the scientific method to software engineering, fostering a culture of experimentation and learning. Essentially, it involves forming a hypothesis about a feature's impact, testing it in a real-world scenario, and using the results to guide further development. This method helps teams move from "we think" to "we know," ensuring that every feature adds real value to the product.

Benefits of HDD include:

  • Improved accuracy: by testing assumptions, you ensure that only the features that truly meet user needs and drive business goals make it to production.
  • Enhanced team agility: HDD allows teams to adapt quickly based on empirical data, making it easier to pivot or iterate on features.

Adopting HDD means shifting from a feature-focused to a results-focused mindset, a change that can significantly enhance both the development process and the end product. By integrating hypothesis testing into your workflow, you not only build better software but also foster a more knowledgeable and agile development team.

Setting the stage for HDD

Defining clear, testable hypotheses before starting the development process is crucial. This ensures that every feature developed serves a specific, measurable goal. Remember, a well-defined hypothesis sets the stage for meaningful experimentation and impactful results.

User feedback and data analysis play pivotal roles in shaping these hypotheses. You gather insights directly from your users and analyze existing data to hypothesize what changes might improve your product. This approach ensures that your development efforts align closely with user needs and expectations.

For example, feature flagging allows you to test hypotheses in production environments without disrupting the user experience. This method provides real-time feedback and data to refine your hypotheses further.

Designing effective experiments

Selecting relevant metrics and establishing control groups are key components in designing experiments. You need metrics that directly reflect the changes hypothesized. Establishing a control group ensures that any observed changes are due to the modification and not external variables.

Utilizing tools like feature flags ensures that your experiments are both scalable and repeatable. Feature flags allow you to manage who sees what feature and when, making it easier to roll out changes incrementally. This approach minimizes risk and provides flexibility in testing.

Techniques for scalability and repeatability:

  • Use feature flags to segment user groups and roll out changes selectively.
  • Ensure data consistency across tests by using standardized data collection methods.
  • Automate the deployment and rollback processes to react quickly to experiment results.
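The automated-rollback point can be sketched as a simple metric guardrail that flips a flag OFF when a key metric degrades (the tolerance and names are illustrative, not a real platform API):

```python
# Hypothetical guardrail: automatically disable a feature flag when a key
# metric degrades beyond a tolerance, so rollback needs no human in the loop.
def guard_flag(flags: dict, feature: str, baseline: float,
               observed: float, max_drop: float = 0.02) -> bool:
    """Flip the flag OFF if the metric dropped more than max_drop
    (relative). Returns the flag's new state."""
    if baseline > 0 and (baseline - observed) / baseline > max_drop:
        flags[feature] = False  # emergency rollback: reset the flag
    return flags[feature]

flags = {"one-click-booking": True}
# Conversion fell from 10% to 9.5% (a 5% relative drop) -> roll back.
print(guard_flag(flags, "one-click-booking", baseline=0.10, observed=0.095))
```

In practice this check would run on a schedule against live metrics; the point is that the rollback decision is codified, not ad hoc.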

By following these strategies, you can ensure that your hypothesis-driven experiments yield valuable insights and drive product improvements effectively.

Implementing experimentation at scale

Tools and platforms like Statsig enhance hypothesis-driven development by enabling feature flagging and experimentation. These tools integrate into your development workflows seamlessly. They provide a robust framework for managing experiments without disrupting existing processes.

Seamless integration into development workflows involves several steps:

  • Automate the setup process: tools should easily integrate with your CI/CD pipelines.
  • Use APIs for customization: flexible APIs allow you to tailor experiments to your specific needs.
  • Leverage dashboard features: platforms offer dashboards for real-time results monitoring, which assists in quick decision-making.

By adopting these tools, you ensure that experimentation scales with your application's growth and complexity. This approach supports continuous improvement and helps you make data-driven decisions efficiently.

Analyzing experiment results

Analyzing data post-experiment is crucial to determining the success or failure of your hypothesis. You begin by gathering and segmenting the data collected during the experiment phase. Use statistical tools to analyze these data sets for patterns or significant outcomes.

Understanding statistical significance plays a pivotal role in hypothesis-driven development (HDD). This involves determining whether the results observed are due to the changes made or to random variation:

  • Perform a t-test or use a p-value to assess significance.
  • Ensure the sample size is adequate to justify the results.
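For conversion-style metrics, a two-proportion z-test is a common alternative to the t-test. A stdlib-only sketch of computing the two-sided p-value (a teaching sketch, not a substitute for a proper stats library):

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates
    (pooled two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 1,000 users per arm: 100 vs 140 conversions.
p = two_proportion_p_value(100, 1000, 140, 1000)
print(f"p = {p:.4f}")  # well below 0.05 -> significant
```

Note how the sample-size point bites here: with only 100 users per arm, the same rates would not reach significance.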

These methods guide your decision-making process, indicating whether to adopt, iterate, or discard the tested hypothesis. Effective analysis not only confirms the validity of your hypothesis but also enhances the reliability of your development process.

Learning from success and failure

Documenting outcomes is essential, whether your experiments succeed or fail. Start by creating a structured template that captures key metrics, observations, and the conditions under which the experiment ran. This practice ensures that you maintain a historical data repository which can guide future hypotheses and prevent repetitive failures.

Learning from both success and failure sharpens your hypothesis-driven development skills. For successes, document what worked and why, linking outcomes to specific actions or changes. For failures, identify missteps and misunderstood variables to refine future experiments. This continuous documentation feeds into a knowledge base that becomes a valuable resource for your team.

Iterating and integrating feedback enhance product development progressively. Incorporate lessons from each experiment into the next cycle of hypothesis formulation and testing. This approach, highlighted in discussions about good engineering culture , fosters a dynamic environment where improvements are continual and responsive to user feedback.

By embracing these practices, you ensure that your development process remains agile, informed, and increasingly effective over time.

Closing thoughts

Hypothesis-Driven Development offers a powerful framework for aligning software development with user needs and business objectives. By embracing experimentation, data-driven decision making, and continuous learning, teams can create products that truly resonate with their target audience.

While adopting HDD requires a shift in mindset and the right tools, the benefits it brings in terms of improved accuracy, agility, and user satisfaction make it a worthwhile investment for any software development organization.


Stratechi.com


“A fact is a simple statement that everyone believes. It is innocent, unless found guilty. A hypothesis is a novel suggestion that no one wants to believe. It is guilty until found effective.”

– Edward Teller, Nuclear Physicist

During my first brainstorming meeting on my first project at McKinsey, this very serious partner, who had a PhD in Physics, looked at me and said, “So, Joe, what are your main hypotheses?” I looked back at him, perplexed, and said, “Ummm, my what?” I was used to people simply asking, “What are your best ideas, opinions, thoughts, etc.” Over time, I began to understand the importance of hypotheses and the role they play in McKinsey’s problem solving: separating ideas and opinions from facts.

What is a Hypothesis?

“Hypothesis” is probably one of the top 5 words used by McKinsey consultants. And, being hypothesis-driven was required to have any success at McKinsey. A hypothesis is an idea or theory, often based on limited data, which is typically the beginning of a thread of further investigation to prove, disprove or improve the hypothesis through facts and empirical data.

The first step in being hypothesis-driven is to focus on the highest potential ideas and theories of how to solve a problem or realize an opportunity.

Let’s go over an example of being hypothesis-driven.

Let’s say you own a website, and you brainstorm ten ideas to improve web traffic, but you don’t have the budget to execute all ten ideas. The first step in being hypothesis-driven is to prioritize the ten ideas based on how much impact you hypothesize they will create.


The second step in being hypothesis-driven is to apply the scientific method to your hypotheses by creating the fact base to prove or disprove them, which then allows you to turn each hypothesis into fact and knowledge. Running with our example, you could prove or disprove your hypotheses about which ideas will drive the most impact by executing:

  1. An analysis of previous research and the performance of the different ideas
  2. A survey in which customers rank-order the ideas
  3. An actual test of the ten ideas to create a fact base on click-through rates and cost
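The two steps can be sketched in a few lines: rank ideas by hypothesized impact, then let the measured fact base confirm or overturn that ranking (all names and numbers are invented for illustration):

```python
# Hypothetical data: ideas scored by hypothesized impact (step 1), then a
# test producing measured click-through lift for the top candidates (step 2).
ideas = {
    "rewrite-headlines": 9, "faster-pages": 8, "seo-refresh": 7,
    "email-digest": 6, "social-sharing": 5, "new-logo": 2,
}

# Step 1: prioritize by hypothesized impact.
prioritized = sorted(ideas, key=ideas.get, reverse=True)

# Step 2: build the fact base -- measured CTR lift from an actual test.
measured_lift = {"rewrite-headlines": 0.01, "faster-pages": 0.06,
                 "seo-refresh": 0.04}

# The facts can disagree with the hypothesis: here the top-ranked idea
# underperformed, so the hypothesis is revised rather than assumed true.
best_by_fact = max(measured_lift, key=measured_lift.get)
print(prioritized[0], "->", best_by_fact)
```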

While there are many other ways to validate the hypothesis behind your prioritization, I find most people do not take this critical step of validating a hypothesis. Instead, they apply bad logic to many important decisions. An idea pops into their head, and then somehow it just becomes a fact.

One of my favorite lousy logic moments was a CEO who stated,

“I’ve never heard our customers talk about price, so price doesn’t matter with our products, and I’ve decided we’re going to raise prices.”

Luckily, his management team was able to run a survey to dig deeper into the hypothesis that customers weren’t price-sensitive. Of course, they were, and through the survey the team built a fantastic fact base that proved and disproved many other important hypotheses.


Why is being hypothesis-driven so important?

Imagine if medicine had never adopted the scientific method. We would probably still be living in a world of lobotomies and bloodletting. Many organizations are still stuck in the dark ages, having built a house of cards on opinions disguised as facts, because they don’t prove or disprove their hypotheses. Decisions made on top of decisions, made on top of opinions, steer organizations away from reality and the facts necessary to objectively evolve their strategic understanding and knowledge. I’ve seen too many leadership teams led solely by gut and opinion. The problem with intuition and gut is that if you never prove or disprove whether your gut is right or wrong, you’re never going to improve your intuition. There is a reason why being hypothesis-driven is the cornerstone of problem solving at McKinsey and every other top strategy consulting firm.

How do you become hypothesis-driven?

Most people are idea-driven and constantly have hypotheses about how the world works and what they or their organization should do to improve. There is often a fatal flaw, though: many people turn their hypotheses into false facts without actually finding or creating the facts to prove or disprove them. These people aren’t hypothesis-driven; they are gut-driven.

The conversation typically goes something like “doing this discount promotion will increase our profits” or “our customers need to have this feature” or “morale is in the toilet because we don’t pay well, so we need to increase pay.” These should all be hypotheses that need the appropriate fact base, but instead, they become false facts, often leading to unintended results and consequences. In each of these cases, to become hypothesis-driven necessitates a different framing.

• Instead of “doing this discount promotion will increase our profits,” a hypothesis-driven approach is to ask “what are the best marketing ideas to increase our profits?” and then conduct a marketing experiment to see which ideas increase profits the most.

• Instead of “our customers need to have this feature,” ask the question, “what features would our customers value most?” Then conduct a simple survey asking customers to rank order the features based on their value to them.

• Instead of “morale is in the toilet because we don’t pay well, so we need to increase pay,” conduct a survey asking, “what is the level of morale?”, “what are the potential issues affecting morale?”, and “what are the best ideas to improve morale?”

Beyond watching out for simply following your gut, here are some of the other best practices in being hypothesis-driven:

Listen to Your Intuition

Your mind has taken the collision of your experiences and everything you’ve learned over the years to create your intuition: those ideas that pop into your head and those hunches that come from your gut. Your intuition is your wellspring of hypotheses. So listen to your intuition, build hypotheses from it, and then prove or disprove those hypotheses, which will, in turn, improve your intuition. Intuition without feedback will typically evolve over time into poor intuition, which leads to poor judgment, thinking, and decisions.

Constantly Be Curious

I’m always curious about cause and effect. At Sports Authority, I had a hypothesis that customers who received service and assistance as they shopped were worth more than customers who didn’t receive assistance from an associate. We figured out how to prove or disprove this hypothesis by tying surveys to customers’ transactional data, and we found the hypothesis was true, which led us to a broad initiative around improving service. The key is to always be curious about what you think does or will drive value, create hypotheses, and then prove or disprove those hypotheses.

Validate Hypotheses

You need to validate and prove or disprove hypotheses. Don’t just chalk up an idea as fact. In most cases, you’re going to have to create a fact base utilizing logic, observation, testing (see the section on Experimentation), surveys, and analysis.

Be a Learning Organization

The foundation of learning organizations is the testing of and learning from hypotheses. I remember my first strategy internship at Mercer Management Consulting when I spent a good part of the summer combing through the results, findings, and insights of thousands of experiments that a banking client had conducted. It was fascinating to see the vastness and depth of their collective knowledge base. And, in today’s world of knowledge portals, it is so easy to disseminate, learn from, and build upon the knowledge created by companies.


Hypothesis-Driven Development

Hypothesis-Driven Development (HDD) is a software development approach rooted in the philosophy of systematically formulating and testing hypotheses to drive decision-making and improvements in a product or system. At its core, HDD seeks to align development efforts with the goal of discovering what resonates with users. This philosophy recognizes that assumptions about user behavior and preferences can often be flawed, and the best way to understand users is through experimentation and empirical evidence.

In the context of HDD, features and user stories are often framed as hypotheses. This means that instead of assuming a particular feature or enhancement will automatically improve the user experience, development teams express these elements as testable statements. For example, a hypothesis might propose that introducing a real-time chat feature will lead to increased user engagement by facilitating instant communication.

The Process

The process of Hypothesis-Driven Development involves a series of steps. Initially, development teams formulate clear and specific hypotheses based on the goals of the project and the anticipated impact on users. These hypotheses are not merely speculative ideas but are designed to be testable through concrete experiments.

Once hypotheses are established, the next step is to design and implement experiments within the software. This could involve introducing new features, modifying existing ones, or making adjustments to the user interface. Throughout this process, the emphasis is on collecting relevant data that can objectively measure the impact of the changes being tested.
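As an illustration of the experiment step, here is a hedged Python sketch of how a team might deterministically bucket users into an experiment and record the engagement data needed to measure impact later. The experiment name, variant labels, and metric are all hypothetical, not part of any specific product described here.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Collect engagement events per variant so the impact can be analyzed later.
events = {"control": [], "treatment": []}

def record_engagement(user_id: str, minutes_active: float,
                      experiment: str = "realtime_chat"):
    """Log an engagement measurement under the user's assigned variant."""
    events[assign_variant(user_id, experiment)].append(minutes_active)
```

Hashing on `experiment:user_id` (rather than the user id alone) keeps variant assignments independent across concurrent experiments, which matters once a team runs more than one at a time.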

Validating Hypotheses

The collected data is then rigorously analyzed to determine the validity of the hypotheses. This analytical phase is critical for extracting actionable insights and understanding how users respond to the implemented changes. If a hypothesis is validated, the development team considers how to build upon the success. Conversely, if a hypothesis is invalidated, adjustments are made based on the lessons learned from the experiment.
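One common way to analyze such data, shown here as an illustrative sketch rather than a prescribed method, is a two-proportion z-test on an engagement metric. The counts below are hypothetical.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for 'variant B engages users at a different rate than A'."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value(z):
    """Two-sided p-value from the standard normal CDF."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: 200/1000 engaged in control, 260/1000 with the feature.
z = two_proportion_z(200, 1000, 260, 1000)
validated = p_value(z) < 0.05  # hypothesis survives at the 5% level
```

A significant result only answers the narrow question the metric encodes; deciding whether that result counts as a "win" for the product is still a judgment call, which is why the hypothesis should be framed before the experiment runs.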

HDD embraces a cycle of continuous improvement. As new insights are gained and user preferences evolve, the development process remains flexible and adaptive. This iterative approach allows teams to respond to changing conditions and ensures that the software is consistently refined in ways that genuinely resonate with users. In essence, Hypothesis-Driven Development serves as a methodology that not only recognizes the complexity of user behavior but actively seeks to uncover what truly works through a structured and empirical approach.


Hypothesis-driven development


If you’ve ever worked on a project for months, quarters, or years, only to see underwhelming results when it finally launched, then maybe it’s time to reach into the scientific process for a new approach to problem solving.

When proposing new engineering projects, it’s tempting to talk in absolutes: to paint the picture of a successful future where happy customers are living their best lives thanks to the completion of our initiative.

But no matter the brilliance of our team, none of us can be 100% sure of the future. If we were, we’d be off buying lottery tickets, not designing database sharding systems. And so it can be helpful to have a way to track our progress over the course of a project, checking that our assumptions and beliefs don’t turn out to be wrong in the face of actual user or system behaviour.

Hypotheses sound like great things. Just the word “hypothesis” – it’s so freakin’ science-y! But often we take our success criteria, call them our hypotheses (‘if we reduce build times, engineers will be happier and more productive’), and start building. It’s my experience that defining and stating good hypotheses at the beginning of design, and measuring the success through a process of incremental, phased development, results in spending less time building systems that are ineffective, and more time learning and understanding our customers.

What makes a good hypothesis?

I spent several years leading a product growth team that ran a lot of experiments. That seems like a somewhat obvious statement, as the most conspicuous impression of the work of growth teams is ‘you run an a/b test on something, then ship the side that wins’.

However, we ran into some interesting challenges when we were building experiments. In some cases, we would build and run an experiment and see statistically-significant results. But then when the time came to ship, disagreement would break out about whether this result was a “win” or not. If our experiment increased the number of new users trying our service, but they converted to paying customers at a lower rate than average, should we continue to invest in this area, or move onto something new? It was frustrating that we only really had those conversations after the design, build, and execution of an experiment.

At one point, a teammate shared a blog post by a product manager at Patreon which I highly recommend. To quote a couple of key paragraphs:

'A good hypothesis is a statement about what you believe to be true today.  It is not what you think will happen when you try X. It contains neither the words “If” nor “Then.” In fact, it has nothing to do with what you’re about to try  –  it’s all about your users.'

'Why be pedantic about this? Because hypotheses are the key to learning. Product growth doesn’t happen from a few cool tricks. Product growth comes from fumbling around in the dark, trying a lot of things, and improving our aim over the course of months and years. In other words, this is a long game that is ultimately about learning.  Clear learnings come only from clear hypotheses.'

Setting hypotheses

Defining the hypotheses in the form of ‘we believe X because Y’ is a crucial act of framing. It sets the stage for the work to be done. In most cases, we found them to be fairly uncontroversial. To give a specific example from my growth team:

‘We believe that new users have trouble discovering both basic and advanced functionality because user testing shows that much of it is hidden from discovery and not mentioned during normal use of the product.’

The hypothesis is clear and backed up with evidence. The goal with a hypothesis is that anyone reviewing can give feedback on whether or not they buy into this statement. There’s nothing here about priority, urgency or what we might build; it’s a statement of belief.

Sometimes we didn’t have strong evidence or pointers to feedback, but we believed something in our guts. That’s OK too. Stating the hypothesis allows it to be challenged and tested. When we were planning our Sprints, a loosely-held belief might get more pushback from the team or our leadership, but this allowed us to have those conversations early in the process, rather than once we were already looking at the results of a built experiment.

Predictions

Once we had our hypotheses, we wrote predictions, and these varied in detail. If we had already run experiments against the hypothesis in previous Sprints and therefore had a higher confidence, our predictions were quite prescriptive. But other times they merely set the goalposts for the team to design towards.

The goal with a prediction is to use it to define the  smallest possible  piece of work that could be built and produce a learning. If a prediction holds true, the hypothesis lives to fight another day, and we can build its next test with confidence.
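The hypothesis/prediction pairing can be captured in a lightweight structure. The sketch below is an illustration, not the team's actual tooling; the belief, metric name, and threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    belief: str    # "We believe X..." — a statement about users today
    evidence: str  # "...because Y" — why we believe it

@dataclass
class Prediction:
    hypothesis: Hypothesis
    statement: str    # the smallest testable claim derived from the belief
    metric: str       # what we will measure
    threshold: float  # the value at which the prediction "holds"

    def holds(self, observed: float) -> bool:
        """Did the experiment's observed metric clear the goalposts?"""
        return observed >= self.threshold

h = Hypothesis(
    belief="New users have trouble discovering advanced functionality",
    evidence="user testing shows much of it is hidden during normal use",
)
p = Prediction(
    hypothesis=h,
    statement="Surfacing shortcuts during onboarding lifts feature adoption",
    metric="7-day feature adoption rate",
    threshold=0.15,
)
```

Writing the pair down in one place makes the early review conversation concrete: reviewers can push back on the belief, the evidence, or the goalposts separately, before anything is built.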

Here are a couple of examples of predictions we tested.

The prediction here is very specific to the hypothesis being tested. Importantly, it doesn’t talk about the risk of this change (that removing banners may negatively affect other metrics). For this experiment we want to learn one thing: does this hypothesis stand up to scrutiny? And can we use the results to further experiment and iterate on this experience in later Sprints?

I think this pairing was interesting because the prediction on its own wouldn’t be an interesting experiment to run. (Yes, of course if you nudge users to do something, they’ll be more likely to do it. Duh!) But paired with the hypothesis, there is a clearer picture of why it matters, and (depending on the results) it gave us guidance to continue experimenting along this line of thinking.

This also illustrates a redefinition of “success” for the team’s work. The prediction is non-prescriptive about what the design of the user experience should be, and there are a variety of experiments that could test this prediction. Quickly learning the validity of the hypothesis with a small experiment gave us valuable data to gain confidence in investing in a more permanent solution.

What success looks like

Our measure of success as a team soon became ‘building and shipping something in a timely manner that helped us learn more about our users and iterate’. But success doesn't have to mean getting a perfect, long-lived solution that never needs tweaking. Success doesn’t even have to be that we were right with our hypothesis. In fact, some of the most rewarding projects have been the ones where we spent a few weeks building and testing something that completely blew our hypothesis apart, as then my reaction was ‘cool, now we know not to spend any more time going down that path!’

But each experiment’s success helped the team stay focused on the projects that would move our product to deliver our business goals, without over-investing in work that didn’t move the needle.

Taking this beyond growth teams

Since moving on from that growth team, I’ve continued to adapt and evolve this hypothesis/prediction approach and can now apply it to different engineering problems. It turns out that no team suffers from getting people aligned early in a project with short, clear definitions of success. Here are some examples of the framework in different situations.

  • If your team is starting development of a new product, engineers can build scaled-down, hacky prototypes to test key user behaviours before the team invests in a longer-term roadmap.
  • If your team is facing challenges around mobile app stability, you can pinpoint the opportunities for improvement, and prioritize and parallelize different approaches to fixing crashes.
  • If your team is attempting to improve website performance, you can test and measure small, reversible changes on individual pages before you invest in switching bundling tools or building asset optimization flows.

My favourite observation from adopting this approach has been seeing increased participation from all levels of engineers in suggesting potential projects. When we’ve kept discussion focused on ‘what are the simple ways to test the hypotheses?’ it’s easier for more-junior engineers to confidently throw out suggestions.

So if you’re in a position where you’re planning future work for your team, I encourage you to take a step back, look at your goals, and ask yourself ‘what are the things we believe, and what are the cheapest ways to test whether those beliefs hold up to scrutiny?’ It’s the way scientists have been working for centuries, and it might help you and your team avoid costly mistakes.


Hypothesis Driven Development for AI Products

Leveraging empirical science processes to deliver engineering projects

April 14, 2024

TL;DR: This is the Diagram that summarizes the approach.


Introduction

In regular software work, things usually happen as expected. You find a problem, figure out a solution, and, once implemented, it usually works fine. In machine learning, however, things are less predictable. Instead of writing out every step, we teach machines to learn tasks on their own [1]. But this brings uncertainty. Because they can handle tasks we wouldn’t know how to code directly, we can’t predict the outcome before we try. Even seasoned professionals often encounter unexpected situations.

Due to this uncertainty, the typical methodology used in software engineering isn’t enough for machine learning projects. We need to add a more scientific approach to our toolkit: formulating hypotheses about potential solutions and then validating them. Only once a solution is proven effective can we trust that it solves our problem. This approach is known as Hypothesis Driven Development. Sometimes it is also referred to as Continuous Experimentation.

The aim of this post is to offer guidance on how to implement this approach: a conceptual compass to help navigate the uncertainty while increasing the chances of success. In other words, it’s about maximizing the possibility of creating machine learning-powered products that have an impact in the real world.

To illustrate the process discussed here, we will consider two examples. One from the world of computer vision and the other from the world of Large Language Models:

  • Detect defects in a production line.
  • Answer user questions related to a company’s internal documents.

Hypothesis-driven development is not new. Some teams even use it for projects unrelated to AI, employing it to manage uncertainty [2]. The distinction lies in the type of uncertainty addressed: rather than focusing on whether the software functions correctly, they’re more concerned with how certain product features impact outcomes (like whether a new landing page boosts conversions).

Problem Definition

People often start a new project with a rough idea of what they want to achieve. These ideas tend to look like the examples introduced above. They might be good high-level goals, but they are too vague to be actionable: we cannot know if we are successful or not. Thus, the first and most important step towards a successful project is to crystallize the problem definition. The situation will improve dramatically if we aim for the following three things:

  • Evaluation dataset – Imagine your ideal system as a black box. What inputs would you give it, and what outputs would you expect? Gathering these input-output pairs for all relevant scenarios is essential. Though it may be tedious and time-consuming, it’s arguably the most critical part of the project. Without it, you’re essentially flying blind. Investing time here will pay ample dividends.
  • Evaluators – Once the system produces outputs based on the evaluation dataset, you need to assess how close these outputs are to the desired ones. Evaluators quantify this closeness by generating metrics from pairs of actual and desired outputs. We may have multiple evaluators if we care about different things [3].
  • Success Criteria – What is the minimum performance we require to trust the system enough to use it in the real world.

After going through the (painful but valuable) process, the illustrative examples now might look like this:

  • Detect if there is any scratch in the iPhone 13 screen before assembling it to the phone. We require at most 0.5% False Negatives and 5% False Positives. We will evaluate using a dataset of 1000 picture-label pairs. Pictures are photos of screens. Labels indicate if the given screen is scratched or not.
  • Given an employee question, fetch the document sections that answer it. We require at least 80% average section recall and at least 30% average section precision. We will evaluate using a dataset of 100 questions and section set pairs.

As you may realize, these problem definitions give us a clear target and a clear way to know if we hit the target.
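To make the second example concrete, its evaluators and success criteria could be sketched roughly like this. The retrieval system itself is a stand-in function here, and the metric targets simply mirror the numbers stated above.

```python
def section_recall(retrieved: set, relevant: set) -> float:
    """Fraction of the desired sections the system actually returned."""
    return len(retrieved & relevant) / len(relevant) if relevant else 1.0

def section_precision(retrieved: set, relevant: set) -> float:
    """Fraction of the returned sections that were actually desired."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def evaluate(system, dataset, recall_target=0.8, precision_target=0.3):
    """Run every question through the system and average the evaluator metrics."""
    recalls, precisions = [], []
    for question, relevant_sections in dataset:
        retrieved = system(question)
        recalls.append(section_recall(retrieved, relevant_sections))
        precisions.append(section_precision(retrieved, relevant_sections))
    avg_recall = sum(recalls) / len(recalls)
    avg_precision = sum(precisions) / len(precisions)
    meets_criteria = avg_recall >= recall_target and avg_precision >= precision_target
    return avg_recall, avg_precision, meets_criteria
```

With `dataset` being the 100 question/section-set pairs, `meets_criteria` directly encodes the success criteria, so every experiment can be judged the same way.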

Task performance metrics aren’t the only considerations; there could be other factors like latency or cost. For example, a perfect prediction that takes 5 minutes wouldn’t be practical on a production line.

The Inner Loop

Once the problem is clear, we can start working on the solution. This is an iterative process, often referred to as the Inner Loop because there is no live interaction with the outside world. The steps are the following:

  • Formulate a hypothesis – What can we try that might improve the results and move us closer to our goal? Looking at the results of the previous iteration, reading relevant material and discussing with colleagues are always safe bets to come up with new ideas.
  • Run an experiment – Develop the artifacts to validate the hypothesis. This usually includes code and/or data. We may need to train a model or inject context into a Large Language Model prompt. If we need to do so, we will need data (and it cannot be our evaluation set). Thus, while not discussed explicitly in this post, there is usually a need for a data engine to ingest, process, and manage data.
  • Evaluate the results – We take the inputs of our evaluation set and pass them through the artifacts of our experiment to obtain outputs. Then, we feed these outputs paired with the desired outputs to our evaluators to obtain the offline metrics .
  • Decide – If the results indicate improvement over the previous iteration, integrate the experiment’s work into the solution. If not, document the lessons learned and archive the work. At this point, we may choose to exit the inner loop and deploy the current solution to the real world or return to step 1 to formulate a new hypothesis and continue refining the system.
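The four steps above can be sketched as a small driver loop. This is an illustrative skeleton, not a full experiment framework; the hypothesis `build` functions and the evaluator are stand-ins you would replace with real training/evaluation code.

```python
def inner_loop(hypotheses, evaluate, success_criteria, baseline_score=0.0):
    """Iterate over hypotheses, keeping each one only if offline metrics improve."""
    best_score, solution, log = baseline_score, None, []
    for hypothesis in hypotheses:
        artifact = hypothesis["build"]()   # run the experiment (train, prompt, ...)
        score = evaluate(artifact)         # offline metrics from the evaluators
        improved = score > best_score
        # Decide: integrate on improvement, otherwise archive the lesson.
        log.append({"name": hypothesis["name"], "score": score, "kept": improved})
        if improved:
            best_score, solution = score, artifact
        if best_score >= success_criteria:  # good enough to exit the loop
            break
    return solution, best_score, log
```

The `log` corresponds to the "document the lessons learned" part of the decide step: rejected experiments still leave a record behind.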

A good analogy for this way of working is to consider it the Test-Driven Development for Machine Learning. The Problem Definition defines the test we need to pass, and the Inner Loop are our efforts to accomplish that.

In that same direction, investing in infrastructure that enables fast iteration and fast feedback loops is usually a good idea. The more ideas we can try per unit of time, the higher the chances we find the right one.

The Baseline

When we enter the loop for the first time, we have nothing to iterate upon. Thus, we define our first hypothesis: the baseline . The goal of a baseline is not to solve the problem, but to allow us to start the iterative improvement. Thus, we prioritize simplicity and speed of implementation. Sensible baselines for our examples could be:

  • If average pixel intensity deviates more than 10% of average pixel intensity for non-scratched screens, label the picture as scratched.
  • Given a user question, retrieve the paragraphs that contain all the words in the user question (after removing the stop words).
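The second baseline is simple enough to write out directly. A rough sketch follows, with an illustrative (and deliberately incomplete) stop-word list; a real system would use a fuller list and smarter tokenization.

```python
# Illustrative stop-word list; intentionally small for the example.
STOP_WORDS = {"the", "a", "an", "is", "are", "what", "how", "do", "i", "to", "of"}

def tokenize(text: str) -> set:
    """Lowercase words with surrounding punctuation stripped, minus stop words."""
    return {w.strip("?.,!").lower() for w in text.split()} - STOP_WORDS - {""}

def baseline_retrieve(question: str, paragraphs: list) -> list:
    """Return every paragraph containing all non-stop-words of the question."""
    keywords = tokenize(question)
    return [p for p in paragraphs if keywords <= tokenize(p)]
```

Crude as it is, this gives the eval set something to score on day one, which is the whole point of a baseline.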

The Outer Loop

Once our solution meets the success criteria we defined at the beginning, we may enter the Outer Loop for the first time. This process does interact with the outside world (e.g., users), hence the name. It consists of the following steps:

  • Deploy – With what we believe is a functional solution, it’s time to introduce it to the real world so it becomes a product . Note that deploying or not is usually a business decision. Besides performance, other factors may come into play.
  • Observe and monitor – Deployment marks the real test. We must ensure mechanisms are in place to track real-world interactions. This includes logging inputs, outputs, and user interactions. Sufficient data should be collected to accurately reconstruct system behavior, often referred to as traces .
  • Digest – Always process what happens to the deployed system. This may involve manual inspection of data or labeling subsets for online metrics . Confidence in performance alignment with offline metrics is crucial.
  • Decide – If real-world performance meets success criteria, you have two options:
  • Enter maintenance mode: Take no further action unless performance degrades.
  • Revisit your problem definition to be more ambitious in your success criteria or in the desired scope.

If performance falls short, it indicates flaws in the problem definition. This may involve updating the evaluation dataset, revisiting evaluators, or redefining success criteria. After updating the problem definition, re-enter the Inner Loop.

Deploying directly to production can be risky, as a faulty product could damage reputation or incur losses. However, the first deployment often yields valuable insights. Strategies to mitigate risks and gather learnings without significant impact are recommended. These strategies also signal readiness for full deployment, such as:

  • Shadow mode deployment: Run the model alongside existing systems without using its predictions, allowing for comparison.
  • Alpha version rollout: Deploy to a subset of users aware they’re using a trial version, encouraging feedback.
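Shadow mode is straightforward to sketch: serve the live model's output while logging the candidate's answer and whether the two agree. The function and record shapes below are illustrative, not from any specific serving stack.

```python
def shadow_compare(live_model, candidate_model, request, log):
    """Serve the live model's answer; record the candidate's for later comparison."""
    live_out = live_model(request)
    shadow_out = candidate_model(request)   # computed but never shown to the user
    log.append({
        "request": request,
        "live": live_out,
        "shadow": shadow_out,
        "agree": live_out == shadow_out,
    })
    return live_out                         # users only ever see the live output
```

Aggregating the `agree` field over the collected traces gives an early read on how the candidate would have behaved in production, with zero user-facing risk.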


Recommended Practices

While there are many ways to skin the proverbial cat, in my experience there are a few (interrelated) practices that maximize the chances of success:

  • Iterate small and iterate frequently - These endeavors are plagued with uncertainty. Each step teaches us something. If we walk in small steps, we will rarely walk in the wrong direction for long.
  • Strive for full traceability – Hypotheses and experiments often number in the tens or even hundreds. Establishing infrastructure to track the origin of each result—both code and data—proves invaluable. If you cannot effectively reason about every result, you will get confused quickly. Tools like mlflow help on this front.
  • Write experiment documents – Similar to lab notebooks in science, keeping track of what was tried, why, what was expected and what ultimately happened is extremely valuable. Formalizing our thoughts in writing helps reflect on them and ground ourselves. Moreover, this practice streamlines sharing insights among team members and for future reference.
  • Build a leaderboard – Every project has stakeholders. At the very least, the developers are their first stakeholders. A centralized place where each experiment is displayed with its metrics helps demonstrate progress over time and can help boost morale and secure funding.
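A minimal leaderboard can be as simple as a sorted list of experiment records. The names, commit identifiers, and metric values below are made up for illustration.

```python
# Hypothetical experiment records, each traceable to code via a commit id.
experiments = [
    {"name": "baseline", "commit": "hypothetical-sha-1", "metrics": {"f1": 0.41}},
    {"name": "bigger-model", "commit": "hypothetical-sha-2", "metrics": {"f1": 0.58}},
    {"name": "data-cleanup", "commit": "hypothetical-sha-3", "metrics": {"f1": 0.63}},
]

def leaderboard(experiments, metric="f1"):
    """Sort experiment records by a chosen metric, best first, for stakeholders."""
    return sorted(experiments, key=lambda e: e["metrics"][metric], reverse=True)
```

Even this trivial version makes progress over time visible at a glance; tracking tools like mlflow provide the same view with full traceability built in.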

Closing Thoughts

While things are presented here somewhat linearly, the reality is often messier. It is hard to get the problem definition right on the first try. As you work on the problem, you discover things you did not anticipate. You are forced to revisit your assumptions and reformulate your goals. You may have to scrap big pieces of work. You may decide to take a calculated risk and deviate from the standard path. Maybe you relax the success criteria to explore early product interest. All of that is okay. That is just business as usual in the realm of AI. If you make some progress every day, there is a solid chance you will reach a valuable destination, even if it is not the one you expected initially. Embrace uncertainty and enjoy the journey.

Thanks to Chris Hughes, Patrick Schuler, and Marc Gomez for reviewing this article.

An interesting framing of this is Karpathy's Software 2.0.

This Thoughtworks article is a good introduction.

Your AI Product Needs Evals is a good deeper dive into evaluators.

Hypothesis-driven Development 


An IT project typically involves a few definite stages. A team has an idea of what would be valuable to the end user, designs a proposed solution for the product, implements the idea, tests it, and deploys it; eventually, the final product reaches the customer. Ideally, the customer likes the product, uses it, and benefits from it. However, the product team does not immediately have a clear view of the customer experience. 

This is where Hypothesis-driven Development (HDD) comes into the picture. 

Hypothesis-driven development is about solving the right problem at the right time. It ensures that understanding of the problem is verified before actual work on the solution begins.  

What is Hypothesis-driven Development? 

Hypothesis-driven Development (HDD) involves the development of new products and services through iterative experiments on hypotheses. The results of the experiments help to decide whether an expected outcome will be achieved. The steps are repeated until a desirable outcome is reached or the idea is deemed no longer viable. 

HDD advocates experimentation over detailed planning, client feedback over instinct, and iterative design over conventional “big design up front” deliveries.  

With HDD, solutions are viewed as hypotheses, which can be theories about an area relevant to the problem statement. Hypotheses can be developed about areas like the market being targeted, the performance of a business model, the performance of code or the customer’s preferred way of interacting with the system. 

As the software development industry has matured, teams now have the opportunity to leverage capabilities like Continuous Design and Delivery to maximize their potential to learn quickly what works and what does not. By taking an experimental approach to information discovery, teams can more rapidly test their solutions against the problems they have identified in the products or services they are attempting to build. 

What does Hypothesis-driven Development Hope to Achieve?

The goal of HDD is to improve the efficacy of teams by ensuring they solve correctly identified problems, rather than continually building low-impact solutions. In HDD, teams focus on user outcomes rather than on their outputs. 

What led to the Popularity of Hypothesis-driven Development?

As software companies began competing to deliver better software, faster, development teams faced three major challenges: 

1. Delivering functionality fast, before the features became outdated. 

2. Developing features for a more selective user base that demanded a better experience in every possible way. 

3. Competing against multiple alternative products to which users could switch their loyalties. 

In today’s fast-paced world, teams need to be ready to adapt continuously and quickly, even if it may require deviating far from the originally chosen path.  

The conventional approach of researching and documenting user requirements, developing a solution “to spec” and deploying it months or years later can no longer be considered suitable. 

Requirements add value when teams are executing a well-known or understood phase of an initiative and can leverage previously used practices to achieve the outcome. However, when teams are in an exploratory phase of the product’s development they should use hypotheses to validate ideas first. 

The adoption of this ‘deliver early and fail fast’ approach has become so commonplace, that the word ‘pivot’ is commonly recognized to mean the act of rapidly changing from one plan to another. 

Why Is HDD Important? 

Handing teams a fixed set of business requirements does not allow developers the freedom to innovate. It implies that the business team is in charge of both the problem and the solution, while the development team merely implements what it is told. 

However, when developing a new product or feature, the entire team should be encouraged to share their insights on the problem and potential solutions. A development team that merely takes orders from a business owner is not utilizing the full potential, experience, and competency that a cross-functional multi-disciplined team offers. 

Key Steps in Product Development 

It is crucial to lay down the foundational steps in HDD product development. The following four steps are integral to the process: 


1. Finding the Right Problem  

Teams can make use of the ‘persona hypothesis’ and ‘JTBD/Job To Be Done hypothesis’ to ensure they have identified the right problem. The ‘persona hypothesis’ focuses on the persona, which is a humanized and detailed view of who the user is, and what motivates their actions. To aid with creating the persona, the team usually follows an ‘interview guide’ to ensure they can gather sufficient information about the users they are solving problems for. 

The second type of hypothesis that can aid in identifying the right problem is the ‘JTBD hypothesis’. This hypothesis tries to understand the tasks (or jobs) that customers are trying to achieve in their lives. This framework is foundational for understanding what motivates customers and why customers behave the way they do. 

2. Identifying the Demand for the Solution

This area is pivotal and easy to assess: have you validated that you have something your audience prefers over the alternatives? Teams use the 'demand/value hypothesis' to identify demand: a hypothesis stating the exact value that would be delivered to potential clients. 

3. Finding the Right Solution to the Problem

By making use of the ‘usability hypothesis’, teams can assess whether they have found the right solution to the problem. The ‘usability hypothesis’ helps to determine how easy-to-use the designed solutions are. Simpler solutions are more likely to be adopted by more users.

4. Achieving Continuous Delivery while Deploying the Solution  

By delivering enhanced products to users quickly, teams have the opportunity to learn faster. Teams make use of the ‘functional hypothesis’ in continuous delivery pipelines, to make sure the delivered products provide the expected results. 

How Hypothesis-driven Development Works

HDD is a scientific approach to product development. In HDD, teams make observations about customers to come up with hypotheses, or explanations, which they believe align with their customers' views. Each hypothesis is then tested by predicting an outcome based on it; if the outcome aligns with the prediction, the hypothesis is supported. 

The key outcome of this experiment-based approach is a better understanding of the system and desired outcomes. If the results of the hypothesis are not as expected, deductions can be made about how to refine the hypothesis further. 

The experiment at the heart of every iteration of HDD must have a quantifiable conclusion and must contribute to gaining more information about end users' usage of the product. For each experiment, the following steps must take place: 


  • Make observations about the user 
  • Define a hypothesis 
  • Define an experiment to assess the hypothesis 
  • Decide upon success criteria (e.g., a 30% increase in successful transactions) 
  • Conduct the experiment 
  • Evaluate the results of the experiment 
  • Accept or reject the hypothesis 
  • If required, design and assess a new hypothesis 
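The accept/reject decision at the end of these steps can be sketched as a small helper. This is an illustrative sketch, not a prescribed implementation; the 30% lift threshold and the transaction counts are assumptions carried over from the example success criterion.

```python
def evaluate_experiment(baseline, observed, required_lift=0.30):
    """Accept the hypothesis only if the observed metric improved by at
    least the agreed success criterion (here, a 30% relative increase)."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    lift = (observed - baseline) / baseline
    return lift >= required_lift

# Successful transactions per day, before and after the change.
accepted = evaluate_experiment(baseline=200, observed=270)
print(accepted)  # 35% lift against a 30% criterion → True
```

The important design point is that the success criterion is fixed before the experiment runs, so the accept/reject step is mechanical rather than a judgment call after seeing the data.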

Product development teams can leverage Continuous Design and Delivery to deliver changes related to their hypothesis for real-time testing of their theories.  

How is a Good Hypothesis Framed? 

There is a framework that should be followed to define a clear hypothesis. 

The framework supports communication and collaboration between developers, testers as well as non-technical stakeholders in the product. 

It is of the following format: 

We believe  <this capability>  

‘This capability’ defines the functionality that is being developed. The success of the feature is used to test the hypothesis. 

Will result in  <this outcome>  

‘This outcome’ defines the clear objective for the development of the capability/feature. Teams identify a tangible goal to measure the usefulness of the feature being developed.  

We will have the confidence to proceed when  <we see a measurable signal>  

The measurable signal refers to the key metrics (which can be qualitative or quantitative) that can be measured to provide evidence that the experiment has succeeded.  
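One way to make the three-part template concrete is to record each hypothesis as a small structured object, so every team member fills in the same parts. This is a hypothetical sketch, not a standard tool:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    capability: str  # "We believe <this capability>"
    outcome: str     # "Will result in <this outcome>"
    signal: str      # "...confidence to proceed when <measurable signal>"

    def statement(self):
        # Render the hypothesis in the standard framework wording.
        return (f"We believe {self.capability} "
                f"will result in {self.outcome}. "
                f"We will have the confidence to proceed when {self.signal}.")

h = Hypothesis(
    capability="removing irrelevant links from the purchase page",
    outcome="improved customer conversion",
    signal="we see a 20% increase in completed checkouts",
)
print(h.statement())
```

Capturing hypotheses this way also supports the collaboration goal: a backlog of such records is readable by developers, testers, and non-technical stakeholders alike.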

An effective hypothesis is fundamental to data-driven optimization. Hypotheses are used to convert data collected about customers into actionable insights. Every hypothesis is a theory that must be assessed; each idea that is proven to either hold true or false, confirms notions about customer expectations and behaviors, and drives the iterative optimization process.  

For example, an e-retail website could have a high rate of abandonment in the purchase flow. The hypothesis could be that links in the flow are distracting potential customers. The experiment would be to remove them. Should there be an improvement in the number of completed purchases, it would confirm the hypothesis. This would give a confirmed improved understanding of the retail website’s customers and their behavioral trends. This improved insight would help decide what could be optimized next, why, and how results could be measured. 

The following is how the same hypothesis could be defined in an ideal user story:  

  We believe that removing irrelevant links from the purchase page 

Will result in improved customer conversion 

We will have the confidence to proceed when we see a 20% increase in customers who check out their shopping cart with successful payment. 

Following the framework is an easy way to ensure you have thought of every aspect of the problem as well as the proposed solution before starting actual work on the project. The framework also ensures that only meaningful features are developed, by quantifying the benefits of these features. 

Best Practices in Hypothesis-driven Development 

The following are the best standards that will help teams ensure they implement HDD well: 

1. Gather Sufficient Data  

Data is what marks the difference between a well-formed hypothesis and a guess. To create a meaningful hypothesis, a business intelligence report is greatly helpful. By monitoring customer behavioral patterns using techniques like web analytics, and indirect sources like competitor overviews, a valuable profile about customers can be created. 

2. Create a Testable Hypothesis  

To adequately test the hypothesis, there need to be well-defined metrics with clear criteria for success and failure. For example, the hypothesis that removing unnecessary navigation links from the checkout page will ensure customers complete transactions can easily be assessed for correctness. The change in the company's revenue will indicate whether or not the hypothesis was correctly defined. 

3. Measure the Experiment Results as per the Need  

The threshold used for determining success depends on the business and context. Not every company has the user sample size of Amazon or Google to run statistically significant experiments in a short period. Limits need to be defined by the organization to determine acceptable evidence thresholds that will allow the team to advance to the next step. 

For example, while building a vehicle, the experiments will have a high threshold for statistical significance. However, if the experiment is to decide between two different flows intended to help increase user sign-up, a lower significance threshold can be acceptable. 
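One common way to implement such an organization-chosen evidence threshold, assuming a conversion-style experiment, is a one-sided two-proportion z-test where the significance level is a deliberate parameter. The traffic numbers below are invented for illustration.

```python
from math import erf, sqrt

def conversion_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """One-sided two-proportion z-test: is variant B's conversion rate
    significantly higher than A's at the organization's chosen alpha?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided upper tail
    return p_value < alpha

# High-traffic flow: the same relative lift clears even a strict threshold.
print(conversion_significant(100, 1000, 150, 1000, alpha=0.01))  # True
# Low-traffic flow: identical rates, but too little evidence at alpha=0.05.
print(conversion_significant(10, 100, 15, 100, alpha=0.05))      # False
```

In practice a library routine (for example, statsmodels' proportions_ztest) would replace the hand-rolled math; the point here is that alpha is an explicit, organization-level choice, not a universal constant.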

4. State the Assumptions  

Clearly and visibly state any assumptions made about the hypothesis, to create a feedback loop for the team. This allows the team to provide further input, debate, and understand the circumstance of the test. The assumptions must make sense from a technical and business perspective. 

5. Use Insights to Drive Learning  

Hypothesis-driven experimentation gives comprehensive insights into customer behavior. These insights raise additional questions about customers and the product experience, leading to an iterative learning process. 

The learning process follows this iterative pattern: 

  • Collect data about the product domain, customers, and use the knowledge gained to formulate questions 
  • Design a hypothesis based on the insights gained 
  • Create and implement a campaign derived from the hypothesis 
  • Inspect the results to determine whether the hypothesis is valid or not 
  • Document the conclusions  
  • Use the conclusions to formulate additional questions 
  • Connect the results to the problem being solved 

To ensure optimum learning, begin the entire process with a problem statement, not a preferred solution. Should the solution fail to deliver the expected result, use the problem statement to analyze new potential solutions and repeat the process. This practice ensures focus on the problems being solved. 

What HDD has in Common With Strategies Like Design Thinking and Lean Startup 

Within large companies, innovation teams commonly use strategies like Design Thinking, Lean Startup, or Agile. Each of these strategies defines only one aspect of the product development process. However, each of these strategies does have certain key principles in common with HDD, which are highlighted below: 

  • Observe Humans to Learn – The cost of development can be kept low if solutions are designed after observing client behaviour, and then iterating. HDD reinforces the “problem-first” mentality by first observing the target audience for the problem being solved. 
  • Focus on Client Actions – This helps to prioritise which areas of the problem need to be targeted first. HDD focuses on the prioritisation of the problems being solved. 
  • Work Fast, but Keep the Blast Radius Minimal – Even though continuous delivery is a crucial tactic, it cannot come at the expense of correctness. HDD does not promote reduced standards of work, even during the experimentation phase. 
  • Minimise Waste – By focusing on the core of the problem being solved, product development teams ensure they do not waste time, money, or resources on features that clients would not use. 

Teams can refine the framework they follow as per their needs, but HDD provides them with a foundation for the best practices used across each of these popular strategies.  

Identify How Your Team Has Successfully Implemented Hypothesis-driven Development 

It is necessary to have a capable monitoring and evaluation framework set up when using an experimental approach to software development to quantify the impact of efforts. These results are then used to generate feedback for the team. The learning that is gained during HDD can be the primary measure of progress for work done. 

Ideally, an iteration of HDD is not considered complete until there is a measurable value of what was delivered, that is, data to validate the hypothesis. 

Teams can ensure they have succeeded at implementing HDD by  measuring what makes a difference.  

  • Vanity metrics  are statistics that look good on the surface but do not necessarily provide meaningful business insights. These measurements are usually superficial, easily quantifiable, and sometimes, easily manipulated. Examples could include metrics on the number of social media followers or the number of visits to a promotional advertisement for a product. These metrics do not provide insights about what led to these numbers. 

They also do not reflect how these numbers can be achieved again or how they can be improved. 

  • Actionable metrics  are the metrics that have real significance. They can provide insights that help make decisions about what can be done next to achieve specific business goals. 

Distinguishing Between Actionable and Vanity Metrics: 

Metrics that run deeper than vanity metrics are not necessarily actionable. For example, revenue measures a business's performance better than the vanity metric of website visitor counts. However, merely knowing that a statistic changed does not indicate what caused the change. If revenue increased but the reason for the increase was not identified, the business can neither repeat the actions that caused it nor amplify the improvement. If, however, revenue was measured before and after a noted change that affected a target set of users, then the business has an actionable metric at hand. It can then employ hypothesis-driven development to run experiments and understand the best way to add value. 
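The contrast can be sketched in a few lines: a vanity count on one side, and a before/after comparison scoped to the users affected by the change on the other. The event records below are invented purely for illustration.

```python
# Hypothetical event records: (user_id, saw_change, converted).
events = [
    ("u1", True, True), ("u2", True, True), ("u3", True, False),
    ("u4", False, True), ("u5", False, False), ("u6", False, False),
]

# Vanity metric: total visits. It grows with traffic but explains nothing.
total_visits = len(events)

def conversion_rate(rows):
    return sum(1 for _, _, converted in rows if converted) / len(rows)

# Actionable metric: conversion for users who saw the change versus those
# who did not -- this ties the number to a cause that can be repeated.
exposed = [e for e in events if e[1]]
control = [e for e in events if not e[1]]
lift = conversion_rate(exposed) - conversion_rate(control)
print(total_visits, round(lift, 2))  # 6 0.33
```

The visit count alone says nothing about what to do next; the scoped lift points directly at an action that can be repeated or amplified.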

Vanity metrics and actionable metrics can be measured for several actions.  

The first, and in today’s world, most significant, is website behaviour. Apart from this, teams can measure developer or team productivity, alongside bugs reported, and customer usage metrics. Knowing the count of lines of code written does not add value if the code is problematic and requires rework in the next development cycle. It also does not matter if the team is working fast but on a solution that no one will use. 

The relevant point is to work with the team to identify statistics that will provide value to the product development. The metrics should be shared with the team before, during and after any significant changes done to the product. 

Benefits of Hypothesis-driven Development

HDD is being adopted rapidly because of the several benefits it offers. 


1. HDD helps product teams prioritise the development of features.  

Teams can understand how features are connected to business goals. By tracking metrics before and after product deliveries, clear insights can be learned and acted upon for further growth. As long as teams keep end users' pain points in mind, which can only be assessed through experimentation, they will deliver incrementally improved products. 

2. HDD enables tracking the desired and real outcomes of the development process.   

Each experiment is formulated to define the expected outcomes. These can be used to understand how the team’s development strategy can be revised for maximum gain. 

3. HDD is Cost-Effective  

It is more cost-effective to test the prototype before delivering the completed product to production.  

By constantly inquiring about the customer’s needs during the development process, the team will benefit from a feedback loop about the product’s performance, during the development phase itself. This will lead to minimal rework on the released product. 

4. Discover and Prioritize the Potential Benefits of Hypothesis-Driven Experiments  

Teams can easily understand the business benefits of the changes they make, and they can use these numbers to refine the company's roadmap. 

5. Establish a Common Language for Ideation and Research  

By following a framework for designing hypotheses, teams benefit from a standard way to define potential ideas. This enables development and research teams to collaborate and communicate more transparently. 

6. Choose Problems that are Aligned with the Company’s Challenges  

By reviewing the impact of experiments, teams ensure that while working on smaller goals, they stay aligned with the company’s end-state vision. 

7. Gain from Planning and Budgeting insights  

Other than measuring the outcome of experiments, teams can also measure the number of hypotheses being tested, the cost of these experiments and the time taken for each experiment. Advanced analysis can also help teams understand their development velocity. The measurable nature of the hypothesis-driven approach makes it simple for an organization to understand how to plan, budget for and undertake a hypothesis-driven approach. 

8. Quantify and Manage Risk  

Teams can understand how many hypotheses they need to validate or invalidate to determine whether they should make further investments in their product. Stakeholders can monitor the previously opaque process of risk evaluation, by evaluating the quality and quantity of hypotheses tested. 

9. Explicitly Documented Learning  

The hypothesis-driven approach for product development has an added benefit in that it explicitly captures lessons learned about the organization’s target market, customers, competitors, and products. A hypothesis-driven approach requires the thorough documentation and capture of each hypothesis, the details of the experiment as well as the results. This data becomes an invaluable store of information for the organization. 

10. HDD leads to better Technical Debt Management  

Because the product is reworked during the development phase, there are unlikely to be surprises in customer reviews of the final product. This keeps technical debt minimal and reduces overall development costs. 

Success Story of Hypothesis-driven Development 

A great example of a company that successfully used HDD while developing its main offering is Dropbox, the popular remote file-sharing service. Today, Dropbox is used by over 45 million users. However, when it initially launched, it had several competitors in the same domain. The key point was that none of these alternatives was well made. 

Dropbox’s initial hypothesis was that if they offered a well-executed file system, then people would be willing to pay to use it. 

When they began their experimentation, they used the persona hypothesis to define their ideal target user base. The persona they devised was of a technically-aware person, who would work in the software world. 

The solution they designed was centered on being ideal for the persona they had devised. 

The Job To Be Done was sharing files between groups of people. The existing solutions at the time were manual, like file systems that needed to be backed up by an engineer. 

Dropbox’s value hypothesis was that a transparent remote file system would be adopted by several users, provided it was well-made. Dropbox needed to identify the demand for their solution. However, they did not have the resources at the time to create the product and have it validated by multiple users for the first round of experimentation of their hypothesis. They circumvented this blocker by releasing a video, which detailed their idea. The video was published online and advertised on development forums. 

The interest in their proposal was significant enough to help them validate their proposed solution design. 

Conclusion  

Hypothesis-driven development draws its strength from the fact that the real world is complex, in a state of constant flux, and sometimes confusing. Consistent hypothesis-driven experimentation helps programs make a significant and beneficial impact on a company's objectives. Using data that is strongly coupled to the company's vision ensures that focus is given to areas of significance for customers, rather than points that seem significant to a specific group of product managers.  

Remember, in the scientific world of development, data and facts will always trump intuition. 


Hypothesis-driven approach: Problem solving in the context of global health

In this course, you will learn about the hypothesis-driven approach to problem-solving. This approach originated in academic research and was later adopted in management consulting. The course consists of 4 modules that take you through a step-by-step process of solving problems in an effective and timely manner.

Course information

This course is also available in the following languages:

The hypothesis-driven approach is a problem-solving method that is necessary at WHO because the environment around us is changing rapidly. WHO needs a new way of problem-solving to process large amounts of information from different fields and deliver quick, tailored recommendations to meet the needs of Member States. The hypothesis-driven approach produces solutions quickly with continuous refinement throughout the research process.

What you'll learn

  • Define the most important questions to address.
  • Break down the question into components and develop an issue tree.
  • Develop and validate the hypothesis.
  • Synthesize findings and support recommendations by presenting evidence in a structured manner.

Who this course is for

  • This course is for everyone. Whether your position is in administrative, operations, or technical area of work, you’re sure to run into problems to solve. Problem-solving is a key skill to continue developing and refining—the hypothesis-driven approach will surely be a great addition in your toolbox!

Course contents

  • Introduction: Hypothesis-driven approach to problem solving
  • Module 1: Identify the question
  • Module 2: Develop & validate hypothesis
  • Module 3: Synthesize findings & make recommendations

Certificate requirements

  • Gain a Record of Achievement by earning at least 80% of the maximum number of points from all graded assignments.
  • Gain a Confirmation of Participation by completing at least 80% of the course material.


HDD & More from Me

Reference for ‘Hypothesis-Driven Development’ (Book)

Table of Contents

About the Book

  • Chapter 1: You, and the Business of ‘Digital’
  • Chapter 2: From Idea to Design
  • Chapter 3: From Design to Code
  • Chapter 4: From Code to Deploy
  • Chapter 5: From Release to Experimentation
  • Chapter 6: From Inference to Your Next Product Priorities
  • Chapter 7: The Economics of Code
  • Chapter 8: Application Infrastructure
  • Chapter 9: Data, Big Data, Data Science & Machine Learning
  • Chapter 10: Security
  • Chapter 11: Growth Hacking & RevOps
  • Chapter 12: You, the Hypothesis-Driven Developer

Hello! This is a reference for the book ‘Hypothesis-Driven Development: A Guide to Smarter Product Management’.


But waste is not inevitable anymore.

Hypothesis-Driven Development (HDD) is an emerging approach to digital product management for both business people and engineers. It emphasizes rigorous, continuous experimentation as a way to both minimize waste and focus teams’ creative capabilities in directions that drive growth and innovation.

The sections that follow detail the recommended practice for each chapter of the book. If you’re reading the book, I’ll just mention again that the idea is not to go through all of this practice each time you finish a chapter. Instead, what I’d recommend is finishing the book so you get the larger picture of how you might apply HDD to your work, and then considering, week to week and month to month, where you might find the most relevant opportunities for practice, using this guide.

How do you prepare yourself for a successful career in tech?

After this chapter, you will be able to:

  • Explain the key operating foundations of tech and how it might relate to a given career trajectory
  • Analyze the economic significance and performance criteria of a technical team in terms of a product pipeline
  • Apply a disciplined, focused, yet innovation-friendly view of strategy to a tech product or company


The following subsections describe the recommended practice from Chapter 1.

Describe a Business and its Product/Market fit with the Business Model Canvas

  • Tutorial on the Business Model Canvas
  • Printable Business Model Canvas
  • Digitally Editable Canvas Template on Google Docs

For more depth on the topic, check out this online course: Online Course: Facilitating with the Business Model Canvas (≈4 hours including quizzes and practice)

Charter and Focus a Team with OKRs

John Doerr’s website has a set of examples I’ve found quite sufficient for getting started with OKRs: What Matters: OKR Examples .

If you want a lot more depth, he also has a book on the topic: Measure What Matters .

Practice Agile

Online Tutorial: Agile: Just the Basics

If you prefer something more business school, here is a tech note I wrote with a colleague: Agile Development

Finally, if you want something more directed and even broader because knowing about agile is one of your top priorities right now, here is my online course: Managing with Agile .

Strategy Meets Digital

For this, I highly recommend my colleague Mike Lenox’s book on digital transformation (coming soon) or his online course ‘ Digital Transformation ‘.

Teaching a Degree Program Course or Workshop with This Material

The links below reference full syllabi, including assignment templates, for a few HDD-related classes I teach:

  • Digital Product Management
  • Software Design
  • Software Development
  • Digital Capstone
  • Hypothesis-Driven Development for Analysts [COMING SOON]

As the business lead on a tech team, what is your role and how do you do it well?

  • Explain the concept of product/market fit in economic terms
  • Analyze the work of a business or product lead in terms of a sequence of testable hypotheses that minimize waste and maximize wins
  • Do the work to make sure your team is focused on a job/problem/need/desire that actually exists for a certain identifiable buyer or user.
  • Avoid false positives and minimize waste by applying Lean Startup to testing whether your proposition is competitive with the prevailing alternatives
  • Make a habit of designing for and testing usability early and often so you minimize friction for the user
  • Identify the analytics necessary to make crisp decisions for an iterative, emergent design


The following subsections describe the recommended practice from Chapter 2.

Build Your Personal Innovation Portfolio

This tutorial is a good place to start: Creating Your Personal Innovation Portfolio . It offers example entries ( like this one ) and templates in Google Slides you can use to jump start the process.

The page offers examples and starter templates for project types like customer discovery, application design, application development, and data science.

Focus and Test a Right Problem (Persona & JTBD) Hypothesis with Subject Interviews

This template is a good place to start: HDD Template/Personas, JTBD, and Discoveries . A good batch of steps is to: a) draft personas & JTBD, b) draft an interview guide and conduct interviews, c) revise your draft personas and JTBD.

‘Day in the Life’ is another related tool, particularly useful for bringing your discovery work to life for colleagues who weren’t out there with you. This tutorial offers notes, examples, and a template: Day in the Life Tutorial .

Focus And Test A Demand Hypothesis With Lean Startup

This tutorial is an excellent place to start: Tutorial on Proposition Testing with Lean Startup . It links to a template, which is a section in the same HDD Template I mentioned above (in the section on Right Problem Hypothesis).

Focus and Test a Usability Hypothesis with User Stories

This tutorial is an excellent place to start: Tutorial on Anchoring a Usability Hypothesis with User Stories . Here again, the HDD Template above also offers a place to organize and integrate this material with your work on ‘right problem’ and demand/proposition testing.

Define and Apply Visual Consistency

This tutorial offers a step by step view on how to do this: Tutorial on Creating Visual Consistency with a Style Guide . It also links to a set of example guides (ex: COWAN+ ) whose format you can use as a starting point.
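If a full style guide feels heavy, even a small set of “design tokens” gets you most of the consistency benefit. A hypothetical sketch (all names and values here are invented for illustration):

```javascript
// A tiny style guide captured as design tokens: views look values up
// here rather than hard-coding them, so a change propagates everywhere.
const styleGuide = {
  color: {
    primary: "#1a73e8", // calls to action
    text: "#202124",    // body copy
    surface: "#ffffff", // backgrounds
  },
  font: {
    heading: "Georgia, serif",
    body: "Helvetica, Arial, sans-serif",
  },
  spacing: (n) => `${n * 8}px`, // an 8px spacing scale
};

console.log(styleGuide.spacing(2)); // prints 16px
```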

Learn More about the Practice of Design and Design Thinking

For more on the core practice of product design, you may be interested in Donald Norman’s book The Design of Everyday Things , which is heavily cited in my HDD book.

For more on the practice of design thinking, applying the lens of product design to a wider set of business problems, I highly recommend The Designing for Growth Field Book .

What does it take to go from design to code? Do you need to learn to code? If so, what does that mean? When and why does it matter?

  • Unpack digital infrastructure in terms of the model-view-controller (MVC) framework
  • Identify and practice the key steps to creating a View
  • Analyze and decompose a user experience in terms of its Controllers or algorithms
  • Structure real world entities in terms of an implementation-friendly Model


The following subsections describe the recommended practice from Chapter 3. These subsections lean pretty heavily on the MVC (model-view-controller) framework. For a review of that, check out this 5-minute video ( MVC Tutorial ).

Create a Style Guide to Deliver Consistent Visceral Reactions to Your Views

If you already did this in the recommended practice for Chapter 2, you’re probably in good shape. However, if you’re getting ready to go from design to code on something, this is a useful thing to do now- even if it’s just a text file with some notes on which colors and typefaces you’re going to use. The tutorial here will help you get started: Tutorial on Creating Visual Consistency with a Style Guide .

Create Views with HTML & CSS

The #1 most important thing for this practice and the practice in the sub-sections that follow has nothing to do with coding itself. The most important thing is to have an idea that you’ve thought through. This could be a whole application or even just a simple interaction you’ve noticed and would like to try improving. The reason this is so, so important is that without some kind of focal point for what you want to have happen, you’ll likely get lost in all the hooks and dials and technical minutiae of the various coding facilities. This is a bad thing. Even professional developers don’t go through and memorize everything a given programming language can do. They figure out what they want to have happen first, and then figure out how to make it happen with the coding facility in question.

Yes, there are some fundamentals for each programming language and there’s an experience curve across which going from design to code gets easier. The various options below (online courses, cases, and tutorials) all focus on introducing fundamentals and then quickly transitioning you to a specific problem/task for applied practice. You’ll need to use references (aka Google search) to figure out individual items, like ‘How do I make a rounded border with CSS?’. However, what you should not do is first try to memorize all the hundreds of things that HTML or CSS can do. All that said, here are a few ways to get started with creating Views.

If you like to dive in and just start fiddling, you may want to start with the various case challenges, which you can find here: Coding Cases (Design to View) . The first three cases (From Prototype to HTML, Making HTML Manageable, and Debugging HTML & CSS) will give you a solid footing in going from design to code in this area. From there, you can use the same development environment to start working on your own project.

If you’d like something more structured and stepwise, I’ve created a set of online courses around those same cases and some of the related reading. The first one, Coding for Designers, Managers, and Entrepreneurs I , will take you through the cases above.

Finally, if you’re interested in teaching a class or even just maybe a peer support group (kind of like a book club for product people- but for coding!), then you may want to check out the syllabus I use for the degree program course in this area that I teach: Software Development . The first five sessions deal with creating Views with HTML & CSS.
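As a tiny taste of the destination, here’s a minimal, hypothetical sketch of a View: a function that renders a profile card as an HTML string, where the inline CSS answers exactly the kind of one-off question (“how do I make a rounded border?” → border-radius) you’d look up as you go. The markup and names are invented for illustration.

```javascript
// Render a small View from data. Keeping it a pure function of its
// input makes it easy to reason about before any browser is involved.
function renderProfileCard(user) {
  return `
    <div class="card" style="border: 1px solid #ccc; border-radius: 8px; padding: 16px;">
      <h2>${user.name}</h2>
      <p>${user.role}</p>
    </div>`;
}

const html = renderProfileCard({ name: "Ada", role: "Product Manager" });
console.log(html.includes("border-radius")); // prints true
```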

Creating View Interactions with Javascript

Somewhere between the View and the Controller (words are faulty instruments), there’s the ‘front controller’, basically logic that creates interactive Views- views that afford more dynamic responses to user input for example.
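As a minimal sketch of that front-controller idea (the element ids below are hypothetical), note how the actual logic is a pure function and the browser wiring is just a thin layer around it:

```javascript
// Pure logic: given the current state, what should the button say?
function nextLabel(expanded) {
  return expanded ? "Show less" : "Show more";
}

// Thin wiring to the View, guarded so the logic above stays testable
// outside a browser (e.g. in Node).
if (typeof document !== "undefined") {
  const button = document.getElementById("toggle");
  const details = document.getElementById("details");
  let expanded = false;
  button.addEventListener("click", () => {
    expanded = !expanded;
    details.hidden = !expanded;
    button.textContent = nextLabel(expanded);
  });
}
```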

If you want to dive in and start fiddling, check out this case: Making Stuff Happen with Javascript , and then invest some time on refining your skills with analytical debugging with this case: Debugging Javascript .

If you want something more step by step, check out the second course in the series above ( Coding for Designers, Managers, and Entrepreneurs II ).

Finally, if you’re looking to teach or facilitate a class/study group, sessions 7 & 8 from the Software Development syllabus in the section above focus on this aspect of going from design to code.

Creating Controllers with Javascript

Here you’ll transition more squarely to going from design to code with Controllers or ‘algorithms’.

If you want to dive in and start fiddling, this case deals with process automation and data transformation (super fun): Automating Your Gruntwork with Javascript .
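As a hedged sketch of the kind of gruntwork-automation Controller that case deals with (the records and rules here are invented), the heart of it is usually a pure data transformation, like cleaning and summarizing some raw rows:

```javascript
// Hypothetical raw signup records with messy input.
const rawSignups = [
  { email: " ADA@EXAMPLE.COM ", plan: "pro" },
  { email: "grace@example.com", plan: "free" },
  { email: "ada@example.com", plan: "pro" }, // duplicate after cleaning
];

// Normalize emails, drop duplicates, count signups per plan.
function summarizeByPlan(rows) {
  const seen = new Set();
  const counts = {};
  for (const row of rows) {
    const email = row.email.trim().toLowerCase();
    if (seen.has(email)) continue; // dedupe on normalized email
    seen.add(email);
    counts[row.plan] = (counts[row.plan] || 0) + 1;
  }
  return counts;
}

console.log(summarizeByPlan(rawSignups)); // prints { pro: 1, free: 1 }
```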

Course II in the online course series (see above) offers a more step by step approach and session 9 from the syllabus above is where you’d do this case.

Mapping Data to Models and Operationalizing Them

Finally, putting the ‘M’ in MVC, we have a case on going from design to code with a data model. Rather than diving straight away into all the mess and complexity of databases, the material here uses a modern NoSQL approach by way of Google’s Firebase ‘backend as a service’.
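To illustrate the ‘M’ without the SDK itself, here’s a hypothetical sketch in plain JavaScript of mapping a real-world entity onto a NoSQL-style document. The field names and defaults are illustrative, not Firebase’s.

```javascript
// Map untrusted input for a "user" onto the document shape the
// application will actually store and read.
function toUserDocument(input) {
  if (!input.email || !input.email.includes("@")) {
    throw new Error("email is required");
  }
  const email = input.email.toLowerCase();
  return {
    email,
    displayName: input.displayName || email.split("@")[0],
    roles: input.roles || ["member"], // sensible default for new users
  };
}

const doc = toUserDocument({ email: "Ada@Example.com" });
console.log(doc.displayName); // prints ada
```

The payoff of a function like this is that validation and defaults live in one place, no matter which screen or endpoint creates the user.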

If you want to dive in and start fiddling, the case is: Creating & Managing Users with Google Firebase .

Course III in the course series above covers this case and the related concepts, as does session 10 in the syllabus (see above).

How do you rapidly get new features in front of customers without stressing your team? What investments does this require and how do you evaluate them?

  • Explain the major catalysts for change and success criteria for a modern, continuous product pipeline
  • Unpack the steps from test to deploy for a given application
  • Explain the role of version control in a product pipeline
  • Analyze and evaluate investments in test capabilities, automated or otherwise
  • Analyze and evaluate investments in deploy capabilities


The following subsections describe the recommended practice from Chapter 4.

Sketch Your Team’s Product Pipeline

The basic idea here is that you can’t improve a pipeline that you don’t (as a team) understand. The objective of this workshop/meeting is primarily just to sketch out an answer to ‘What is?’. Most likely, a set of ideas about where the roughest spots are and what changes you might want to test will arise from that naturally. Here are some notes on how you might do that with your team.

Who should be there? Ideally, everyone on your team and, if the team’s not dedicated, then also all colleagues you regularly work with across the pipeline, including, say- test, sysadmin/ops, and analytics. Is that absolutely necessary? No, not to get started. It’s better to start texturing out the pipeline with a smaller group than to delay. But ultimately you’ll be best off with a fuller group for identifying, prioritizing, and testing changes.

What’s the agenda? Have a whiteboard (physical or virtual) and bracket the pipeline as we’ve done here where it starts with ‘idea’ and finishes with ‘released software’. You may want to use one of the pipeline diagrams from the book/this site to introduce the general idea, but the terms your team or company uses may be different and that’s OK- the idea here is just to describe ‘What is?’.

From there, the idea is to fill in what happens between idea and released software. I highly recommend coming in with 1-2 specific examples, features that you’ve already released, and using those to anchor the discussion and get all the way through the process. The current process may have forks, etc.- sometimes you do one thing, sometimes the other. I would get those both up there, as messy as it may be- the idea here is to get to what is, and then, in small, success-based batches, test your way to what you all think should be.

Then what? Getting a bunch of ideas on the board about which parts of the pipeline you want to improve is usually not too hard. You can simply write them on the board, take notes, or have individuals write ideas on post-its (physical or virtual) and post them on the wall (again, physical or virtual). Converging on a prioritized set of choices is usually a little harder. Practices like dot voting can make that a little easier and more collaborative. The big things to focus on are that you’ve got to start somewhere, that each of these ideas has a specific, observable (ideally measurable) outcome that you’re testing it against, and that you’ll review how those ideas are doing in your next team retrospective, after you’ve had a chance to test them.

Concept and Prioritize Testable Changes to Your Product Pipeline

(see notes on the subsection above)

Draft an Agile Team Charter

The fundamental idea here is to have an entry point that answers the questions “What is this team working on and why?” as well as “How is the team currently collaborating?”. Does that mean everything that answers these questions needs to reside in the team charter itself? No. For example, you may have seen the HDD Template in some of the sections above, and it has some overlap with the team charter template. That’s fine. The core thing with the charter is to have an entry point. So, for example, if your team is happily keeping notes on its personas, JTBD, customer interviews, or other evidence someplace else, just link to that from your charter. If you all keep meeting notes from your agile retrospectives someplace else, same idea- just link to them from your charter.

How do you know if it’s working? I’d look at two things. First, how often do various members of the team visit it? If it’s a Google Doc you have ready access to this, for example. Second, how well does it help onboarding new employees or team members?

All that said, this tutorial is a good place to start: Tutorial on Agile Team Charters . If you just want to get started, here is the template (just make yourself a copy): Team Charter Template .

Explain the Fundamentals of DevOps and Continuous Delivery

If you’re all in on the DORA metrics and generally like learning by book, Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim is a great place to go next.

If you might prefer online courses, well, then I can’t help but recommend my own masterwork: Continuous Delivery & DevOps (Coursera) .
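To make two of the DORA metrics from Accelerate concrete, here’s a hedged sketch computing deployment frequency and lead time for changes (commit → deploy) from a hypothetical deploy log; the numbers are invented.

```javascript
// Hypothetical deploy log; times are milliseconds since some epoch.
const deploys = [
  { committedAt: 0, deployedAt: 26 * 3600e3 },
  { committedAt: 10 * 3600e3, deployedAt: 30 * 3600e3 },
  { committedAt: 40 * 3600e3, deployedAt: 50 * 3600e3 },
];

// Deployment frequency: deploys per day over the observed window.
const periodDays = 7;
const deploysPerDay = deploys.length / periodDays;

// Lead time for changes: commit-to-deploy, averaged, in hours.
const leadTimesHours = deploys.map(
  (d) => (d.deployedAt - d.committedAt) / 3600e3
);
const avgLeadTime =
  leadTimesHours.reduce((a, b) => a + b, 0) / leadTimesHours.length;

console.log(deploysPerDay.toFixed(2), avgLeadTime.toFixed(1)); // prints 0.43 18.7
```

The other two DORA metrics (change failure rate, time to restore) follow the same pattern: instrument the pipeline, then compute from the log.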

Learn How to Manage Work with Version Control

On this one in particular, I would not invest a lot of time unless either a) you’ll be interacting with version control at least once in a while or b) you just want to try it out and you’re OK forgetting most of the particulars if you don’t use it for, say, six months.

If you’re ready to give in, GitHub itself (the leading host for Git repositories) offers a pretty good written tutorial: Hello World- Github .

If you want something more step by step and explanatory, Codecademy offers a good (free) online tutorial with hands-on practice: Codecademy Micro Course on Github .

Write Some Tests to See How It Works

The Coursera course I mentioned above (Continuous Delivery & DevOps) steps through specific test development across unit, integration, and system tests.

If you want something more immediately hands-on, Codecademy offers tutorials here as well (though this one is paid only)- for example, this one is on unit testing Javascript: Unit Testing with JS .

Finally, it’s worth noting here that across all the different programming languages and the different layers of the test pyramid, there is a lot of stuff out there. If you’re curious about how testing happens in your current work, I would find out what toolchain your team uses and find resources to match- there will be plenty, you just have to know what you’re looking for. If you want a more general introduction that’s more in-depth than what you saw in Chapter 4, I would consider the online course:  Continuous Delivery & DevOps (Coursera) .

Understand the Managerial Side of How a Team gets to Healthier Pipelines

Full disclosure: this is currently a placeholder for something I’m working on with some collaborators. I think it’s going to be useful if a general management perspective is what you’re after, but it’s not quite ready. That said, I think the items above are an excellent way to start on this.

How do you make a habit out of experimentation? How do you make it useful enough for the team and easy enough to consistently do as you release?

  • Explain the fundamentals of experimentation in digital
  • Pair important questions with rigorous but practical experiment designs
  • Make a habit of pairing focused, relevant analytical questions with your agile user stories for purposeful analytics


The following subsections describe the recommended practice from Chapter 5.

Practice and Apply Innovation Analytics to Your work

As I mentioned in the book, my #1 recommendation here is to go deep on using experiments in your current work- whatever analytics suite you have, etc., it doesn’t really matter that much. Relevance is the big thing. I’d start by figuring out what questions you want to answer and how you might answer them (tools aside), and then, figuring it out yourself or working with colleagues, go get yourself some answers!

That said, if you want more depth on general practice, I recommend my own super great online course: Agile Analytics . Am I biased? Yes. But I did build the course specifically around the various focal points you’ve seen in the book, so, for my money, it’s a good place to start.

That said, there are a lot of really good items out there. Stefan Thomke’s book Experimentation Works is an excellent intro to how companies are changing the way they operate with experiments. Trustworthy Online Controlled Experiments by Kohavi et al. is a more clinical take and widely read.

Design & Conduct an Experiment

#1 tip for getting started with experiments? Find something that interests you and an idea to make it better. Then test it. This section of the HDD template: Experiment Design is a good place to start.

The only other thing worth mentioning is that I would make sure you have a point of view on what general type of hypothesis you want to test. If it’s a motivation/demand hypothesis, look for relevant MVP patterns and make sure you’re not confusing it with a hypothesis about usability. Likewise, if it’s a hypothesis about usability, make sure you’re testing the subject’s ability to achieve a goal assuming motivation. Finally, if it’s a simple test in the wild (released software) of user action vs. inaction, that’s also fine- just make sure that’s clear in your hypothesis formulation.
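Before any question of statistical significance, an experiment readout is just bookkeeping: conversion per group and the relative lift between them. A minimal sketch with made-up numbers:

```javascript
// Conversion rate for one arm of the experiment.
function conversionRate(group) {
  return group.converted / group.visitors;
}

// Hypothetical results: control vs. variant.
const control = { visitors: 1000, converted: 50 }; // 5.0%
const variant = { visitors: 1000, converted: 65 }; // 6.5%

// Relative lift of the variant over the control.
const lift =
  (conversionRate(variant) - conversionRate(control)) /
  conversionRate(control);

console.log(`${(lift * 100).toFixed(0)}% relative lift`); // prints 30% relative lift
```

Whether a 30% observed lift is believable at these sample sizes is exactly the question the significance testing in this chapter answers.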

Understand Frequentist vs. Bayesian Statistical Reasoning

Beyond what we’ve covered in the chapter, Google has done some excellent work in support of their Optimize tool. For more depth, this post is a good place to start: General Methodology . For frequentist statistics, just about any leading stats primer is fine (and frequentist treatments are most of what you’ll find). For more on Bayesian inference, this is good as a follow-on from the Google piece: Bayesian Inference .
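As a minimal sketch of the Bayesian side (a standard Beta-Bernoulli update for a conversion rate, with made-up numbers; this is the textbook version, not Google’s exact methodology): start with a Beta(1, 1) prior, add observed successes to alpha and failures to beta, and the posterior mean is alpha / (alpha + beta).

```javascript
// Conjugate update: Beta prior + Bernoulli observations → Beta posterior.
function updateBeta(prior, successes, failures) {
  return { alpha: prior.alpha + successes, beta: prior.beta + failures };
}

const prior = { alpha: 1, beta: 1 }; // uniform: no opinion yet
const posterior = updateBeta(prior, 65, 935); // 65 conversions in 1000 visits

const mean = posterior.alpha / (posterior.alpha + posterior.beta);
console.log(mean.toFixed(4)); // prints 0.0659
```

The appeal over a frequentist readout is that the posterior is a full distribution over the conversion rate, so you can ask directly “what’s the probability the variant beats the control?”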

How do you know if you got it right, or you need to rinse and repeat? How do you instrument observation and set thresholds to make the hard but necessary decisions about what to prioritize?

  • Start from a set of objectives and key results to charter a team’s ‘true north’ success criteria
  • Work backward from tangible goals in your descriptive analytics
  • Think of your experiment designs as part of a longer game of adaptively getting to product/market fit


The following subsections describe the recommended practice from Chapter 6.

Sketch a CX Map

Here again, the main thing is just to pick something you’re familiar with (a specific value proposition relative to a specific job-to-be-done) and start sketching. This tutorial offers a quick reference: CX Mapping , and you can start sketching in this section ( CX Map ) of the HDD template.

Take Your Metrics and Make them Better

My general preference is to start with the CX Map, but what if you already have a measurement infrastructure you generally like? The CX Map certainly is not the only way to go. This subsection of that tutorial offers a more general point of view on how to assess and (where applicable) improve your metrics from an HDD perspective: Making Better Metrics .
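To connect the CX map to metrics, here’s a hypothetical sketch: each stage of the funnel gets a count, and the weakest stage-to-stage conversion points at your next experiment. Stage names and numbers are invented for illustration.

```javascript
// A CX map flattened into a funnel with per-stage counts.
const funnel = [
  { stage: "Visited landing page", users: 4000 },
  { stage: "Signed up", users: 800 },
  { stage: "Completed onboarding", users: 600 },
  { stage: "Active in week 2", users: 240 },
];

// Conversion rate between each adjacent pair of stages.
function stageConversions(stages) {
  return stages.slice(1).map((s, i) => ({
    from: stages[i].stage,
    to: s.stage,
    rate: s.users / stages[i].users,
  }));
}

// The biggest drop-off is the best candidate for the next experiment.
const worst = stageConversions(funnel).reduce((a, b) =>
  a.rate < b.rate ? a : b
);
console.log(worst.to, worst.rate); // prints Signed up 0.2
```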

What drives the cost of releasing a successful feature? And how does that relate to the technology choices a team makes when building an application?

  • Explain the major functional and economic dimensions between programming languages, libraries, and frameworks
  • Identify and analyze the functional and economic footprint of application frameworks and development
  • Analyze the economics of monoliths vs. microservices relative to an application’s key economic drivers


The following subsections describe the recommended practice from Chapter 7.

Learn about Your Product’s Stack or Stacks

Find the right person and start with a simple question like “What’s our tech stack?” or “What’s our tech stack for [specific application]?” More questions may not be necessary, but you can always follow that up with “How is that stack for you? How did you decide on it?” Be sure to express plain curiosity and you’ll almost certainly learn all you want and more!

Where do applications run? Given the pain points of deploying applications and keeping them running, what is the marketplace for approaches to do this? When, why, and how are those economical?

  • Explain the fundamentals of the stack where your application runs and its key outside interfaces
  • Unpack the key cost drivers for operating your application and evaluate alternative solutions like data centers vs. cloud vs. platforms-as-a-service
  • Explain the relationship between data exchanged through APIs or data-interchange formats vs. databases


The following subsections describe the recommended practice from Chapter 8.

Learn More about How Technology, Economics, and Design Interact in Application Infrastructure

If you like to learn through case studies, as we do at UVA Darden, this is a case study on rescuing a large-scale IT project: Saving Griffin . That page also has a teaching note for the case and a tech note on agile- the tech note is a subset of the material in this book (not surprisingly), but if you’re doing an introduction for others you might find it useful.

Get Hands-On Practice Coding with APIs

This case will give you hands-on experience interacting with the API for Google Firebase, a very substantial backend-as-a-service platform: Creating & Managing Users with Google Firebase . It requires some familiarity with Javascript- but you can get that over the course of a few cases which I mentioned up in the materials for Chapter 3.
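As a hedged sketch of the general shape of this kind of API work (the URL and field names below are illustrative, not the real Firebase endpoint), the pattern is: build a JSON payload, POST it, and handle the JSON response.

```javascript
// Pure part: build the JSON request body. Easy to reason about offline.
function buildSignupPayload(email, password) {
  return JSON.stringify({ email, password, returnSecureToken: true });
}

// I/O part: POST the payload to a hypothetical signup endpoint and
// return the parsed JSON response.
async function signUp(email, password) {
  const res = await fetch("https://api.example.com/v1/accounts:signUp", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildSignupPayload(email, password),
  });
  if (!res.ok) throw new Error(`signup failed: ${res.status}`);
  return res.json(); // e.g. a user id and session token
}
```

`signUp` isn’t invoked here since it needs a live endpoint; splitting out the payload builder is what lets you test the interesting part without one.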

Deploy a Data Science Model to the Web

While it requires some familiarity with Javascript (for the client application), Python, and building predictive models with Python, it is amazingly easy to set up your own functioning application with multiple services (microservices, if you will). This ‘cookbook’ is a working item in Google Docs and will help you get started: Notes on Deploying Your Model with Flask . Finally, it’s worth noting that Flask itself is quite popular and well liked and you’ll find lots of other great resources with a few Google searches.

What can you do with a lot of data? How do you decide when, where, and how to invest? How do you integrate such a capability with your general product program to maximize its relevance?

  • Analyze how and where to store your data to maximize your economics
  • Frame your customer discovery in terms of ground truth and link it to dependent variables for actionability
  • Identify and manage data science charters across descriptive, diagnostic, predictive, and prescriptive analytics
  • Understand the data science process, from data collection through preparation, exploration, modeling, and communication
  • Identify key data science skill sets, including methods and statistical literacy, mechanics & algorithms, data intuition, and communication


The following subsections describe the recommended practice from Chapter 9.

Charter a Big Data Program with the Data Innovation Canvas

Data may be the new oil in a certain sense, but not all data is valuable. The Data Innovation Canvas is a thorough but lightweight tool for thinking through which data might end up valuable for you, both in terms of data you have and in terms of data you can create. This template on Google Slides (which you can download for PowerPoint) is a good way to get started: Data Innovation Canvas .

Get Hands-On Practice with Data

If SQL literacy is a big thing for you on the job, I recommend this relatively short (and free) tutorial from Codecademy: Learn SQL . The main thing is not to spend too much time on details ahead of meaningful, active practice either on the job or on a specific side project with some relevance to you.

If you want a more expansive introduction to data science, I like Jose Portilla’s course on Udemy: Python for Data Science and Machine Learning Bootcamp . There are plenty of resources, though I would avoid those with the language R and find ones that use Python, which is more generally applicable and ascendant. Here too, I’d look to go through the course relatively quickly and not worry about everything with an eye to getting started with active practice on a specific project.

Where’s the insecurity in digital? It’s pretty much everywhere, but how do you identify, prioritize, and manage risk?

  • Frame vulnerabilities across human and machine attack surfaces
  • Understand several of the most prevalent attack vectors and publicly available data sources to identify them
  • Identify how to link security with the rest of your product pipeline

Please Note: This chapter is terrifying and you should only read it if you want to know the truth.


The following subsections describe the recommended practice from Chapter 10.

Take Your Security Team Out for a Drink

Just do it. They need it. You’ll learn everything you wanted to know and more and you’ll never look at your computer the same way.

How has the move from physical products and print media to digital changed the way we test what will engage a customer?

  • Charter the activities of a digital marketing/promotion/growth hacking team based on your understanding of product/market fit
  • Understand the relationship between and strengths of organic and paid channels online
  • Participate in the design, execution, and evaluation of growth experiments


The following subsections describe the recommended practice from Chapter 11.

Sketch a CX Through the Hooked Framework

In my experience, the best thing is just to put pen to paper for a product you know. And don’t assume that the Hook doesn’t apply to enterprise/B2B products. Here’s a tutorial with some examples: Hook Framework .

Frame Your Point of View on Growth using the Growth Hacking Canvas

A lot of marketing is done with the age-old justification that “we’ve always done it this way” with the supporting evidence that things aren’t that bad (yet). This isn’t necessarily the best way to reduce waste and drive new, organic growth, though. Here’s a tutorial that steps through the Canvas: Tutorial on Scaling P/M Fit with the Growth Hacking Canvas . Or, if you just want to get started, here’s a template on Google Slides: Template- Growth Hacking Canvas .

Think through and Describe Your Brand Personality with a Moodboard

This is a great way to both think through your brand personality and connect it with a relevant visual execution: Brand Lattice (Moodboard Tool) .

Facilitate Consistency with a Style Guide

OK, OK, this is my third mention, but if you don’t have one, this is definitely something I’d create (see above).

What are your next steps?

  • Prioritize and charter your own professional development based on your particular focus and current activities
  • Facilitate HDD-friendly team charters to help your team improve its practice of agile through a focus on testable outcomes
  • Facilitate clear, testable points of view on business model design to help align the work of teams
  • For larger companies, charter, focus, and steward an innovation program with testable charters and governance


The following subsections describe the recommended practice from Chapter 12.

Summary Reference for the Book

(this page is the summary reference)

Create a Personal Innovation Portfolio (Recap)

This is a great way to both focus and showcase your practice of HDD: see above on Innovation Portfolio .

Sketch a Business Model Design (Recap)

If you want to make the product or line of business you’re working on easier to align with, this is a great place to start. See above on the Business Model Canvas .

Draft an Agile Team Charter (Recap)

If you want to facilitate aligned, autonomous, hypothesis-driven executions within a team, this is a great place to start. See above on Agile Team Charters .

Charter a Company Innovation Strategy with the Corporate Innovation Canvas

If you’re at a larger firm with multiple lines of business and outside investments, the Corporate Innovation Canvas is a quick and easy way to think about how all that is cohering (or not!), where you’d like to focus, and how you’ll evaluate success: Tutorial on Innovation Strategy with the Corporate Innovation Canvas .

Copyright © 2022 Alex Cowan · All rights reserved.



Hypothesis Driven Development A Complete Guide - 2020 Edition Paperback – March 8, 2021

Purchase options and add-ons.

Are events managed to resolution? Do you verify the acceptability of software used in product development? What research opportunities exist here? Is the Hypothesis-Driven Development documentation thorough? What are the tasks and definitions?

Defining, designing, creating, and implementing a process to solve a challenge or meet an objective is the most valuable role… In EVERY group, company, organization and department.

Unless you are talking about a one-time, single-use project, there should be a process. Whether that process is managed and implemented by humans, AI, or a combination of the two, it needs to be designed by someone with a complex enough perspective to ask the right questions: someone who can step back and say, 'What are we really trying to accomplish here? And is there a different way to look at it?'

This Self-Assessment empowers people to do just that - whether their title is entrepreneur, manager, consultant, (Vice-)President, or CxO - they are the people who rule the future. They are the people who ask the right questions to make Hypothesis Driven Development investments work better.

This Hypothesis Driven Development All-Inclusive Self-Assessment enables You to be that person.

All the tools you need for an in-depth Hypothesis Driven Development Self-Assessment. Featuring 2220 new and updated case-based questions, organized into seven core areas of process design, this Self-Assessment will help you identify areas in which Hypothesis Driven Development improvements can be made.

In using the questions you will be better able to:

- diagnose Hypothesis Driven Development projects, initiatives, organizations, businesses and processes using accepted diagnostic standards and practices

- implement evidence-based best practice strategies aligned with overall goals

- integrate recent advances in Hypothesis Driven Development and process design strategies into practice according to best practice guidelines

Using a Self-Assessment tool known as the Hypothesis Driven Development Scorecard, you will develop a clear picture of which Hypothesis Driven Development areas need attention.

Your purchase includes access details for the Hypothesis Driven Development self-assessment dashboard download, which gives you a dynamically prioritized, project-ready tool and shows your organization exactly what to do next. You will receive the following contents with new and updated specific criteria:

- The latest quick edition of the book in PDF

- The latest complete edition of the book in PDF, which criteria correspond to the criteria in...

- The Self-Assessment Excel Dashboard

- Example pre-filled Self-Assessment Excel Dashboard to get familiar with results generation

- In-depth and specific Hypothesis Driven Development Checklists

- Project management checklists and templates to assist with implementation

INCLUDES LIFETIME SELF ASSESSMENT UPDATES

Every self assessment comes with Lifetime Updates and Lifetime Free Updated Books. Lifetime Updates is an industry-first feature which allows you to receive verified self assessment updates, ensuring you always have the most accurate information at your fingertips.


Product details

  • Publisher ‏ : ‎ 5STARCooks (March 8, 2021)
  • Language ‏ : ‎ English
  • Paperback ‏ : ‎ 315 pages
  • ISBN-10 ‏ : ‎ 1867306972
  • ISBN-13 ‏ : ‎ 978-1867306979
  • Item Weight ‏ : ‎ 14.9 ounces
  • Dimensions ‏ : ‎ 6 x 0.79 x 9 inches


How to Generate and Validate Product Hypotheses


Every product owner knows that it takes effort to build something that'll cater to user needs. You'll have to make many tough calls if you wish to grow the company and evolve the product so it delivers more value. But how do you decide what to change in the product, your marketing strategy, or the overall direction to succeed? And how do you make a product that truly resonates with your target audience?

There are many unknowns in business, so many fundamental decisions start from a simple "what if?". But they can't be based on guesses, as you need some proof to fill in the blanks reasonably.

Because there's no universal recipe for successfully building a product, teams collect data, do research, study the dynamics, and generate hypotheses according to the given facts. They then take corresponding actions to find out whether they were right or wrong, make conclusions, and most likely restart the process again.

On this page, we thoroughly inspect product hypotheses. We'll go over what they are, how to create hypothesis statements and validate them, and what goes after this step.

What Is a Hypothesis in Product Management?

A hypothesis in product development and product management is a statement or assumption about the product, planned feature, market, or customer (e.g., their needs, behavior, or expectations) that you can put to the test, evaluate, and base your further decisions on. This may, for instance, concern upcoming product changes as well as the impact they can have.

A hypothesis implies that there is limited knowledge. Hence, the teams need to undergo testing activities to validate their ideas and confirm whether they are true or false.

What Is a Product Hypothesis?

Hypotheses guide the product development process and may point at important findings to help build a better product that'll serve user needs. In essence, teams create hypothesis statements in an attempt to improve the offering, boost engagement, increase revenue, find product-market fit quicker, or for other business-related reasons.

It's sort of like trial-and-error experimentation, yet it is data-driven and should be unbiased. This means that teams don't make assumptions out of the blue. Instead, they turn to collected data, market research, and factual information, which helps avoid completely missing the mark. The obtained results are then carefully analyzed and may influence decision-making.

Such experiments backed by data and analysis are an integral aspect of successful product development and allow startups or businesses to dodge costly startup mistakes.

When do teams create hypothesis statements and validate them? To some extent, hypothesis testing is an ongoing process to work on constantly. It may occur during various product development life cycle stages, from early phases like initiation to late ones like scaling.

In any event, the key here is learning how to generate hypothesis statements and validate them effectively. We'll go over this in more detail later on.

Idea vs. Hypothesis Compared

You might be wondering whether ideas and hypotheses are the same thing. Well, there are a few distinctions.

What's the difference between an idea and a hypothesis?

An idea is simply a suggested proposal. Say, a teammate comes up with something you can bring to life during a brainstorming session or pitches in a suggestion like "How about we shorten the checkout process?". You can jot down such ideas and then consider working on them if they'll truly make a difference and improve the product, strategy, or result in other business benefits. Ideas may thus be used as the hypothesis foundation when you decide to prove a concept.

A hypothesis is the next step, when an idea gets wrapped with specifics to become an assumption that may be tested. As such, you can refine the idea by adding details to it. The previously mentioned idea can be worded into a product hypothesis statement like: "The cart abandonment rate is high, and many users flee at checkout. But if we shorten the checkout process by cutting down the number of steps to only two and get rid of four excessive fields, we'll simplify the user journey, boost satisfaction, and may get up to 15% more completed orders".

A hypothesis is something you can test in an attempt to reach a certain goal. Testing isn't obligatory in this scenario, of course, but the idea may be tested if you weigh the pros and cons and decide that the required effort is worth a try. We'll explain how to create hypothesis statements next.


How to Generate a Hypothesis for a Product

The last thing those developing a product want is to invest time and effort into something that won't bring visible results, falls short of customer expectations, or doesn't live up to user needs. Therefore, to increase the chances of achieving a successful outcome and product-led growth, teams may need to revisit their product development approach by optimizing one of the starting points of the process: learning to make reasonable product hypotheses.

A structured procedure can assist you during stages such as the discovery phase and raise the odds of reaching your product goals and setting your business up for success. So what does the process look like?

How hypothesis generation and validation works

  • It all starts with identifying an existing problem. Is there a product area that's experiencing a decline, a worrying trend, or a market gap? Are users often complaining about something in their feedback? Or is there something you're willing to change (say, if you aim to get more profit, increase engagement, optimize a process, expand to a new market, or reach your OKRs and KPIs faster)?
  • Teams then need to work on formulating a hypothesis. They put the statement into short, concise wording that describes what they expect to achieve. Importantly, it has to be relevant, actionable, backed by data, and free of generalizations.
  • Next, they have to test the hypothesis by running experiments to validate it (for instance, via A/B or multivariate testing, prototyping, feedback collection, or other ways).
  • Then, the obtained results of the test must be analyzed. Did one element or page version outperform the other? Depending on what you're testing, you can look into various product performance metrics (such as click rate, bounce rate, or the number of sign-ups) to assess whether your prediction was correct.
  • Finally, the teams can make conclusions that could lead to data-driven decisions. For example, they can make corresponding changes or roll back a step.
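
These five steps can be sketched as a simple loop. A minimal Python sketch, where `run_experiment` and the success threshold are placeholders for whatever test and criterion you choose:

```python
def hypothesis_loop(hypothesis, run_experiment, success_threshold):
    """Minimal sketch of the formulate -> test -> analyze -> conclude cycle.

    `run_experiment` is assumed to return an observed metric value;
    the names and the single-threshold check are illustrative only.
    """
    observed = run_experiment(hypothesis)        # 3. test the hypothesis
    validated = observed >= success_threshold    # 4. analyze the result
    if validated:                                # 5. draw conclusions
        decision = "ship the change"
    else:
        decision = "revise the hypothesis or roll back"
    return {"observed": observed, "validated": validated, "decision": decision}

# Example: we expected at least a 15% lift in completed orders and measured 17%.
result = hypothesis_loop("shorter checkout", lambda h: 0.17, 0.15)
```

In practice, the "analyze" step is rarely a single threshold check, but the shape of the loop stays the same.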

How Else Can You Generate Product Hypotheses?

Such processes imply sharing ideas when a problem is spotted, digging deep into facts, and studying the possible risks, goals, benefits, and outcomes. You may apply various MVP tools (like FigJam, Notion, or Miro) that were designed to simplify brainstorming sessions, systemize pitched suggestions, and keep everyone organized without losing any ideas.

Predictive product analysis can also be integrated into this process, leveraging data and insights to anticipate market trends and consumer preferences, thus enhancing decision-making and product development strategies. This makes innovation more proactive and informed, ensuring products not only stay relevant but also resonate with the target audience, increasing their chances of success in the market.

Besides, you can settle on one of the many frameworks that facilitate decision-making processes , ideation phases, or feature prioritization . Such frameworks are best applicable if you need to test your assumptions and structure the validation process. These are a few common ones if you're looking toward a systematic approach:

  • Business Model Canvas (used to establish the foundation of the business model and helps find answers to vitals like your value proposition, finding the right customer segment, or the ways to make revenue);
  • Lean Startup framework (uses a diagram-like format for capturing major processes and can be handy for testing various hypotheses, like how much value a product brings, or assumptions about personas, the problem, growth, etc.);
  • Design Thinking Process (is all about iterative learning and involves getting an in-depth understanding of customer needs and pain points, which can be formulated into hypotheses followed by simple prototypes and tests).


How to Make a Hypothesis Statement for a Product

Once you've identified the addressable problem or opportunity and broken down the issue in focus, you need to work on formulating the hypotheses and associated tasks. It works the same way if you want to prove that something will be false (a.k.a. a null hypothesis).

If you're unsure how to write a hypothesis statement, let's explore the essential steps that'll set you on the right track.

Making a Product Hypothesis Statement

Step 1: Allocate the Variable Components

Product hypotheses are generally different for each case, so begin by pinpointing the major variables, i.e., the cause and effect . You'll need to outline what you think is supposed to happen if a change or action gets implemented.

Put simply, the "cause" is what you're planning to change, and the "effect" is what will indicate whether the change is bringing in the expected results. Falling back on the example we brought up earlier, the ineffective checkout process can be the cause, while the increased percentage of completed orders is the metric that'll show the effect.

Make sure to also note such vital points as:

  • what the problem and solution are;
  • what are the benefits or the expected impact/successful outcome;
  • which user group is affected;
  • what are the risks;
  • what kind of experiments can help test the hypothesis;
  • what can measure whether you were right or wrong.
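
One lightweight way to keep these points together is to record each hypothesis as a structured object. A sketch; the field names are illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class ProductHypothesis:
    """Illustrative record of a product hypothesis; field names are assumptions."""
    problem: str                 # what the problem and solution are
    change: str                  # the "cause": what you plan to change
    expected_effect: str         # the "effect": the successful outcome
    user_segment: str            # which user group is affected
    risks: list = field(default_factory=list)
    experiment: str = ""         # what kind of experiment can test it
    metric: str = ""             # what measures whether you were right

checkout = ProductHypothesis(
    problem="High cart abandonment during checkout",
    change="Cut checkout to two steps and remove four excessive fields",
    expected_effect="Up to 15% more completed orders",
    user_segment="All users reaching the cart",
    risks=["Removed fields may be needed for fraud checks"],
    experiment="A/B test: current vs. shortened checkout",
    metric="Completed-order rate",
)
```

Keeping every hypothesis in one shape like this also makes them easy to compare and prioritize later.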

Step 2: Ensure the Connection Is Specific and Logical

Mind that generic connections that lack specifics will get you nowhere. So if you're thinking about how to word a hypothesis statement, make sure that the cause and effect include clear reasons and a logical dependency .

Think about what the precise link is that shows why A affects B. In our checkout example, it could be: fewer steps in the checkout and the removal of excessive fields will speed up the process, help avoid confusion, irritate users less, and lead to more completed orders. That's much more explicit than just stating that the checkout needs to be changed to get more completed orders.

Step 3: Decide on the Data You'll Collect

Certainly, multiple things can be used to measure the effect. Therefore, you need to choose the optimal metrics and validation criteria that'll best show whether you're moving in the right direction.

If you need a tip on how to create hypothesis statements that won't result in a waste of time, try to avoid vagueness and be as specific as you can when selecting what can best measure and assess the results of your hypothesis test. The criteria must be measurable and tied to the hypotheses . This can be a realistic percentage or number (say, you expect a 15% increase in completed orders or 2x fewer cart abandonment cases during the checkout phase).

Once again, if you're not realistic, then you might end up misinterpreting the results. Remember that sometimes an increase as little as 2% can make a huge difference, so why make 50% the benchmark if it's not achievable in the first place?
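
To make the criterion concrete: a target like "15% more completed orders" is a relative lift, which is simple arithmetic (the numbers below are invented):

```python
def relative_lift(baseline, observed):
    """Relative change of `observed` over `baseline`; 0.15 means +15%."""
    return (observed - baseline) / baseline

# Invented numbers: 200 of 4,000 sessions completed an order before the
# change (5%), and 236 of 4,000 after it (5.9%).
lift = relative_lift(200 / 4000, 236 / 4000)
print(f"{lift:+.1%}")  # +18.0% relative lift, clearing the 15% target
```

Stating the target as a relative (or absolute) change up front removes any ambiguity when you later judge the result.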

Step 4: Settle on the Sequence

It's quite common that you'll end up with multiple product hypotheses. Some are more important than others, of course, and some will require more effort and input.

Therefore, just as with the features on your product development roadmap , prioritize your hypotheses according to their impact and importance. Then, group and order them, especially if the results of some hypotheses influence others on your list.

Product Hypothesis Examples

To demonstrate how to formulate your assumptions clearly, here are several more apart from the example of a hypothesis statement given above:

  • Adding a wishlist feature to the cart with the possibility to send a gift hint to friends via email will increase the likelihood of making a sale and bring in additional sign-ups.
  • Placing a limited-time promo code banner stripe on the home page will increase the number of sales in March.
  • Moving up the call to action element on the landing page and changing the button text will increase the click-through rate twice.
  • By highlighting a new way to use the product, we'll target a niche customer segment (i.e., single parents under 30) and acquire 5% more leads. 


How to Validate Hypothesis Statements: The Process Explained

There are multiple options when it comes to validating hypothesis statements. To get appropriate results, you have to come up with the right experiment to test the hypothesis, and you'll need a control group and participants who represent your target audience segments (otherwise, your results might not be accurate).

What can serve as the experiment you may run? Experiments may take tons of different forms, and you'll need to choose the one that clicks best with your hypothesis goals (and your available resources, of course). The same goes for how long you'll have to carry out the test (say, a time period of two months or as little as two weeks). Here are several to get you started.

Experiments for product hypothesis validation

Feedback and User Testing

Talking to users, potential customers, or members of your own online startup community is one way to test your hypotheses. You may use surveys, questionnaires, or opt for more extensive interviews to validate hypothesis statements and find out what people think. This assumption validation approach involves your existing or potential users and might require some additional time, but it can bring you many insights.

Conduct A/B or Multivariate Tests

One of the experiments you may develop involves making more than one version of an element or page to see which option resonates with the users more. As such, you can have a call to action block with different wording or play around with the colors, imagery, visuals, and other things.

To run such split experiments, you can apply tools like VWO that allow you to easily construct alternative designs and split what your users see (e.g., one half of the users will see version one, while the other half will see version two). You can track various metrics and apply heatmaps, click maps, and screen recordings to learn more about user response and behavior. Mind, though, that the key to such tests is to involve as many users as you can and to give the tests time. Don't jump to conclusions too soon or if very few people participated in your experiment.
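
To judge whether the difference between two variants is more than noise, one common check is a two-proportion z-test. A standard-library sketch, not tied to any particular testing tool, with made-up counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Version A: 480 conversions out of 10,000 users; version B: 560 out of 10,000.
z = two_proportion_z(480, 10_000, 560, 10_000)
significant = abs(z) > 1.96  # roughly 95% confidence, two-sided
```

This is also why sample size matters: the same absolute gap with only 1,000 users per variant gives a z statistic well below 1.96 and would not clear the bar.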

Build Prototypes and Fake Doors

Demos and clickable prototypes can be a great way to save time and money on costly feature or product development, and a prototype also allows you to refine the design. Beyond that, they can serve as experiments for validating hypotheses, collecting data, and getting feedback.

For instance, if you have a new feature in mind and want to ensure there is interest, you can utilize such MVP types as fake doors . Make a short demo recording of the feature and place it on your landing page to track interest or test how many people sign up.

Usability Testing

Similarly, you can run experiments to observe how users interact with the feature, page, product, etc. Usually, such experiments are held on prototype testing platforms with a focus group representing your target visitors. By showing a prototype or early version of the design to users, you can view how people use the solution, where they face problems, or what they don't understand. This may be very helpful if you have hypotheses regarding redesigns and user experience improvements before you move on from prototype to MVP development.

You can even take it a few steps further and build a barebones feature version that people can really interact with, while you're the one behind the curtain making it happen. There are many MVP examples of companies applying Wizard of Oz or concierge MVPs to validate their hypotheses.

Or you can actually develop some functionality but release it to only a limited number of people. This is done with a feature flag, which can yield very specific results but is more effort-intensive.
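
The core of such a feature flag is deterministic per-user bucketing, so a given user always sees the same variant. A minimal sketch; the hashing scheme here is one common choice, not any specific product's API:

```python
import hashlib

def flag_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically expose `feature` to roughly `rollout_percent`% of users."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100            # stable bucket in [0, 100)
    return bucket < rollout_percent

# The same user always lands in the same bucket for a given feature,
# so their experience doesn't flicker between variants.
show_new_checkout = flag_enabled("new-checkout", "user-42", 10)
```

Raising `rollout_percent` over time widens exposure without re-randomizing who is in the experiment.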


What Comes After Hypothesis Validation?

Analysis is what you move on to once you've run the experiment. This is the time to review the collected data, metrics, and feedback to validate (or invalidate) the hypothesis.

You have to evaluate the experiment's results to determine whether your product hypotheses were valid or not. For example, if you were testing two versions of an element design, color scheme, or copy, look into which one performed best.

It is crucial to be certain that you have enough data to draw conclusions, though, and that it's accurate and unbiased. If you don't, that may be a sign that your experiment needs to run for some additional time, be altered, or be repeated. You won't want to make a major decision based on uncertain or misleading results, right?

What happens after hypothesis validation

  • If the hypothesis was supported , proceed to making corresponding changes (such as implementing a new feature, changing the design, rephrasing your copy, etc.). Remember that your aim was to learn and iterate to improve.
  • If your hypothesis was proven false , think of it as a valuable learning experience. The main goal is to learn from the results and be able to adjust your processes accordingly. Dig deep to find out what went wrong, look for patterns and things that may have skewed the results. But if all signs show that you were wrong with your hypothesis, accept this outcome as a fact, and move on. This can help you make conclusions on how to better formulate your product hypotheses next time. Don't be too judgemental, though, as a failed experiment might only mean that you need to improve the current hypothesis, revise it, or create a new one based on the results of this experiment, and run the process once more.

On another note, make sure to record your hypotheses and experiment results . Some companies use CRMs to jot down the key findings, while others use something as simple as Google Docs. Either way, this can be your single source of truth that can help you avoid running the same experiments or allow you to compare results over time.


Final Thoughts on Product Hypotheses

The hypothesis-driven approach in product development is a great way to avoid uncalled-for risks and pricey mistakes. You can back up your assumptions with facts, observe your target audience's reactions, and be more certain that this move will deliver value.

However, this only makes sense if the validation of hypothesis statements is backed by relevant data that'll allow you to determine whether the hypothesis is valid or not. By doing so, you can be certain that you're developing and testing hypotheses to accelerate your product management while avoiding decisions based on guesswork.

Certainly, a failed experiment may bring you just as much knowledge and findings as one that succeeds. Teams have to learn from their mistakes, boost their hypothesis generation and testing knowledge, and make improvements according to the results of their experiments. This is an ongoing process, of course, as no product can grow if it isn't iterated and improved.

If you're only planning to or are currently building a product, Upsilon can lend you a helping hand. Our team has years of experience providing product development services for growth-stage startups and building MVPs for early-stage businesses , so you can use our expertise and knowledge to dodge many mistakes. Don't be shy to contact us to discuss your needs! 


 FourWeekMBA

The Leading Source of Insights On Business Model Strategy & Tech Business Models


Experiment-Driven Development In A Nutshell

Test-Driven Development (TDD) and Behavior-Driven Development (BDD) are popular agile development techniques. However, they don’t measure application usage or provide guidance on gaining feedback from customers. Experiment-Driven Development (EDD) is a scientific, fact-based approach to software development using agile principles.


Understanding Experiment-Driven Development

While TDD and BDD help developers enhance code quality and ensure that it behaves according to spec, EDD helps identify the features that should be developed. In other words, what will become the spec.

EDD is driven by split A/B testing, where a baseline (control) sample is compared to one or more single-variable samples to determine which choice improves response rates.

This form of feedback collection avoids the need to conduct user surveys, which are often time-consuming for both parties and can be prone to bias.

Implementing Experiment-Driven Development

Implementing EDD is a matter of following these four steps:

  • Start with a hypothesis

Instead of beginning with a user story, the project team starts by defining a hypothesis related to customers, problems, solutions, value, or growth .

For example, a growth hypothesis may be “ A virtual shoe fitting station in every store will increase shoe sales by 30%. ” 

  • Identify the experiment

In the second step, take the highest-priority hypothesis and define the smallest experiment that will prove or disprove it.

The shoe store may decide to install a virtual fitting station in five stores to begin with and measure the impact on sales.

  • Run the experiment

This may include creating a minimum viable product (MVP) and then measuring progress based on validated learning from the end-user.


Here, many businesses choose to run experiments based on the Build/Measure/Learn (MVPe) loop. 


  • Debrief

For example, what are the observations?

How were the validated learnings used? Would more time spent on planning have helped?

Based on the results, the team may choose to pivot to a new hypothesis.

Alternatively, they may choose to persevere with the current hypothesis or discard it entirely and move to the next one.

Experiment-Driven Development Benefits

When a business incorporates EDD to complement an existing approach such as TDD or BDD, it can realize several benefits.

These include:

  • Structure

EDD allows project teams to ask and answer questions in a structured, measurable process.

Since ideas are validated by hypotheses, teams also avoid testing ideas simply to validate individual egos or hunches.

  • Versatility

Although its scientific foundations may suggest otherwise, Experiment-Driven Development can be used across any business in any industry.

It is not specifically designed for use by R&D teams. 

  • Objectivity and efficiency

All agile methodologies dictate that value to the end-user is the primary goal.

However, the hypothesis-driven approach of EDD forces teams to define value through validated learning and not assumption alone.

Efficiency is also increased by building an MVP instead of focusing on superfluous features that provide little benefit to the end-user.

Case Studies

E-Commerce Platform: Optimizing Product Recommendations

Challenge: An e-commerce platform wants to improve its product recommendation engine to boost sales and enhance user engagement.

Application of EDD:

  • Hypothesis: “Personalized product recommendations based on user browsing history will increase the average order value by 20%.”
  • Identify the Experiment: The platform introduces personalized product recommendations for a subset of users while the rest continue to see the old recommendations. Data on order values is collected for both groups.
  • Run the Experiment: An MVP of the new recommendation system is implemented for the selected users. The system tracks user interactions and purchase behavior, measuring the impact on the average order value.
  • Debrief: After a defined period, the data is analyzed. If the experiment group shows a significant increase in the average order value, the hypothesis is validated, and the new recommendation system is rolled out to all users. If not, the platform may pivot to a different hypothesis, such as refining the recommendation algorithm.

Outcome: EDD helps the e-commerce platform make data-driven decisions about feature development. If the hypothesis is validated, it can lead to increased sales and customer satisfaction.
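
The debrief in this case study comes down to comparing average order values between the two groups; a toy calculation with invented numbers:

```python
def average_order_value(orders):
    """Mean order value for a list of order totals."""
    return sum(orders) / len(orders)

control = [38.0, 42.0, 40.0, 44.0]        # old recommendations (AOV: 41.0)
treatment = [50.0, 48.0, 52.0, 50.0]      # personalized recommendations (AOV: 50.0)

aov_lift = (average_order_value(treatment) - average_order_value(control)) \
    / average_order_value(control)
hypothesis_validated = aov_lift >= 0.20   # the 20% target from the hypothesis
```

With real traffic you'd compute this over many orders per group and check statistical significance before rolling out.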

Mobile App Development: User Onboarding Flow

Challenge: A mobile app developer wants to improve the user onboarding experience to reduce drop-off rates during registration.

  • Hypothesis: “Simplifying the user registration process to two steps will reduce the drop-off rate by 30%.”
  • Identify the Experiment: The developer creates an MVP that streamlines the registration process to two steps. A control group experiences the original registration flow, while another group uses the simplified flow. User drop-off data is collected for both groups.
  • Run the Experiment: Users in both groups are tracked during the registration process. The developer monitors how many users complete the registration and how many drop off at each step.
  • Debrief: After the experiment, the developer reviews the data. If the simplified flow shows a 30% or greater reduction in drop-off rates, the hypothesis is validated, and the new onboarding process is implemented. If not, the developer may iterate on the hypothesis or try a different approach.

Outcome: EDD enables the mobile app developer to make informed decisions about user onboarding. If successful, the simplified onboarding flow can lead to increased user retention.
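
Here the measurement is a drop-off-rate comparison between the two flows; a toy calculation with invented counts:

```python
def drop_off_rate(started, completed):
    """Share of users who began registration but did not finish it."""
    return (started - completed) / started

old_flow = drop_off_rate(1000, 600)   # 40% dropped off
new_flow = drop_off_rate(1000, 760)   # 24% dropped off

# Hypothesis: the two-step flow cuts drop-off by at least 30% (relative).
relative_reduction = (old_flow - new_flow) / old_flow
hypothesis_validated = relative_reduction >= 0.30
```

Note that the 30% target is relative to the old rate, so stating it this way up front avoids arguing later over percentage points versus percent change.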

SaaS Platform: Feature Adoption

Challenge: A SaaS platform wants to improve the adoption of a new feature among its existing customers.

  • Hypothesis: “Introducing a step-by-step tutorial for the new feature will increase its adoption rate by 25% among existing customers.”
  • Identify the Experiment: The platform introduces an interactive tutorial for the new feature. Half of the existing customers are exposed to the tutorial when they log in, while the other half does not see it. User interaction and feature adoption data are collected.
  • Run the Experiment: Users’ interactions with the tutorial and their subsequent adoption of the feature are tracked. The platform measures how many users from each group actively use the new feature.
  • Debrief: After the experiment, the platform analyzes the data. If the group exposed to the tutorial shows a 25% or higher increase in feature adoption, the hypothesis is validated, and the tutorial is implemented for all existing customers. If not, the platform may refine the tutorial or explore alternative strategies.

Outcome: EDD helps the SaaS platform make evidence-based decisions to drive feature adoption among its customer base.
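
One common way to implement the 50/50 split is deterministic hashing, so a returning customer always lands in the same group across sessions. A minimal sketch (the function and experiment names are illustrative):

```python
import hashlib

def in_experiment(user_id: str, experiment: str, pct: float = 0.5) -> bool:
    """Deterministically bucket a user: the same user always gets the same group."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < pct

# Hypothetical user population.
users = [f"user-{i}" for i in range(1000)]
exposed = [u for u in users if in_experiment(u, "feature-tutorial")]
print(f"{len(exposed)} of {len(users)} users see the tutorial")
```

Hashing on `experiment:user_id` (rather than the user ID alone) keeps group assignments independent across concurrent experiments.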

Key takeaways

  • Experiment-Driven Development is a hypothesis-driven approach to software development that grounds decisions in evidence rather than assumption.
  • Experiment-Driven Development incorporates A/B testing, where a baseline sample is compared to a single-variable sample to determine which sample delivers a better outcome. This allows the business to formulate, test, and evaluate hypotheses.
  • Experiment-Driven Development complements approaches such as TDD and BDD, but it does not replace them. EDD can be used in any industry or department as an efficient and (most importantly) objective means of agile software development.

Key Highlights

  • Understanding Experiment-Driven Development (EDD): EDD is an agile development approach rooted in scientific methods. While TDD and BDD focus on code quality and behavior, EDD helps identify features by testing hypotheses with A/B split testing.
  • Hypothesis: Start with a hypothesis related to customers, problems, solutions, value, or growth.
  • Identify Experiment: Define a small experiment to prove or disprove the hypothesis. For instance, testing a virtual shoe fitting station’s impact on sales.
  • Run Experiment: Create an MVP, use validated learning from end-users, and apply the Build/Measure/Learn loop.
  • Debrief: Analyze observations, learnings, and results. Decide to pivot, persevere, or move to a new hypothesis.
  • Structure: EDD provides a structured process for asking and answering questions based on validated hypotheses.
  • Versatility: EDD is adaptable across various industries and departments, not just R&D.
  • Objectivity and Efficiency: EDD ensures value through validated learning, avoids assumptions, and prioritizes efficient MVPs over unnecessary features.
  • EDD is a scientific approach to software development.
  • It uses A/B testing for hypothesis validation.
  • EDD complements TDD and BDD, enhancing agility and objectivity.
  • EDD is versatile and applicable to various industries and departments.
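
The Hypothesis → Identify → Run → Debrief cycle above can be sketched as a small decision helper; the class, thresholds, and labels are illustrative, not taken from any particular framework:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str
    target_uplift: float    # e.g. 0.25 for "+25% adoption"
    observed_uplift: float  # measured while running the experiment

    def debrief(self) -> str:
        """Decide what to do next, following the Build/Measure/Learn loop."""
        if self.observed_uplift >= self.target_uplift:
            return "persevere"        # hypothesis validated: roll the feature out
        if self.observed_uplift > 0:
            return "pivot"            # partial signal: refine the hypothesis
        return "new-hypothesis"       # no signal: move on to a different idea

exp = Experiment("Tutorial lifts feature adoption by 25%", 0.25, 0.31)
print(exp.debrief())
```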

What are the steps to implement experiment-driven development?

The steps to implement experiment-driven development are:

  1. Formulate a hypothesis related to customers, problems, solutions, value, or growth.
  2. Identify a small experiment that can prove or disprove the hypothesis.
  3. Run the experiment: build an MVP, gather validated learning from end users, and apply the Build/Measure/Learn loop.
  4. Debrief: analyze the observations and results, then decide whether to pivot, persevere, or move to a new hypothesis.

What are the benefits of experiment-driven development?

The benefits of experiment-driven development are:

  • A structured, repeatable process for asking and answering questions through validated hypotheses.
  • Objectivity: decisions rest on measured outcomes rather than assumptions.
  • Efficiency: lean MVPs replace speculative, unnecessary features.
  • Versatility: the approach works across industries and departments, not just R&D.

Llewellyn E. van Zyl, Ph.D.

Data-Driven Leadership Development

How to harness data to build leadership development interventions that work.

Posted May 31, 2024 | Reviewed by Davia Sills

  • Data-driven leadership development uses advanced analytics to create targeted, personalized interventions.
  • Develop a validated leadership capability model to identify the core capacities predicting performance.
  • Build a roadmap that links job characteristics and individual capabilities to hard performance metrics.
  • Create hyper-personalized development plans to address specific leadership needs.

Leadership is the cornerstone of organizational success, with research showing that about 70 percent of the variance in employee engagement is attributable to the quality of leadership. Effective leadership can stave off high turnover rates and mitigate the negative impacts of significant organizational changes on motivation and performance. Yet, despite the importance of leadership, many organizations still rely on intuition rather than empirical evidence to guide their leadership development strategies. This reliance on gut feeling undermines the effectiveness of leadership initiatives and hampers overall organizational performance.

The Problem With Traditional Leadership Development

Organizations invest billions of dollars annually in leadership development initiatives, yet many fail to link the leadership behaviors they aim to develop with core performance metrics. Traditional approaches often involve generic, one-size-fits-all programs that yield minimal financial returns. These programs fail to show the true impact of leadership development on organizational success, leaving leaders unprepared for the unique challenges they face.

However, this status quo is not unchangeable. By embracing a data-driven approach, organizations can bridge the gap between leadership behaviors and performance metrics, unlocking new pathways to success. One way is through data-driven leadership development interventions. Data-driven leadership development uses real-time data and advanced analytics to identify the specific leadership capabilities that predict performance and then develop these through hyper-personalized interventions.

What Is Data-Driven Leadership Development?

Data-driven leadership development involves leveraging empirical data to inform your leadership assessment and development strategies. This approach identifies the core competencies, behaviors, abilities, and mindsets that predict organizational success. It allows organizations to make informed decisions about their developmental interventions based on objective reality rather than subjective opinions.

Moreover, data-driven approaches provide real-time intelligence on skills availability and performance drivers, helping organizations anticipate future leadership needs. By aligning leadership development with organizational goals, companies can create hyper-personalized learning journeys tailored to individual and organizational needs.

The Transformative Potential of Data-Driven Approaches

Data-driven leadership development is not just a shift in strategy; it represents a paradigmatic evolution in talent management. Here are some key benefits:

  • Informed Decision-Making : Data provides a roadmap for leadership development, allowing organizations to focus on behaviors and competencies that directly impact performance.
  • Personalized Interventions: By understanding the unique strengths and weaknesses of leaders, organizations can design targeted development plans that maximize impact.
  • Future-Proofing Leadership: Data helps organizations anticipate and address future leadership needs, ensuring they have the right talent to navigate upcoming challenges.
  • Continuous Improvement: Real-time feedback on leadership initiatives facilitates continuous evaluation and refinement, ensuring development efforts remain agile and responsive.
  • Maximized ROI: Data-driven approaches ensure resources are allocated effectively, enhancing the return on investment in leadership development.

Three Steps to Implementing Data-Driven Leadership Development

At its core, data-driven leadership development involves three critical steps:

1. Develop and Validate a Data-Driven Leadership Capability Model: The first step involves understanding what constitutes excellent leadership within your organization. Construct a leadership capability profile using data from diverse sources to outline the key competencies, experiences, abilities, and values required for effective leadership. This profile should be aligned with organizational goals and validated through empirical data.

2. Build a Data-Driven Leadership Development Roadmap: Construct a predictive process model linking individual capabilities to performance metrics. This involves measuring job demands, resources, motivational factors, and key performance indicators within the organization. Assess leaders against the capability profile and analyze how these factors influence performance. This model will help identify the exact leadership capabilities to develop first to improve performance.
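
As an illustration of step 2, capability scores can be ranked by how strongly they correlate with a hard performance metric. The assessment data below is hypothetical and tiny; a real roadmap would draw on far more leaders and a proper predictive model rather than raw correlations:

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation between a capability score and a performance metric."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical assessments: per-leader capability scores vs. a KPI.
kpi = [70, 85, 60, 90, 75]
capabilities = {
    "coaching":        [6, 8, 5, 9, 7],
    "strategic_focus": [9, 6, 7, 6, 8],
}

# Develop first the capability most predictive of performance.
ranked = sorted(capabilities, key=lambda c: pearson(capabilities[c], kpi), reverse=True)
print("Develop first:", ranked[0])
```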

3. Develop Hyper-Personalized Leadership Development Interventions: Use the insights from the capability model and the roadmap to create personalized development plans for each leader. These plans should address specific areas for improvement and align with organizational goals. Incorporate a variety of learning resources and methods to cater to different learning styles and ensure ongoing evaluation and feedback to continuously refine the development process.

Embracing data-driven leadership development is a crucial step for organizations aiming to enhance their performance and stay competitive. By constructing empirically validated leadership capability models and integrating these insights with theoretical frameworks, companies can design hyper-personalized development plans with real-world impact.

Organizations must take action to adopt data-driven approaches, ensuring their leadership development efforts are not only effective but also aligned with their strategic objectives. This shift will unlock the full potential of their leadership talent, driving sustained organizational success.

The call to action is clear: It's time to move away from intuition-based leadership development and embrace a data-driven approach. Doing so will not only improve leadership quality but also significantly enhance organizational performance and resilience in the face of future challenges.

Llewellyn E. van Zyl, Ph.D. , is a professor of positive psychology at the Optentia Research Unit within the North-West University and is attached to the Eindhoven University of Technology.
