How to validate new products and new features with Landing Page Experiments
Landing page experiments can rapidly provide you and your team with real data about your new product ideas – whether it is an entirely new product or a new product feature. Landing page experiments let you show your product or feature to people who might use it before it is ready, and see how they respond.
Getting behavioural data like this, instead of or in addition to user interviews, lets you see whether people really mean what they are telling you. Landing page experiments are a great tool invented by ‘growth hackers’ that product teams need to steal for customer development.
Landing page experiments have allowed us to rapidly qualify and disqualify new features, new products and new approaches to marketing.
For example, we recently ran a landing page experiment very early in our thinking about a new feature and received a surprising result that encouraged us to double down on it. In the email we sent about the new feature, the primary call to action was “learn more”, which linked to our landing page – we thought people would need more explanation before they were interested. However, a small, secondary call to action to “sign up now” was hiding further down the email as part of our standard template. We didn’t even intentionally include it in this email but, surprisingly, it received more clicks than the primary call to action. This early, strong positive signal gave us solid justification to put more effort into pursuing the feature.
In this post I’m going to talk through what a landing page experiment is, what it involves, how to run one, and some lessons learnt.
What is a landing page experiment?
A landing page experiment usually involves the following components:
- An Experiment Overview – states your hypothesis, your expected results and so on.
- A Landing Page – the actual page on the web that showcases the new product or feature. It must have some kind of call to action.
- Analytics Tool(s) – tools to help you collect data on the experiment you are performing.
- A Traffic Generator – a tool or technique to get people visiting the landing page.
- Email Campaign (optional) – sending one or more emails to a list to drive people to the landing page.
- Ads (optional) – paid ads to quickly drive independent people from your target market to the landing page.
Bonus: If you’re serious about running experiments you also need to have an Experiment Log where you keep track of all of your experiments, their status, their results and the lessons they’ve provided. It is through disciplined iteration that the best learning comes.
Let’s go into more detail about each of the components.
#1: An Experiment Overview
An Experiment Overview can be a formal or semi-formal document, it can also be just a line in a spreadsheet. We mix it up depending on the complexity of the experiment, the steps involved and the thought required.
The essential elements of an Experiment Overview are:
- A hypothesis
- A clear, measurable statement of what you expect the results to be. It should be so black and white that your grandmother could review your notes and determine whether the experiment succeeded.
- A plan for conducting the experiment
- A timeframe
That’s it. Pretty simple.
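As a sketch, the essential elements above could be captured as a simple structured record – the field names and numbers here are my own illustration, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExperimentOverview:
    hypothesis: str           # what you believe will happen, and why
    success_metric: str       # the variable you will measure
    success_threshold: float  # the black-and-white pass mark
    plan: str                 # how you will conduct the experiment
    start: date
    end: date                 # the timeframe keeps you honest

    def succeeded(self, observed: float) -> bool:
        # So unambiguous your grandmother could review the result
        return observed >= self.success_threshold

overview = ExperimentOverview(
    hypothesis="Prospects will want the new reporting feature",
    success_metric="email-to-signup conversion rate",
    success_threshold=0.05,   # e.g. 5% of visitors click the CTA
    plan="Email the list, drive clicks to the landing page",
    start=date(2024, 3, 1),
    end=date(2024, 3, 14),
)
print(overview.succeeded(0.08))  # an 8% conversion rate would pass
```

The same fields work equally well as columns in a spreadsheet row; the point is that the success criterion is written down as a number before the experiment runs.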
#2: A Landing Page
A Landing Page is the central element of a Landing Page Experiment (yes, surprising, isn’t it?).
There is plenty of information available on designing good landing pages, so I won’t repeat it here. Instead, I’ll focus on the nuances of using Landing Pages for customer development rather than user acquisition, since most of the available information focuses on acquiring users.
Even though your feature or product isn’t real yet, the Landing Page is where you showcase the product or feature as though it were a real one and (attempt to) compel people to take action. The beauty of this is you are testing how the real thing will perform and how people will react. It is as close to real as you can get, mainly because it is real, which means you’re getting the best data you can.
You want to project forward and imagine exactly what the final landing page would look like once your product is real and in the market.
Ideally, your call to action will match what the real one would be, but you may want to vary it. For instance, instead of asking people to “sign up”, you may want to ask people to “enter their email” if your product isn’t ready to take sign-ups. Collecting emails also lets you follow up and interview the people who showed interest. Don’t be afraid to push this, though: asking for a pre-payment might seem extreme when the feature or product doesn’t exist yet, but it will genuinely test how people will react once the product is real.
You can design the Landing Page with pen and paper, a wireframing tool like Balsamiq or InVision, or your favourite landing page tool (e.g. Unbounce, WordPress, Wix, HubSpot or Instapage).
#3: Analytics Tool(s)
Every good experiment is measured. You need Analytics Tools as part of your experiment to measure the variables you are focused on, but it also helps to measure other behaviours, especially when doing so comes at little to no additional cost.
Analytics Tools like Mixpanel, Amplitude, Hotjar, Google Analytics are the tools of the trade. Additionally, many landing page platforms include their own analytics as do email marketing platforms. Personally, I’ve found that you need to use a combination to get all the insights you want. Sometimes you even need to add some custom events to really understand what is happening.
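Combining tools usually means stitching together counts exported from each one. As a minimal sketch – the event counts and tool sources below are purely illustrative – you can compute the conversion rates at each step of the funnel yourself:

```python
# Hypothetical event counts pulled from your tools' exports.
email_clicks = 420   # from your email platform's report
page_views = 380     # from your landing page analytics
cta_clicks = 57      # from a custom event you added yourself

def rate(numerator: int, denominator: int) -> float:
    """Conversion rate between two funnel steps, safe against zero traffic."""
    return round(numerator / denominator, 3) if denominator else 0.0

funnel = {
    "email -> page": rate(page_views, email_clicks),
    "page -> CTA": rate(cta_clicks, page_views),
    "email -> CTA": rate(cta_clicks, email_clicks),
}
for step, r in funnel.items():
    print(f"{step}: {r:.1%}")
```

Seeing the drop-off at each step is often where the mismatch between tools shows up – if your email platform reports far more clicks than your page analytics reports views, investigate before trusting either number.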
#4: Traffic Generators
Traffic Generators are tools or activities that will bring your target market to your landing pages. This could be:
- Ads (e.g. Google, Facebook, LinkedIn)
- Manually sent emails
- Emails sent to a list using MailChimp or Campaign Monitor
- LinkedIn Messages
How to run a Landing Page Experiment
Now that you’re familiar with the tools you’ll need, let’s talk through how to execute a Landing Page Experiment.
You may want to iterate over these steps a few times – it is hard to predict what will happen, though, as it is an experiment.
First, plan: write your Experiment Overview.
Get familiar with the tools and techniques you will use. You may do a little setup at this stage, or you might leave it until you run the experiment. Use your judgement – sometimes playing around with the ads or the landing page is necessary to help you plan the experiment.
Don’t turn this into an onerous, drawn-out process though. Keep it short and simple.
Next, run: get everything set up, then run your experiment.
Get your ads designed, and get your landing pages designed and live.
Hit go on your ads and emails so you can start generating traffic.
Now sit back and watch the results.
Finally, review: this is where you look at what happened with your experiment.
Start by looking through the data. Don’t rush this bit – allow extra time to immerse yourself, as this will let your brain process what happened and possibly uncover interesting insights.
I often see people trying to rush this step. I’ve even seen people try to delegate it to an administrator or a junior member of the team – fight this urge. Time spent in your data, reviewing what happened, is where the gold is.
Now write up your results in your Experiment Log or at the end of your original Experiment Overview. It is great to refer back to, and the act of writing the results up forces you to think through the implications.
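If your Experiment Log lives in a plain CSV file, the write-up step can be as lightweight as appending one row per experiment. This is a minimal sketch – the columns and the example values are my own suggestion, not a fixed schema:

```python
import csv
from pathlib import Path

LOG = Path("experiment_log.csv")
FIELDS = ["name", "hypothesis", "status", "result", "lessons"]

def log_experiment(row: dict) -> None:
    """Append one experiment's write-up to the shared log."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the column names only once
        writer.writerow(row)

log_experiment({
    "name": "New reporting feature",
    "hypothesis": "5% of email recipients will click 'sign up'",
    "status": "complete",
    "result": "8% clicked - hypothesis exceeded",
    "lessons": "Secondary CTA outperformed the primary one",
})
```

A spreadsheet works just as well; the discipline of one row per experiment, with the lessons column always filled in, is what matters.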
Here is a quick dump of lessons learnt to help you avoid some mistakes:
- Be careful about how broad your early landing pages are and how achievable their claims sound. If you go too broad you’ll get lots of interest, but it may not be the narrow interest you are looking for. For example, if you ran an experiment like “want to save 50% of your operating costs with AI?”, everyone will say yes, but it’s so broad that you may not learn much.
- Be conscious of which people you are going to listen to and focus on. You might get positive behaviour, but from the wrong people (not your target market). There might be something in this, or it might lead you down the wrong path.
- Don’t put too much weight on feedback from people close to you on your landing page – you need completely independent, unbiased data.