SharpTheory

Category: A/B Test

By IT Manager

Overcome Diminishing Returns of A/B Testing

Starting a new A/B testing program on your website is an exciting time. Using your web analytics, you identify which pages of your website are underperforming. From there, you put on your proverbial thinking cap and develop ideas to improve page performance. Next, you formulate test hypotheses and use A/B testing to see which fixes perform best.

When you first start A/B testing, you’ll be overrun with opportunities for improvement and can quickly develop creative solutions. Easily identifiable issues are the easiest to solve, and when you fix glaring weaknesses, you may feel like an optimization all-star.

But what happens after you consume all the low-hanging fruit?

 

When A/B Testing Stops Working

After you conquer the obvious problems and get some great wins, companies tend to develop a huge appetite for testing. However, as you A/B test more and your website improves, it becomes more and more challenging to improve upon your improvements. Your win rate decreases, and you end up with too many flat, inconclusive A/B tests.

Your web analytics were great early on because they helped you figure out where you have problems. However, the hard part isn't identifying problem areas – it's pinpointing the fixes. That is where an A/B testing program has to evolve: it's only as good as the ideas you feed it.

 

Move Past the “What” and Get to the “Why”

Big data is not a Rosetta stone – it's just a tool. Website analytics are archival: they tell you what happened, but not why. Analytics don't give you answers; they spawn questions.

While evolving your A/B testing program, supplemental information from your users can reveal why some pages underperform. You know “what” from the analytics, but “why” is largely based on your assumptions unless you tap another data source.

If you strategize from misinterpreted information, you’ll waste time and money.

Studying your users breaks through flawed assumptions.

 

Your Audience Has All the Answers

We are all too close to our products and services. We know too much and have too many biases. This creates blinders that make it difficult for us to understand why issues exist and what could be done to improve them.

Only your audience can tell you that, and user testing is a very effective, low-cost way to get them to tell you.

User testing shows how your audience interacts with your website, why they do/don’t click, and why some areas of the site fail to engage. You get startlingly relevant insights from studies conducted with an audience that mirrors your ideal customer. These studies move you beyond assumptions that undermine your A/B testing program.

Because user testing observes and records the interaction between your website and its targeted audience, it provides the “why” that you desperately need so you can optimize, drive sales, and deepen customer satisfaction.

 

User Testing Studies Uncover Hidden Opportunities!

One user test can help you identify many things to fix and opportunities to pursue. However, one person is rarely a good sample size. Therefore, it's a good idea to run 5–10 user tests around one specific journey. This is called a User Testing Study.

By having 5–10 different people in your target audience go through a focused journey, you will get varied points of view that allow trends to emerge. The strength of these trends will make it possible for you to prioritize what to fix now and what to test later.

The results of a well-conducted user testing study are almost always startling. You will find issues you never would have thought were issues. Your audience will outright give you some great A/B testing ideas, and through observation, you will develop new A/B test ideas that would have never occurred to you otherwise.

Simply stated, user testing studies supercharge your optimization program.

 

You Will Need Some Help

A single user test typically runs 20–30 minutes. That means a single study with 10 participants will produce 200–300 minutes of video. That is a lot of video to review, analyze, and synthesize.

It's also important to document your study and build a plan around your learnings. These study reports should contain a summary of findings, trends, critical improvements, and A/B test ideas that will help you and your organization plan and take action. It's the equivalent of putting together a focused business performance report from extensive data and analysis.

Without this study report, you will just have a database full of videos that you’ll dread sifting through.

 

SharpTheory’s UX Learning Solution Can Help

At SharpTheory, we specialize in conducting user testing studies for companies. We offer a service called our UX Learning Solution: a monthly user testing study that includes the following:

  • Analysis and synthesis of up to 300 minutes of user testing videos
  • Summary of the most interesting findings
  • Breakdown of strong and moderate trends identified
  • List of critical fixes that should be made immediately (no A/B testing required)
  • A/B test ideas that should be added to your testing roadmap
  • 3–5 minute highlight reel of the most interesting parts of the study that can be easily shared

Every study has a money-back guarantee: if you don’t think a study is valuable, we’ll run another one for free or just give you a refund.

If you’d like to learn more about how it works, what it includes, and what it costs, here’s a short overview.

If you’re ready to start on this journey and supercharge your optimization program, please contact us to schedule an introductory chat.

By Scott Olivares

An introductory guide to the most popular types of A/B Tests

How many times have you heard someone use that dreaded blue button vs. yellow button example when explaining the value of A/B testing? Yes, it’s an easy-to-understand example, but it really undersells the power of A/B testing and isn’t very creative.

To add more flavorful examples to our idea bank, this post covers a few standard digital A/B testing experiments and their pros and cons from my perspective. I also welcome my fellow experimenters to comment and contribute their own examples!

1. Complete redesign tests

I’m going to say it, and I’m not ashamed to say it: I REALLY don’t like site-wide redesigns. I think they often turn into creative expression exercises justified by light research, and they tend to fail. And even when extensive research is conducted, and really talented user experience professionals create them, they still tend to fail. If you’re curious about why they tend to fail or how to avoid failure, check out my other post: “What’s Missing from Your Website Redesign Plan?”

Now that I've gotten that out of my system, I'll admit that sometimes, but not very often, they are necessary. In a redesign test, everything is fair game and can be changed. You can change, move, remove, and add whatever your heart desires in the name of improving your digital experience.

Pros: Redesign tests are good for when you have something that you know is a bad experience and needs to be completely replaced. At that point, incremental optimization is not going to cut it. Sometimes what you need is to start over and not to optimize something that is bad. In other words, as my friend Mike King once told me, “you can shine a piece of shit every single day, but at the end of a month you’ll simply have a really shiny piece of shit.”

Cons: When you change too many things at once, it's practically impossible to learn what helped or hurt your digital experience. You may have used a new hero image that increased conversion by 10%, but changed an important description that decreased conversion by 15%. Since you changed both at once, all you would observe is a net decrease in conversion of roughly 5%, which could cause you to disregard an awesome hero image. It's a shotgun approach that I often compare to high-stakes gambling. Redesigns can be great when executed well, but they come with a lot of risk, are difficult to plan around, and you always give up a lot of learning. Use them sparingly.
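The arithmetic behind that example is worth making explicit. A minimal sketch (hypothetical numbers, treating the two lifts as compounding; the exact net figure shifts slightly if you simply add them instead, but the point is the same) shows why a bundled redesign only lets you observe the net effect:

```python
# Sketch: why a bundled redesign hides which change helped or hurt.
# All numbers are hypothetical, mirroring the example in the text.
baseline = 0.040          # baseline conversion rate (4%)

hero_lift = 0.10          # new hero image alone: +10% relative lift
copy_lift = -0.15         # new description alone: -15% relative drop

# Tested together, the individual effects compound and you only see the net.
combined = baseline * (1 + hero_lift) * (1 + copy_lift)
net_change = combined / baseline - 1

print(f"combined conversion rate: {combined:.4f}")
print(f"observed net change: {net_change:+.1%}")
```

From the outside, all you can measure is `net_change`; nothing in the data attributes the result to the hero image or the copy, which is exactly why single-variable tests preserve learning.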

Tip: If you're going to redesign your entire experience, make sure you maximize this opportunity by doing a lot of due diligence in understanding what your visitors want. Don't just assume that you can think like a visitor. Have an experienced analyst dive into your web analytics to understand how visitors use your current site and develop some actionable insights. Conduct some usability testing. Go through common use cases. And most importantly, optimize while you slowly ramp your new experience – don't ramp it to too many people until you know it converts at least as well as your old website.

2. Description tests

Words matter, A LOT. Changing what you say in a page title, photo caption, call to action, or product description can have dramatic effects with practically no creative assets needed.

Pros: Description tests are a great way to incrementally improve a page that you don’t think is that bad. Additionally, since most of these experiments involve testing some kind of HTML text, you’re unlikely to need a designer or heavy web development. They are also pretty easy to implement because changing text usually isn’t too technically complicated.

Cons: People usually don't take the time to read on the Internet, so a common outcome of description tests is no measurable conversion difference. To make an impact, you need to make sure that you are testing descriptions that people actually read.
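A flat result also isn't always meaningful on its own: with too little traffic, even a real lift can look like "no difference." As a rough sanity check, a standard two-proportion z-test tells you whether the gap you observed is larger than chance would explain. Here is a minimal standard-library sketch with hypothetical numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference in conversion rates
    between variants A and B larger than chance would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical description test: 400 vs. 430 conversions on 10,000 visitors each.
z, p = two_proportion_z(400, 10_000, 430, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
# p well above 0.05 -> this 7.5% relative lift is not detectable at this traffic level
```

If the p-value is large, the honest conclusion is "we can't tell yet," not "the description doesn't matter" – which is another reason to test copy that visitors demonstrably read.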

Tip: Look through your web analytics to identify any keywords that may drive significant traffic to your page or look through your own internal search from that page to identify any popular keywords. You are more likely to get visitors’ attention if you use keywords or phrases that you know people are interested in.

3. Promotion tests

In promotion-centric tests, you can experiment with different prices, promotions, and the way you position promotions to determine what visitors respond to and how much they respond.

Pros: Promotional tests can be very useful in determining an optimal price for your products and how to best position promotions. They give you the freedom to try promotions on a sample population and ensure that you offer the most effective one to the entire population.

Cons: Promotional tests where you offer discounts can be trickier to interpret. In most cases, larger discounts create higher conversion rates, but if your discounts are larger than your conversion increases, you may start eating into your profits. It takes a little more detailed monitoring and analysis to ensure that you are making a good business decision. In addition, you can really upset your potential customers if you’re not careful. Be wary of offering different pricing to different people. Some big companies have received a lot of bad PR for these types of tactics.
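The "eating into your profits" trap is easy to see with a per-visitor profit calculation. A short sketch (all prices, costs, and conversion rates are hypothetical) shows how a discount can win the conversion-rate comparison and still lose on profit:

```python
# Sketch: a bigger discount can lift conversion yet lower profit per visitor.
# All numbers are hypothetical.
price, unit_cost = 200.0, 120.0

def profit_per_visitor(conversion_rate, discount):
    """Expected profit per visitor after applying a flat discount."""
    margin = price - discount - unit_cost
    return conversion_rate * margin

baseline = profit_per_visitor(0.040, 0)    # 4.0% convert at full price
promo    = profit_per_visitor(0.048, 50)   # +20% conversion lift with $50 off

print(f"baseline: ${baseline:.2f}/visitor, promo: ${promo:.2f}/visitor")
# prints "baseline: $3.20/visitor, promo: $1.44/visitor"
```

Here the promotion raises conversion by 20% but cuts profit per visitor by more than half, because the $50 discount consumes most of the $80 margin. This is why promotion tests should be judged on profit (or revenue) per visitor, not conversion rate alone.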

Tip: Try offering the same discount to everyone, but position it differently in your test variations. You may find that your customers respond more to a percentage discount (e.g. 25% off) compared to a flat dollar discount (e.g. get $50 off $200). It's also possible that visitors are insensitive to $9.99 off but are more likely to convert if you offer $10 off.

4. Image tests

Sometimes just using a different image on your page can make a big difference. If you’re a vacation-planning site, lifestyle images of people having fun may work better than a nice shot of the beach. On the other hand, if you’re selling furniture, visitors may want to see the piece of furniture by itself.

Pros: Imagery is something that people actually pay attention to, and sometimes the right image can really move the needle in the right direction.

Cons: It’s easy to get lazy with these experiments and let your subjective judgment dictate the images you test. As with any test hypothesis, maximize the likelihood of a successful test by using research-based insight to determine your imagery.

Tip: Many times, less is more. If you have a page with 3 smaller images, try testing one bigger image that makes an impact. When there are too many things competing for your attention, nothing stands out. Also, there are plenty of simple tools out there such as EyeQuant or Attention Wizard that can help evaluate how your images may impact users’ ability to focus on your key value propositions and CTAs.

5. Design Tests

This is where you try to optimize by testing colors, font types and sizes, shapes, or layout. These aren't complete redesigns; rather, you're changing certain aspects of your design.

Pros: Design-oriented tests not only help improve your conversion, they also allow you to test different design prototypes to ensure they don’t hurt conversion before you release them to the world. Remember that avoiding the implementation of something that hurts conversion is even more important than finding something that improves conversion.

Cons: Design-oriented tests usually require help from a designer and, depending on the complexity, a web developer. This often makes them more resource-heavy, political, and slower to produce. Additionally, finding evidence to support that changing things such as button color or font styles will increase conversion may be difficult, so you're sometimes left with creating less data-driven test variations.

6. Targeting tests

Many will say that targeting is not the same as testing. I agree, but you can test your targeting approach to ensure that you are being as effective as possible. For example, your targeting may be based on a referring keyword, but conversion rate may be higher if your targeting is based on visitors being Mac users.

Pros: Targeting tests can really help you capture hard-to-reach customers by fine-tuning your personalization tactics. These types of tests are the next level, once you have harvested all the low-hanging fruit with your other testing.

Cons: Testing different targeting tactics is not as straightforward as testing visual items and requires a pretty deep understanding of your visitors. You will also need to invest in a tool that allows you to target because some of the more basic A/B testing offerings may not have this capability.

Tip: For the most part, don't listen to anyone who says they tried targeting and it didn't work. Except in rare situations, targeting will help increase conversion if you find a recipe that resonates with users. As I mentioned in the cons section, good targeting requires a deep understanding of your visitors, but when you figure it out, you can really provide a great experience and improve conversion.

[BONUS]
7. The learning or monitoring campaign

I LOVE learning campaigns. This under-utilized tactic is where you create various visitor segments, track them separately, and show them all the same content. Or perhaps you monitor any patterns based on time of day, day of week, or when your company’s TV commercials air.

Learning campaigns are very helpful because they make it easier to learn about visitor behavior within a very specific page or process that you want to optimize. They can often be helpful in creating a content strategy for your segments and sparking test ideas that you would have never thought about.

Have Any Others to Add?
These are some of the more popular types of tests, but it’s certainly not an exhaustive list. I welcome any of my fellow testing practitioners to add to it or provide their own perspective.