How do you pick the primary photo for your Etsy listing? You choose the best one and put it first—but which photo is “the best one”? Is it a lifestyle image that portrays your product in use? A detail shot that shows off the hand-crafted aspects? How sure can you really be that THAT image is the best one for bringing customers to your shop? As with all things business, it’s best to base your decision on the numbers! And we have a way for you to do just that.

Check out this example.

Which of these photos is better? Here, the seller was using the lifestyle image of the sandals being worn on somebody’s feet as their primary image. However, their test results showed that the studio shot of the sandals on a plain background would actually receive more views.

How about this one? Which one do you think would generate more sales? Using the logic from the previous test, you might guess that the studio shot of the necklace would bring in more customers, but the test showed exactly the opposite: the image of the necklace being worn was more successful.

People like to have go-to ways of making decisions, rules of thumb, if you will. And as far as photos go, there are definite rules to follow: make sure your subject is well lit, in focus, and free from distracting details. But even with excellent product shots, there’s no hard and fast rule that will tell you WHICH of your kick-ass shots is the one to use. We’ve tested tens of thousands of photos, and we’ve learned that the only way to reliably pick the best photo (aka the one that will bring the most clicks to your shop) is to run a test!

So here we have the A/B test.

“What gets measured, gets managed,” right? Making data-backed decisions is the status quo in the world of big business. A/B testing is used by companies like Google, Amazon, Facebook, and Netflix. It is also widely used by researchers and regulatory agencies (like the FDA) around the world to figure out, without a doubt, what works. The difference between those companies and us is that we’re making these same techniques available to small businesses. We don’t want you to have to spend thousands of dollars for this kind of information! We want to let you do your thing while we do ours, in the hopes that with our expertise you’re able to increase your sales and build a more successful business.

Okay, but what IS it?

A/B testing is essentially a method of trying out different options, but in a way that is systematically designed to produce reliable results. To make sure our experiments are trustworthy, we have to account for a variety of variables that could skew the results.

It’s really important to account for confounding variables. Well, what are those? Let’s say we were to try photo A on Mondays and photo B on Thursdays, and we find that photo A is better. Is that really because the photo is better, or does the shop just get more traffic on Mondays than on Thursdays? We have to somehow avoid or compensate for these types of influences.

There’s also the issue of noise: if we test two photos and photo A gets 26 views while photo B gets 31, does that really mean photo B is better, or were some of those views due to chance? If you felt that photo A was better before the test, is that 5-view difference really enough to change your mind? We need to make sure that one image isn’t performing better due to random chance.
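
If you’re curious what “due to chance” looks like, here’s a toy simulation in Python (just the idea, not our actual analysis). It asks: if the two photos were equally good, how often would 57 views split at least as unevenly as 26 vs. 31?

```python
import random

# Toy simulation: if photos A and B were equally good coin flips, how
# often would 57 total views split at least as unevenly as 26 vs. 31?
trials = 100_000
total_views = 26 + 31

uneven = 0
for _ in range(trials):
    b_views = sum(random.random() < 0.5 for _ in range(total_views))
    a_views = total_views - b_views
    if abs(b_views - a_views) >= 5:
        uneven += 1

print(f"Pure-luck gaps of 5+ views: {uneven / trials:.0%} of the time")
```

Run it and you’ll see a gap that size shows up roughly 60% of the time from luck alone, which is exactly why a 26 vs. 31 result shouldn’t change your mind on its own.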

Those pesky confounding variables!

In an ideal world, everything would be perfectly constant when we test photo A and photo B. In reality, it’s not so easy. We could test them both on Saturdays, but we also want the time to be the same. We could test them both on Saturdays at 11am, but since we can’t have two primary photos at once, they’d have to be tested on separate weeks. Plus the actual date might matter too. Trying to balance out all of this stuff seems impossible.

However, A/B testing is able to solve this problem by randomly picking the primary photo at any given time. Um, what? You’re just changing them randomly? Yeah, we are—and it works! The thing is, if we flip-flop the images every few hours, then on average, all of those other factors cancel each other out. True, we won’t be testing the two photos on exactly the same days or at exactly the same times, but we can be pretty dang sure there are no variables systematically biasing our experiment.
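
Here’s a tiny sketch of that idea in Python (hypothetical code, not our real scheduler): at the start of every few-hour window, a coin flip decides which photo is live.

```python
import random

# Hypothetical scheduler sketch: flip a fair coin at the start of each
# few-hour window to decide which primary photo is live. Over a month,
# weekdays, times of day, and dates all get split roughly evenly between
# A and B, so none of them can systematically favor either photo.
windows_per_day = 6   # e.g. a new window every 4 hours
days = 30

schedule = [random.choice(["photo_A", "photo_B"])
            for _ in range(days * windows_per_day)]

print(schedule[:windows_per_day])   # day one's assignments
print(schedule.count("photo_A"), "windows for A,",
      schedule.count("photo_B"), "windows for B")
```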

So noise then…

Once we’re sure we’re rid of any systematic biases, we can deal with noise. We actually build this into our analysis of your data: we use statistical methods to figure out if one photo might just be “getting lucky” and account for that in our results. We’re also able to lessen the impact of luck by running the test for an extended period of time (we know, waiting a full month for your results is annoying). If a photo does well over a longer stretch of time, it’s much less likely that luck is affecting the outcome.
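
Here’s a rough way to see why the long test helps, assuming views would split 50/50 if the photos were truly equal: the typical luck-only gap grows like the square root of the total views, so as a share of the data it keeps shrinking.

```python
import math

# Rough illustration: under a true 50/50 split, the standard deviation of
# the gap (views_A - views_B) is sqrt(total_views). The gap grows in
# absolute terms but shrinks as a fraction of the data collected.
for total_views in (50, 500, 5_000, 50_000):
    luck_gap = math.sqrt(total_views)
    print(f"{total_views:>6} views: typical luck-only gap ~{luck_gap:.0f} "
          f"views ({luck_gap / total_views:.1%} of the total)")
```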

Once we run a test, we take all of this information and process it to figure out empirically whether photo A or photo B will bring in more clicks, and therefore more sales!
(We’re actually pretty conservative here. If your original photo received 5 views and the alternate photo received 10, we do not report that our recommendation will double your traffic. Our algorithms for counteracting noise scale back these kinds of results. That’s one of the ways you know you can trust our projections.)
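
To give a flavor of how that scaling back can work, here’s a common shrinkage trick (a sketch, not necessarily our exact algorithm): blend each photo’s observed views with a prior that assumes the two photos are equal. Small samples get pulled hard toward “no difference”; big samples barely move.

```python
# Shrinkage sketch: pretend each photo starts with `prior_views` views in
# which the photos performed identically, then compute the lift. The
# prior dominates small samples and washes out in large ones.
def shrunken_lift(old_views, new_views, prior_views=20):
    return (new_views + prior_views) / (old_views + prior_views) - 1.0

print(f"{shrunken_lift(5, 10):+.0%}")       # about +20%, not the raw +100%
print(f"{shrunken_lift(500, 1000):+.0%}")   # about +96%, close to the raw +100%
```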

We double check our stuff.

So here’s the kicker: we generate our suggestions using only half of the data. We’re that good! Just kidding. We take the other half of the stats we collect and use it as a baseline to test how many more views our recommendations receive in comparison to the alternative (this is called cross-validation). If the photos we selected were just getting lucky, they won’t hold up as well in that held-out data. On the flip side, if the photos we pick are truly better, we should see them continue to perform that way in the control data. That way, we can be confident we’re not handing you false positives.
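
Here’s a simplified sketch of that holdout check (made-up data, and our real analysis is more careful): pick the winner using one half of the test windows, then see whether it still leads in the half it never saw.

```python
import random

# Simplified holdout check: choose the winner from the "training" half of
# the windows, then verify it still leads in the "holdout" half.
def holdout_confirms_winner(windows):
    random.shuffle(windows)
    half = len(windows) // 2
    train, holdout = windows[:half], windows[half:]

    def total_views(rows, photo):
        return sum(views for p, views in rows if p == photo)

    winner = max(("A", "B"), key=lambda p: total_views(train, p))
    loser = "B" if winner == "A" else "A"
    return total_views(holdout, winner) > total_views(holdout, loser)

# each window: (photo that was live, views it got) -- fabricated numbers
windows = ([("A", random.randint(0, 6)) for _ in range(90)]
           + [("B", random.randint(0, 8)) for _ in range(90)])
print("winner holds up in the holdout:", holdout_confirms_winner(windows))
```

A photo that won its half by luck usually fails this check, while a genuinely better photo keeps on winning.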

Lastly, we give it a dollar value.

We translate your projected traffic increase into total revenue per month by scaling up sales in proportion to the new views for the listings where we suggest changing your photo. That way, you have a real understanding of how changing your photos will impact your shop, and you’re able to make the most informed decisions possible for your business!
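
For example, with hypothetical numbers:

```python
# Worked example (hypothetical numbers): if sales scale in proportion to
# views, a projected 15% view lift becomes a projected 15% sales lift.
monthly_sales = 20           # the listing's current monthly sales
price = 35.00                # item price, in dollars
projected_view_lift = 0.15   # say the test projects 15% more views

extra_sales = monthly_sales * projected_view_lift   # 20 * 0.15 = 3 sales
extra_revenue = extra_sales * price                 # 3 * $35 = $105
print(f"projected extra revenue: ${extra_revenue:.2f} per month")
```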

Want to know more?

If you ever have any questions about A/B testing or what we do here at Whatify, feel free to reach out to us! We’re happy to chat.