Split testing Under the Hood?

answered
#1

I was wondering how Netlify’s split testing works under the hood?
It’s using Git version control with different branches, but how is it running the branches on the server?

Is it using load balancing or something else?

#2

Hi @JoshuaWalker, we don’t have a dedicated area for split testing, as it doesn’t come up that often, but I am moving your question to #deploying-building for now :slight_smile:

#3

At a high level, we have X copies of your site (1 per branch in the split test) served from our CDN. The choice of which version to serve, as well as the mapping of “been here before, so serve the same version”, happens at the CDN edge, at browse time, without any “round trip” to the backing store to decide which version to serve.

How we choose which version to serve (a rough sketch follows this list):

  • for new visitors (which means “no nf_ab cookie is set for this site” - so for instance curl is always a new visitor unless you set the cookie to a valid value explicitly), we use the percentages you’ve configured to offer a statistically random version of the site. As we serve them a version, we set that nf_ab cookie to a real number between 0 and 1 to indicate what the random selection was, and it maps to those percentages in the test (so a value of .1 would map to the 0-10% portion of the test)
  • for repeat visitors, we see the nf_ab cookie in the HTTP request, and serve them the appropriate version of the site (which will be the one they saw before, unless you’ve changed your settings in the meantime).
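
To make that mapping concrete, here is a minimal TypeScript sketch of how an edge node might pick a branch from the nf_ab cookie and the configured percentages. It is purely illustrative: the branch names, the `pickBranch` helper, and the cookie handling are assumptions for the example, not Netlify’s actual edge code.

```typescript
// Illustrative only: not Netlify's real edge implementation.
// Each entry maps a branch to the share of traffic it should receive.
interface BranchSplit {
  branch: string;
  percentage: number; // e.g. 0.9 and 0.1 for a 90/10 test
}

// Given the nf_ab value (a number in [0, 1)), walk the cumulative
// percentages and return the branch whose slice contains that value.
function pickBranch(splits: BranchSplit[], nfAb: number): string {
  let cumulative = 0;
  for (const { branch, percentage } of splits) {
    cumulative += percentage;
    if (nfAb < cumulative) return branch;
  }
  // Fallback for rounding issues: serve the last branch.
  return splits[splits.length - 1].branch;
}

// New visitor: no cookie yet, so draw a random value and remember it.
// Repeat visitor: reuse the stored value, so they keep seeing the same
// branch as long as the configured percentages haven't changed.
function serve(splits: BranchSplit[], cookieValue?: string): { branch: string; nfAb: number } {
  const nfAb = cookieValue !== undefined ? parseFloat(cookieValue) : Math.random();
  return { branch: pickBranch(splits, nfAb), nfAb };
}

// Example: 90% main, 10% experiment. A cookie of "0.95" lands in the
// experiment slice; "0.1" lands in the 0-90% slice (main).
const splits: BranchSplit[] = [
  { branch: "main", percentage: 0.9 },
  { branch: "experiment", percentage: 0.1 },
];
console.log(serve(splits, "0.95")); // { branch: "experiment", nfAb: 0.95 }
console.log(serve(splits));         // random assignment for a new visitor
```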

Not sure if that’s what you were looking for or not, but happy to go deeper into whatever follow-up questions you might have.

#4

That’s really helpful, thank you.

Currently you can only run 4 different branches? That would allow you to run two different tests at the same time, since 2^2 = 4.

#5

There is only ever one test on a site at a time, using the number of branches you’ve specified. The test goes across every branch in the percentages specified. Our UI limits you to 4 because things do get a bit confusing to display and slice/dice the percentages with the sliders beyond that, but if you needed e.g. 6 for some reason, you could ping us here and we could guide you through using our API to set more branches up.

Another cool trick, which I didn’t mention before, is that you can set up a 0/100% split test. Why would that make any sense? You’d have the 0% branch be a beta, and you can explicitly set an nf_ab cookie to 0 to allow people to use your beta without “accidentally” assigning it to anyone who wasn’t an intended tester.
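
For example, assuming the cookie is set on the site’s own domain, a beta tester could opt themselves in from the browser console (or a small script on a hidden page) with something like the snippet below; the path and expiry are just illustrative choices:

```typescript
// Illustrative: opt a tester into the 0% "beta" branch by pinning
// their nf_ab cookie to 0 before they browse the site.
document.cookie = "nf_ab=0; path=/; max-age=31536000"; // roughly one year
// Reloading the page should now serve the 0% branch; clearing the
// cookie puts the visitor back into the normal random assignment.
```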

#6

I understand what you mean.

Sorry, I should have explained.
To test two changes on the website, it would be ideal to run both changes simultaneously, to check for conflicts between the tests and to speed up the A/B testing process, provided you get enough visitors to reach statistical significance.

I can achieve this by creating another Git branch and merging the other tests in to create each variant.

E.g.:
Branch one - No Tests
Branch two - Test A
Branch three - Test B
Branch four - Test A & B

I would be really interested in the API, because to run 3 experiments you would need to run 8 variants/branches.
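
To illustrate the growth, here is a small sketch (the test and branch names are just placeholders) that enumerates the variant branches needed for a full-factorial test of n independent changes; for 3 changes it prints 2^3 = 8 combinations:

```typescript
// Illustrative: enumerate every combination of n on/off changes,
// i.e. the 2^n branches a full-factorial split test would need.
function variantBranches(tests: string[]): string[] {
  const total = 2 ** tests.length;
  const branches: string[] = [];
  for (let mask = 0; mask < total; mask++) {
    const enabled = tests.filter((_, i) => (mask & (1 << i)) !== 0);
    branches.push(enabled.length ? enabled.join("+") : "no-tests");
  }
  return branches;
}

// Three changes -> 8 branches: no-tests, A, B, A+B, C, A+C, B+C, A+B+C.
console.log(variantBranches(["A", "B", "C"]));
```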

#7

Sure, that workflow makes sense. I don’t have specific API instructions, though I could dig some up if you can’t figure it out using the workflow described here:

If I recall from the last time I worked on this about a year ago, it’s something like:

(first, deploy all branches you’ll use, then):

  • make a GET call to get the ID of the split test for the site
  • use that ID to adjust the settings in a separate call (sketched below)
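
I’m not certain of the exact routes, so treat the endpoint paths, payload shape, and field names below as assumptions rather than confirmed Netlify API details; the point is just the two-step GET-then-update flow described above, authenticated with a personal access token:

```typescript
// Sketch of the two-step flow above. The paths and the branch_tests
// payload are assumptions; check the Netlify API docs before relying
// on anything like this.
const API = "https://api.netlify.com/api/v1";
const token = process.env.NETLIFY_AUTH_TOKEN; // personal access token
const siteId = "YOUR_SITE_ID";
const headers = { Authorization: `Bearer ${token}`, "Content-Type": "application/json" };

async function updateSplitTest() {
  // Step 1: fetch the site's split tests to find the test's id (hypothetical path).
  const listRes = await fetch(`${API}/sites/${siteId}/traffic_splits`, { headers });
  const [test] = await listRes.json();

  // Step 2: use that id to push new branch/percentage settings
  // (hypothetical path and payload shape).
  await fetch(`${API}/sites/${siteId}/traffic_splits/${test.id}`, {
    method: "PUT",
    headers,
    body: JSON.stringify({
      branch_tests: { main: 50, "test-a": 25, "test-b": 25 },
    }),
  });
}

updateSplitTest();
```
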
#8

Hi y’all, I haven’t tried split testing myself, so I don’t know for sure, but this post’s video suggests that the nf_ab cookie contains the branch name as its value instead of a number?! The post is pretty outdated though… :thinking:

Maybe this could be added to the docs as it’s a pretty cool feature :star_struck: