In this segment, we're going to be talking about A/B testing. At its core, A/B testing, which you may also hear called multivariate testing or split testing, is very simple. Essentially, all you're doing is putting two or more treatments of some asset in front of people to see which performs better. This is typically something you do with tens of thousands of people, so that you have a large enough sample to determine with statistical significance whether one version performs better than another.

Typically, the original version is compared to one or more challenger versions, and this happens on a live website. When you run this type of test, you divert a certain percentage of traffic to the champion and to each challenger version, and you usually balance the split so that an equal number of people sees each version. You want to randomly assign people to each version to ensure there isn't any bias in the samples. In this particular example, the champion version has a conversion rate of 33 percent; challenger one shows an increase in conversion of three percent, and challenger two shows an increase of six percent, so challenger two is the winner in this case.

Every site has a different amount of traffic going to each page, and that traffic dictates how long it will take to collect the data you need to know whether one version wins out over another. Depending on how much data you require and what the volume of visits to each page you want to test is, you'll have to think about how many days you need to run the test in order to collect that data. Obviously, this is going to require some babysitting: you'll have to check on a daily or maybe weekly basis whether you need to continue the test or whether you have enough data and can stop. If you're lucky enough to have a lot of traffic coming to your site, that might mean you only need to run the test for a few hours.

Running an A/B test is also very much a team effort. You'll need business stakeholders to help determine which metrics you want to impact, developers to help build each version of the site, designers to help execute on those designs, and business intelligence to provide access to the metrics and analytics that tell you whether you're actually having an impact on performance.

Let's walk through an example of an A/B test. The objective for this particular test was to understand the impact of the messaging, "pricing" versus "plans," on conversion, and this language appeared in three different places on the website: on the homepage, on the landing page, and in the footer. For this particular test, we decided that we wanted a statistical significance level of 95 percent, which meant we needed a sample size, laid out here, large enough to reach that 95 percent level of statistical significance.
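Before going further with the example, here is a minimal sketch of the random, unbiased assignment described above, assuming each visitor carries a stable identifier such as a cookie value. The variant names, the experiment label, and the hashing scheme are illustrative assumptions, not part of the course example.

```python
import hashlib

# Illustrative variant names; an equal split across champion and challengers.
VARIANTS = ["champion", "challenger_1", "challenger_2"]

def assign_variant(visitor_id: str, experiment: str = "pricing_test") -> str:
    """Map a visitor to a variant with a stable, roughly uniform hash."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]

# The same visitor always sees the same version, and across many visitors
# the buckets land close to an even split without any manual balancing.
print(assign_variant("visitor-12345"))
```

Hashing on a visitor ID keeps the experience consistent across repeat visits while still spreading visitors evenly, and without bias, across the versions; drawing a random number per visitor works too, as long as you persist the choice.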
We also determined that we wanted a minimum detectable effect of 20 percent. Essentially, that means we would need to see at least a 20 percent change in conversion to be confident that the variation was having a positive or negative impact.

So, there's a lot of data here. Each of these rows is information about the number of conversions both before and after the messaging change, and each row is a page. There is also a conversion type for each row: a click, a custom event, or a pageview. We're also seeing the conversion rate for the original language, "pricing," as well as for the variation language, "plans." You'll notice in this particular example that the new version of the messaging failed. There wasn't an improvement in conversion except in one case, on one particular page there at the bottom.

One of the great things about A/B testing is that it's okay to fail, and it's okay to fail because you can fail fast. On many of the sites you'll be working on, reaching statistical significance only takes an hour or so, maybe even less. You can very quickly determine whether something works and take it out of circulation before it negatively impacts your conversion rates or other important metrics in a tangible way.

When you're doing A/B testing, you're giving yourself the freedom to explore a lot of different options. One of the things you want to do is think not only about how to make improvements but also about how you might push the envelope a little in order to get somewhere you may not have considered going. With each variation, you can iterate: you might drop some options, or you might make tweaks to others. Each time, you learn a little more about what works and what doesn't given the conditions and context of your particular site. This means you can start from a place where you don't know exactly what's going to work, but you try four or five different options.

Now, the changes you make may be subtle or not so subtle, but one thing to keep in mind is that you don't want those changes to be too subtle, because that might mean you're missing an approach that works better than the safe things you were already considering. As you continue to iterate, you may find that the safe option works just fine: conversion improves, but not at the level you might see with an option that falls outside the original universe of options you considered. That means a lot of the time you should push yourself to be radical, and that might mean picking something you think is probably going to fail. It might fail, but it may also succeed in a way that exceeds even your expectations.

In summary, there are a few things to think about when you're doing A/B testing. First, make sure you're not doing too much A/B testing; at a certain point, all of those iterations don't return anything at all, because you may have already optimized your experience as far as you can. The differences between the options you test can be very subtle or very significant, and you need to think about how subtle or significant those changes have to be to produce the change you're looking for. You want to choose the paths that matter.
So, when you're looking at the success or failure of a particular approach, pick the ones that clearly succeed so that you can further iterate on and improve those options. Finally, know when to stop. If you've gotten to the point where every version you test barely moves the metrics you're trying to push, then you know it's time to stop.
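To tie the numbers in the pricing-versus-plans example together, here is a hedged sketch of the arithmetic behind a 95 percent significance level and a 20 percent minimum detectable effect, along with the kind of two-proportion test you might use to compare the original and variation conversion rates. The baseline conversion rate, the 80 percent power setting, the daily traffic, and the conversion counts below are placeholder assumptions, not figures from the actual test.

```python
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

def sample_size_per_variant(baseline, mde_relative, alpha=0.05, power=0.80):
    """Visitors needed in each variant to detect a relative lift of mde_relative."""
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = norm.inv_cdf(1 - alpha / 2)   # 1.96 for 95 percent significance
    z_beta = norm.inv_cdf(power)            # 0.84 for the assumed 80 percent power
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return int(numerator / (p2 - p1) ** 2) + 1

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (z, two-sided p-value) for the difference in two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Placeholder numbers: a 5 percent baseline conversion rate and the 20 percent
# minimum detectable effect from the example.
n_needed = sample_size_per_variant(baseline=0.05, mde_relative=0.20)
print(f"visitors needed per variant: {n_needed}")

# Rough duration estimate: two variants' worth of sample divided by daily visits.
daily_visits = 3_000  # placeholder traffic volume for one tested page
print(f"approx. days to run: {2 * n_needed / daily_visits:.1f}")

# Placeholder counts: 520/10,000 conversions for "pricing" vs 540/10,000 for "plans".
z, p = two_proportion_z_test(520, 10_000, 540, 10_000)
print(f"z = {z:.2f}, p = {p:.3f} -> significant at 95 percent? {p < 0.05}")
```

Dividing the required per-variant sample size by a page's daily traffic gives the rough number of days the test needs to run, which is the babysitting math described earlier in this segment.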