Creating a Campaign Draft and Experiment
In Google Ads, 'Drafts' and 'Experiments' are accessed at the campaign level. The idea is to test a change (e.g., a new audience or a new schedule), then measure the results to understand the impact of that change before applying it to the campaign.
Clicking on 'Drafts' lets you create a new campaign draft. Treat the draft as a clone: as the name 'campaign draft' implies, you need to select a campaign, because the clone is made at the campaign level.
Select the campaign you want to clone...
A tip here: name the draft after the campaign you are cloning. This is still a draft version, and the idea is to change perhaps one element and run it as an experiment for a set period, e.g., 30 days.
Identify one element you wish to split test, ideally on a fifty-fifty split, and determine from the experiment whether it produced a lift in conversions.
Another tip: include the element you are split testing in the name, e.g., ad schedule. It makes for easier reference, because at one look you can tell what the experiment is testing.
Click on 'SAVE'.
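Everything above happens in the web UI, but the same draft can also be created by script. Below is a minimal sketch using the official google-ads Python client, assuming an older API version (v11 or earlier, where CampaignDraftService still exists; newer versions replaced drafts with ExperimentService). The customer and campaign IDs are placeholders.

```python
# Minimal sketch: create a campaign draft via the Google Ads API.
# Assumes the google-ads Python client against an API version <= v11.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
customer_id = "1234567890"       # placeholder account ID
base_campaign_id = "9876543210"  # placeholder: the campaign to clone

draft_service = client.get_service("CampaignDraftService")
campaign_service = client.get_service("CampaignService")

operation = client.get_type("CampaignDraftOperation")
draft = operation.create
# Point the draft at the campaign we are cloning.
draft.base_campaign = campaign_service.campaign_path(customer_id, base_campaign_id)
# Tip from above: reuse the campaign name and flag the element under test.
draft.name = "SkillsFuture-Courses-Ad-Schedule-9-to-6"

response = draft_service.mutate_campaign_drafts(
    customer_id=customer_id, operations=[operation]
)
print(f"Created draft: {response.results[0].resource_name}")
```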
You will see 'Draft status: Drafted' and...
the original campaign that you wanted to test. Do NOT click 'APPLY' yet, because clicking it applies the drafted settings to your original campaign, and at this point we have no statistically significant data on whether the new change results in higher conversions.
Instead, we are going to convert this draft to an experiment to see the impact of our proposed change. If the impact is positive, we can then click 'APPLY'.
In this example, we are testing the ad schedule. So click on 'Ad schedule...
Let’s say we are running the ad Monday to Friday, from 9am to 6pm.
So we configure those settings here. The important thing is to make no other changes, so that we can attribute the rise (or drop) in conversions to the ad scheduling alone.
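For reference, here is how that same Monday-to-Friday, 9am-to-6pm schedule could be attached to the draft campaign through the Google Ads API Python client. This is a sketch, not a drop-in script: the customer ID and the draft campaign's resource name are placeholders, and hours use 24-hour time (so 6pm is 18).

```python
# Sketch: add Monday-Friday, 9am-6pm ad schedule criteria to a campaign.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
customer_id = "1234567890"                                      # placeholder
draft_campaign_resource = "customers/1234567890/campaigns/111"  # placeholder

criterion_service = client.get_service("CampaignCriterionService")
day_enum = client.enums.DayOfWeekEnum
minute_enum = client.enums.MinuteOfHourEnum

operations = []
for day in (day_enum.MONDAY, day_enum.TUESDAY, day_enum.WEDNESDAY,
            day_enum.THURSDAY, day_enum.FRIDAY):
    op = client.get_type("CampaignCriterionOperation")
    criterion = op.create
    criterion.campaign = draft_campaign_resource
    criterion.ad_schedule.day_of_week = day
    criterion.ad_schedule.start_hour = 9             # 9am
    criterion.ad_schedule.start_minute = minute_enum.ZERO
    criterion.ad_schedule.end_hour = 18              # 6pm
    criterion.ad_schedule.end_minute = minute_enum.ZERO
    operations.append(op)

response = criterion_service.mutate_campaign_criteria(
    customer_id=customer_id, operations=operations
)
print(f"Added {len(response.results)} ad schedule criteria")
```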
After saving, click on 'Experiments' to create a new campaign experiment. Experiments help us measure results and understand the impact of the change before we apply it to the campaign.
Give the experiment a name. (In my example: SkillsFuture-Courses-Ad-Schedule-9-to-6)
Letting it run for a full month should be sufficient, assuming the campaign gets enough traffic in that time to produce meaningful numbers.
Experiment split – leave it at 50%, meaning fifty percent of our budget will go to the original campaign and fifty percent to the experiment created from the draft.
Experiment split options...
Click on 'Search-based' to get statistically significant results faster. Then click 'SAVE'.
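For the scripted route, the equivalent API call creates the experiment from the draft with the same 50% split and the search-based (RANDOM_QUERY) traffic split type. Again a sketch against an older Google Ads API version that still exposes CampaignExperimentService, with placeholder IDs.

```python
# Sketch: create the experiment from the draft, 50/50 search-based split.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
customer_id = "1234567890"                                      # placeholder
draft_resource = "customers/1234567890/campaignDrafts/111~222"  # placeholder

experiment_service = client.get_service("CampaignExperimentService")
experiment = client.get_type("CampaignExperiment")
experiment.campaign_draft = draft_resource
experiment.name = "SkillsFuture-Courses-Ad-Schedule-9-to-6"
experiment.traffic_split_percent = 50  # 50/50 original vs. experiment
experiment.traffic_split_type = (
    client.enums.CampaignExperimentTrafficSplitTypeEnum.RANDOM_QUERY
)

# Google provisions the experiment campaign asynchronously, so this call
# returns a long-running operation rather than the finished experiment.
lro = experiment_service.create_campaign_experiment(
    customer_id=customer_id, campaign_experiment=experiment
)
lro.result()  # block until the experiment campaign is ready
print("Experiment created")
```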
You can see the status of your Campaign experiments.
We can have multiple experiments, but only one draft at a time can run as an experiment for a given campaign. I would also test only one element during the experiment period, with an experiment split of 50%: that makes it easier to attribute the positive (or negative) impact to that element, since only one thing is being tested at a time.
Take note that experiments should not be modified mid-run. Let the experiment finish, then decide what to do based on the statistics you observe.
If the experiment shows a better conversion rate after the testing is over, I would click 'APPLY' to apply the ad schedule changes to my original campaign. This approach is better than duplicating campaigns just for the sake of experimenting, which risks bidding against yourself.
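What counts as "better" should be a statistical judgment, not an eyeballed one. Google Ads flags statistically significant differences in its experiment report, but the arithmetic behind that call is essentially a two-proportion z-test, sketched below with made-up click and conversion counts (not real campaign data):

```python
# Two-proportion z-test: is the experiment's conversion lift significant?
from math import erf, sqrt

def conversion_lift_p_value(conv_a, n_a, conv_b, n_b):
    """Lift and two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))        # standard normal CDF
    return p_b - p_a, 2 * (1 - phi)

# Hypothetical month of data:
# original campaign: 120 conversions from 4,000 clicks;
# experiment (9-to-6 schedule): 150 conversions from 4,000 clicks.
lift, p_value = conversion_lift_p_value(120, 4000, 150, 4000)
print(f"lift = {lift:.2%}, p-value = {p_value:.3f}")
# Apply the draft only if the lift is positive and the p-value is small
# (conventionally below 0.05).
```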
What’s the difference between Draft and Experiment?
An experiment lets you gather data before applying the changes to your campaign. A draft, on the other hand, lets you propose changes and apply them to your campaign immediately. In other words, if you apply draft changes without experimenting, you are not taking a data-driven approach to your advertising strategy.
What’s the difference between Search-based and Cookie-based in the experiment split?
Cookie-based means a given user sees only one version of your campaign, regardless of how many times they search. With a search-based split, a user can be shown either the experiment or the original campaign each time a search occurs.
I did not choose the Cookie-based split because I do not want the searcher to see the same version all the time, which defeats the purpose of the experiment.
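To make the distinction concrete, here is a toy simulation of the two split types. This illustrates the idea only; it is not Google's actual assignment mechanism.

```python
# Toy illustration: cookie-based assigns per user, search-based per search.
import random

def cookie_based_arm(user_id: str) -> str:
    """Same user always lands in the same arm (within this toy run)."""
    return "experiment" if hash(user_id) % 2 else "original"

def search_based_arm() -> str:
    """Each individual search is assigned independently, 50/50."""
    return "experiment" if random.random() < 0.5 else "original"

user = "searcher-42"
print([cookie_based_arm(user) for _ in range(5)])  # five identical arms
print([search_based_arm() for _ in range(5)])      # arms vary per search
```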