Google Ads Automation - Why you should always test first
Google has been pushing the automation and machine-learning features of Google Ads hard in recent years, and 2019 has continued this trend.
What does Google say about automation?
The main selling point of automated strategies is simple – they save you time. If you manage a larger account, keeping on top of performance manually can be a difficult undertaking. Automation promises you the ability to manage large PPC accounts with less effort. It is difficult to argue with that.
They also say that their automated strategies and features will improve performance. This is where things get a little less simple.
The problem with automation for performance
Nobody understands your business, its goals and KPIs better than you – and this includes Google. Because Google Ads serves millions of advertisers globally, it is simply not possible to gain an intimate understanding of an individual advertiser’s goals and tailor automated strategies accordingly.
Here are a few examples of ‘automated’ Google features and how you can test them to make sure they’re right for you.
Responsive Search Ads
Responsive Search Ads are an automated ad type that Google has promoted heavily over the last couple of years. RSAs allow you to create flexible, adaptive ads which rotate based on performance. There is a lot of potential for success here, and RSAs can greatly reduce the time spent split testing ad creatives one by one.
But will they perform better than your trusty Expanded Text Ads? The only way to find out is to test.
Testing Responsive Search Ads
You should test Responsive Search Ads against your existing Expanded Text Ads with your business goals in mind. Here are some things to look out for.
Note which metrics are ‘improving’
You will probably see some difference between the two ad types when testing them. You might, for instance, see a notable improvement in click-through-rate. But what about conversions and cost-per-action?
An improved click-through-rate contributes to a better quality score and a lower CPC, but if the ad is proving inferior with regard to actual conversions, does the higher CTR justify keeping it?
Provided your data is tracked properly and the difference is statistically significant (i.e. unlikely to be the result of anomalies or flukes), you should trust it. If ‘best practices’ seem to be at odds with the data you are seeing, you should side with the data.
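To make "statistically significant" concrete, here is a minimal sketch of a standard two-proportion z-test you could run on your own export data. The numbers below are purely hypothetical (they are not from any real account), and the test itself is a generic statistics technique, not a Google Ads feature.

```python
# Two-proportion z-test: is the difference in conversion rate between
# two ads (e.g. an RSA vs. an ETA) statistically significant?
# Hypothetical numbers for illustration only.
from math import sqrt, erf

def two_proportion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    # Pooled rate under the null hypothesis (no real difference).
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Ad A: 40 conversions from 1,000 clicks; Ad B: 60 conversions from 1,100 clicks.
z, p = two_proportion_z_test(conv_a=40, clicks_a=1000, conv_b=60, clicks_b=1100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up figures the p-value lands above the conventional 0.05 threshold, meaning Ad B's apparently better conversion rate could still be a fluke – the test would need more data before you should act on it.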
As of July 2019, Responsive Search Ads are still in beta.
Automated bidding strategies
There are lots of different automated bidding strategies to choose from in Google Ads, each placing emphasis on a different result. The standard for a number of years has been Enhanced CPC, which allows Google to tweak bids where it sees good or bad performance, within predetermined parameters.
There are now many more to choose from, such as:
- Maximize conversions
- Portfolio Bidding Strategy (PBS)
- Target Outranking Share
- Target CPA
- Target CPM
The mantra of Google’s Recommendations interface is clearly ‘automated is better’.
It will reward you with a higher “Optimization Score” if you adopt more automated systems and will even estimate the improvements you could see. It is important to remember that these estimates are based on certain assumptions that may not apply to your business goals.
Your optimization score is not based on your performance, nor how well optimized your account is to your own business goals. It is based on how many features you have adopted.
This is not to say automated bidding strategies can’t work wonders – they certainly can. Once again, the trick is to find out through testing.
Testing automated strategies with Drafts & Experiments
Making large scale bidding strategy changes is a risky endeavour. After all, what if the new strategy tanks completely?
You might consider changing the strategy for a limited period of time, perhaps a week or month, and then comparing results. This isn’t a good idea for a simple reason…
External factors, including weather, seasonality and world events, can change the way your market behaves. If you start a two-week test on a new bidding strategy and there’s suddenly a heatwave, can you trust that your strategy change made the difference? Or was it the weather? Either way, this is not a reliable way of testing.
How Drafts & Experiments work
To test new settings fairly, you must give each as close to identical conditions as possible. This means the ads need to be split tested at the same time, to the same audience and in the same locations. Drafts & Experiments are an excellent way to test new settings.
A draft is essentially a clone of an existing campaign. With this you can adjust settings, keywords, bids, or any other variables without changing the main campaign.
An experiment can then be run, in which you split the original campaign’s budget between the campaign and its draft, so you don’t spend any more than normal.
The original campaign maintains its original settings, while the draft runs with the new bidding strategy applied. The two run side by side for a predetermined period, so if you see a big difference in performance, you know the bidding strategy is the difference maker.
Make sure you run an experiment for long enough to gather sufficient data. This method can be used to test any automated bidding strategies that are applicable to your campaigns.
We strongly recommend running this sort of test before making the switch.
Automation in Google Ads is designed to make your job easier (which is good for you) and ultimately increase your advertising activity (which is good for Google). The fact of the matter is not all automated features will be suitable for what you are trying to achieve.
By approaching each new feature with a little skepticism and vigilance, testing properly and not diving straight in, you are much more likely to get the best out of the features that you do choose to adopt.
If you still prefer the old-school manual approach to bid management, try our CPA-optimised bulk keyword bidding method.