The pay-per-click advertising industry is constantly changing. New features roll out non-stop, and a campaign managed a year ago likely looks different today.
But some seemingly outdated Google Ads tactics are still useful when adapted. Here are four.
Quality score
Google defines Quality Score as “a diagnostic tool to give you an idea of the quality of your ad compared to that of other advertisers”.
The score grades each keyword on a scale of 1 to 10. A higher number indicates consistency throughout the search experience. For example, if a user searches for “oval coffee tables,” the ad and the subsequent landing page should use the same terms. Keywords with higher Quality Scores generally earn lower costs per click over time.
One problem with Quality Score, however, is that it weights click-through rate more heavily than conversions. A keyword may have a poor Quality Score yet convert well. Fine-tuning that keyword to chase a better score could raise the Quality Score but reduce conversions.
Quality Score isn’t unimportant, but it shouldn’t be the deciding factor. For keywords with low Quality Scores that also don’t convert, consider the fixes below (a reporting sketch for telling the two cases apart follows the list):
- Adding negative keywords,
- Inserting the target keyword(s) more often in the ad copy,
- Updating the landing page to match the ad’s message.
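Here is a minimal sketch of such a report, assuming the official google-ads Python client; the customer ID and config file are placeholders, and the field names come from the Google Ads API’s keyword_view report:

```python
from google.ads.googleads.client import GoogleAdsClient

# Assumes credentials in google-ads.yaml; the customer ID is a placeholder.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      ad_group_criterion.keyword.text,
      ad_group_criterion.quality_info.quality_score,
      metrics.conversions,
      metrics.average_cpc
    FROM keyword_view
    WHERE segments.date DURING LAST_30_DAYS
"""

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        qs = row.ad_group_criterion.quality_info.quality_score
        conv = row.metrics.conversions
        cpc = row.metrics.average_cpc / 1_000_000  # micros to dollars
        # Low score but converting: leave it alone. Low score and no
        # conversions: apply the fixes listed above.
        if qs <= 4:
            verdict = "keep as-is" if conv > 0 else "needs work"
            print(row.ad_group_criterion.keyword.text, qs, conv, f"${cpc:.2f}", verdict)
```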
A/B testing
Advertisers once tested ad components by running them against each other in the same ad group. To see which call to action, landing page, or ad copy worked best, an advertiser created two ads, which Google served evenly over time.
This is no longer the case.
Responsive Search Ads contain multiple headlines and descriptions and automatically show the best-performing combinations in search results. Advertisers don’t know which combinations convert, only the aggregate metrics. Even with just two ads, one will inevitably earn a higher impression share based on the campaign’s optimization goal. That lack of transparency and uneven ad delivery prevent accurate testing.
The answer is Ad Variations, which tests a base version of an ad against a trial version in a 50/50 split. To test landing pages, for example, an advertiser asks Google to swap in the trial landing page half the time. Advertisers still can’t see metrics for every combination, but they can see whether the base or the trial performed better.
In the age of automation, Ad Variations are the most effective way to test ad components.
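Judging an Ad Variations result still requires checking that the trial’s lift isn’t noise. Because the split is 50/50, a standard two-proportion z-test fits; this self-contained Python sketch uses hypothetical click and conversion counts:

```python
import math

def two_proportion_z(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical base vs. trial numbers from an Ad Variations experiment.
z, p = two_proportion_z(conv_a=120, clicks_a=4800, conv_b=151, clicks_b=4750)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 suggests a real lift
```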

[Screenshot: Ad Variations experiments reveal the overall performance of the winning version.]
Match type ad groups
Creating ad groups by match type was common before close variants and the phasing out of broad match modifier.
For example, keywords themed around “oval coffee table” would have required two ad groups with the same keywords: one containing only exact match keywords, the other phrase match. Crucially, every keyword in the exact match group would be added as a negative in the phrase match group, letting the advertiser control which ads appear. Exact match searches would trigger one set of ads; phrase match searches the other.
Setting the campaign to manual bidding lets advertisers control the cost (and the copy) for each variation, such as $2 on an exact match keyword and $1.50 on its phrase match counterpart.
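To make the structure concrete, here is an illustrative Python sketch; the helper, ad group names, and bids are hypothetical, but the cross-negative logic mirrors the setup described above:

```python
def split_by_match_type(theme, keywords, exact_bid=2.00, phrase_bid=1.50):
    """Build the two-ad-group structure: exact and phrase, with cross-negatives."""
    exact_group = {
        "name": f"{theme} - Exact",
        "keywords": [f"[{kw}]" for kw in keywords],  # exact match syntax
        "max_cpc": exact_bid,
    }
    phrase_group = {
        "name": f"{theme} - Phrase",
        "keywords": [f'"{kw}"' for kw in keywords],  # phrase match syntax
        # Exact negatives keep the phrase group from capturing exact traffic.
        "negative_keywords": [f"[{kw}]" for kw in keywords],
        "max_cpc": phrase_bid,
    }
    return exact_group, phrase_group

exact, phrase = split_by_match_type(
    "Oval Coffee Tables", ["oval coffee table", "oval coffee tables"]
)
```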
Manual bidding
Manual bidding lets advertisers adjust bids for factors such as device and location, but Smart Bidding adjusts for these and more automatically. The machine learning behind Smart Bidding is far more granular than manual bidding; it accounts for users’ browsers and operating systems, for example.
Still, manual bidding is sometimes useful. Bidding above a certain amount on a set of keywords might not be profitable for an advertiser. Manual bidding sets a maximum cost per click, trading the benefits of Smart Bidding for cost control.
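The cap itself can be derived from the target cost per acquisition: since CPA equals CPC divided by conversion rate, a profitable max CPC is roughly the target CPA times the expected conversion rate. A tiny sketch with hypothetical numbers:

```python
def max_cpc(target_cpa: float, conversion_rate: float) -> float:
    """Highest profitable bid: CPA = CPC / CVR, so CPC = CPA * CVR."""
    return round(target_cpa * conversion_rate, 2)

# A $50 target CPA at a 4% conversion rate caps bids at $2.00.
print(max_cpc(target_cpa=50.0, conversion_rate=0.04))  # 2.0
```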