In 2018, Google reported an incredible 3,234 improvements to search. That’s more than 8 times the number of updates they reported in 2009 — less than a decade ago — and an average of almost 9 per day. How have algorithm updates evolved over the past decade, and how can we possibly keep tabs on all of them? Should we even try?
To kick this off, here’s a list of every confirmed count we have (sources at end of post):
2018 – 3,234 “improvements”
2017 – 2,453 “changes”
2016 – 1,653 “improvements”
2013 – 890 “improvements”
2012 – 665 “launches”
2011 – 538 “launches”
2010 – 516 “changes”
2009 – 350–400 “changes”
Unfortunately, we don’t have confirmed data for 2014-2015 (if you know differently, please let me know in the comments).
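As a quick sketch of the per-day arithmetic behind these figures (the 375 used for 2009 is my own midpoint assumption, since Google only gave a range):

```python
# Confirmed yearly counts listed above; 2009 is approximated at the
# midpoint (375) of Google's stated 350-400 range.
yearly_counts = {
    2018: 3234, 2017: 2453, 2016: 1653,
    2013: 890, 2012: 665, 2011: 538,
    2010: 516, 2009: 375,
}

def per_day(count, year):
    """Average reported changes per calendar day for a given year."""
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return round(count / (366 if leap else 365), 2)

rates = {year: per_day(count, year) for year, count in yearly_counts.items()}
print(rates[2018])  # 8.86, i.e. almost 9 changes per day in 2018
```

Even with the rough 2009 number, the rate works out to roughly one change per day back then versus nearly nine per day in 2018.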
A brief history of update counts
Our first peek into this data came in spring of 2010, when Google’s Matt Cutts revealed that “on average, [Google] tends to roll out 350–400 things per year.” It wasn’t an exact number, but given that SEOs at the time (and to this day) were tracking at most dozens of algorithm changes, the idea of roughly one change per day was eye-opening.
In fall of 2011, Eric Schmidt was called to testify before Congress, and revealed our first precise update count and an even more shocking scope of testing and changes:
“To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm.”
Later, Google would reveal similar data in an online feature called “How Search Works.” Unfortunately, some of the earlier years are only available via the Internet Archive, but here’s a screenshot from 2012:
Note that Google uses “launches” and “improvements” somewhat interchangeably. This diagram provided a fascinating peek into Google’s process, and also revealed a startling jump from 13,311 precision evaluations (changes that were shown to human evaluators) to 118,812 in just two years.
Is the Google algorithm heating up?
Since MozCast has kept the same keyword set since almost the beginning of data collection, we’re able to make some long-term comparisons. The graph below represents five years of temperatures. Note that the system was originally tuned (in early 2012) to an average temperature of 70°F. The redder the bar, the hotter the temperature …
You’ll notice that the temperature ranges aren’t fixed — instead, I’ve split the scale into eight roughly equal buckets (i.e. they represent the same number of days). This gives us a little more sensitivity in the more common ranges.
The trend is pretty clear. The latter half of this 5-year timeframe has clearly been hotter than the first half. While a warming trend is evident, though, it’s not the steady increase over time that Google’s update counts might suggest. Instead, we see a stark shift in the fall of 2016 and a very hot summer of 2017. More recently, we’ve actually seen signs of cooling. Below are the means and medians for each year (note that 2014 and 2019 are partial years):
2019 – 83.7° / 82.0°
2018 – 89.9° / 88.0°
2017 – 94.0° / 93.7°
2016 – 75.1° / 73.7°
2015 – 62.9° / 60.3°
2014 – 65.8° / 65.9°
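To make the mean/median pairing concrete, here’s a minimal sketch of the calculation over a hypothetical week of temperatures (the values are invented for illustration, not real MozCast data):

```python
from statistics import mean, median

# Hypothetical daily MozCast-style temperatures (degrees F) for one week
daily_temps = [68.2, 71.5, 94.0, 112.6, 75.3, 88.1, 70.9]

avg = round(mean(daily_temps), 1)    # the mean is pulled up by the 112.6 spike
mid = round(median(daily_temps), 1)  # the median resists the outlier
print(avg, mid)  # 82.9 75.3
```

Reporting both guards against a handful of extremely hot days skewing the picture: in this invented sample, a single spike drags the mean well above the median.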
Note that search engine rankings are naturally noisy, and our error measurements tend to be large (making day-to-day changes hard to interpret). The difference from 2015 to 2017, however, is clearly significant.
Are there really 9 updates per day?
No, there are only 8.86 – feel better? Ok, that’s probably not what you meant. Even back in 2009, Matt Cutts said something pretty interesting that seems to have been lost in the mists of time…
“We might batch [algorithm changes] up and go to a meeting once a week where we talk about 8 or 10 or 12 or 6 different things that we would want to launch, but then after those get approved … those will roll out as we can get them into production.”
In 2016, I did a study of algorithm flux that demonstrated a weekly pattern evident during clearer episodes of ranking changes. From a software engineering standpoint, this just makes sense — updates have to be approved and tend to be rolled out in batches. So, while measuring a daily average may help illustrate the rate of change, it probably has very little basis in the reality of how Google handles algorithm updates.
Do all of these algo updates matter?
Some changes are small. Many improvements are likely not even things we in the SEO industry would consider “algorithm updates” — they could be new features, for example, or UI changes.
As SERP verticals and features evolve, and new elements are added, there are also more moving parts subject to being fixed and improved. Local SEO, for example, has clearly seen an accelerated rate of change over the past 2-3 years. So, we’d naturally expect the overall rate of change to increase.
A lot of this is also in the eye of the beholder. Let’s say Google makes an update to how they handle misspelled words in Korean. For most of us in the United States, that change isn’t going to be actionable. If you’re a Korean brand trying to rank for a commonly misspelled, high-volume term, this change could be huge. Some changes also are vertical-specific, representing radical change for one industry and little or no impact outside that niche.
On the other hand, you’ll hear comments in the industry along the lines of “There are 3,000 changes per year; stop worrying about it!” To me that’s like saying “The weather changes every day; stop worrying about it!” Yes, not every weather report is interesting, but I still want to know when it’s going to snow or if there’s a tornado coming my way. Recognizing that most updates won’t affect you is fine, but it’s a fallacy to stretch that into saying that no updates matter or that SEOs shouldn’t care about algorithm changes.
Ultimately, I believe it helps to know when major changes happen, if only to understand whether rankings shifted due to something we did or something Google did. It’s also clear that the rate of change has accelerated, no matter how you measure it, and there’s no evidence to suggest that Google is slowing down.
Contrary to popular belief, SEO and PPC aren’t at opposite ends of the spectrum. There are plenty of ways the two search disciplines can work together for benefits all around, especially when it comes to optimizing your Google Ads. In this week’s edition of Whiteboard Friday, we’re thrilled to welcome Dana DiTomaso as she explains how you can harness the power of both SEO and PPC for a better Google experience overall.
Hey, Moz readers. My name is Dana DiTomaso, and I’m President and partner at Kick Point. We’re a digital marketing agency way up in the frozen wilds of Edmonton, Alberta. Today I’m going to be talking to you about PPC, and I know you’re thinking, “This is an SEO blog. What are you doing here talking about PPC?”
But one of my resolutions for 2019 is to bring together SEO and PPC people, because SEO can learn a lot from PPC, and yes, PPC, you also can learn a lot from SEO. I know PPC people are like, “We just do paid. It’s so great.” But trust me, both can work together. In our agency, we do both SEO and PPC, and we work with a lot of companies who have one person, sometimes two and they’re doing everything.
One of the things we try to do is help them run better Ads campaigns. Here I have tips on the things we see all the time when we start working with a new Ads account, the things we end up fixing, and hopefully I can pass these on to you so you can fix them before you have to call an agency to come and fix them for you. One caveat: this is actually a much longer piece than what I can present on this whiteboard. There’s only so much room.
There is actually a blog post on our website, which you can find here. Please check that out and that will have the full nine tips. But I’m just going to break it down to a few today.
1. Too many keywords
First thing: too many keywords. We see this a lot. Google’s guidance says to group together keywords that have the same sort of theme.
But your theme can be really specific, or it can be kind of vague. This is an example, a real example that we got, where the keywords were all lawyer-themed: “defense lawyer,” “criminal lawyer,” “dui lawyer,” “assault lawyer,” “sexual assault lawyer.” Technically, they all have the same theme of “lawyer,” but that’s way too vague for it all to be in one single ad group, because what kind of ad are you going to show?
“We are lawyers. Call us.” It’s not specific enough. Take for example “dui lawyer,” which I know is a very competitive niche. You can do [dui lawyer], [dui lawyer seattle], and then “dui lawyer” and +dui +lawyer +seattle spelled out a little bit differently. I’ll talk about that in a second. By taking this one thing and breaking it down into much more specific ad groups, you can have much more control.
A consistent theme in all the tips I talk about is having much more control over where you’re spending your money, what keywords you’re spending it on, and what your ads are, plus having a much better landing-page-to-ad match, which is also really important. It just makes your ad life so much easier when you’ve got it all in those ad groups. I know it might seem intimidating. It’s like, “Well, I have three ad groups now. If I follow your tips, I’m going to have 40.”
But at the same time, it’s way easier to manage 40 well organized groups than it is to manage 3 really badly organized groups. Keep that in mind.
2. Picking the right match type
The next thing is picking the right match type. You can see here I’ve got this bracket stuff and this phrase stuff and these plus signs. There are really four match types.
There’s broad match, which is terrible and don’t ever use it. Broad match is just you writing out the keyword, and then Google displays it for whatever it feels is relevant to that particular search. For example, we’ve seen a catering company with “catering” as a keyword showing up for all sorts of catering-related phrases where they can’t actually provide catering, such as someone searching for a venue that only does in-house catering. Or they’re spending money on a catering conference, or just totally irrelevant stuff. Do not use broad match.
Broad match modifier (BMM)
The upgrade from that is what’s called broad match modifier or BMM, and that’s where these plus signs come in. This is really the words dui, lawyer, and seattle in any order, but they all have to exist and other things can exist around that. It could be, “I need a DUI lawyer in Seattle.” “I live in Seattle. I need a DUI lawyer.” That would also work for that particular keyword.
The next type is phrase, and that’s in the quotes. This “dui lawyer” is the example here, and then you can have anything before it or you can have anything after it, but you can’t have something in between it. It couldn’t be “dui who is really great at being a lawyer” for example. Weak example, but you get the idea. You can’t just shove stuff in the middle of a phrase match.
Then exact match is what’s in the brackets here, and that is just those words and nothing else. If I only have [dui lawyer] as a keyword and not [dui lawyer seattle], my ad won’t trigger when somebody searches “dui lawyer seattle.” That’s as specific as possible. You really want to try that for your most competitive keywords.
This is the really expensive stuff, because you do not want to waste one single penny on anything that is irrelevant to that particular search. These are your head terms; every click is really expensive, so you’ve got to make sure you’re getting the most value possible out of those clicks. That’s where you really want to use exact match.
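As a rough sketch of how these four match types behave, here’s a toy matcher (my simplification; real Google Ads matching also handles close variants, plurals, and misspellings):

```python
def matches(keyword: str, query: str, match_type: str) -> bool:
    """Toy simulation of Google Ads match types; not the real algorithm."""
    kw = keyword.lower().split()
    q = query.lower().split()
    if match_type == "exact":   # [dui lawyer]: the query is exactly the keyword
        return q == kw
    if match_type == "phrase":  # "dui lawyer": keyword appears as a contiguous run
        return any(q[i:i + len(kw)] == kw for i in range(len(q) - len(kw) + 1))
    if match_type == "bmm":     # +dui +lawyer: every term present, in any order
        return all(term in q for term in kw)
    if match_type == "broad":   # broad: loosely related; modeled here as any overlap
        return any(term in q for term in kw)
    raise ValueError(f"unknown match type: {match_type}")

# An exact-match keyword does not trigger for a longer query,
# but the same keyword as BMM does:
print(matches("dui lawyer", "dui lawyer seattle", "exact"))  # False
print(matches("dui lawyer", "dui lawyer seattle", "bmm"))    # True
```

Note the phrase case: “dui who is really great at being a lawyer” fails phrase match because the words aren’t contiguous, exactly as described above.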
3. Only one ad per group
Next tip: the next thing we see is a lot of people who have only one ad per group.
Have at least 3 ads per group
This is not a tip. This is a criticism up here. The thing is that maybe, again, you think it’s easy for management, but it’s really hard to see what’s going to work, because if you’re not always testing, how are you going to know if you could do better? Make sure to have at least three ads per group.
Add emotional triggers into your ad copy
Then look at your ad copy. We see a lot of just generic like, “We are the best lawyers. Call us.” There’s nothing there that says I need to call these people. Really think about how you can add those emotional triggers into your copy. Talk to your client or your team, if you work in-house, and find out what are the things that people say when they call. What are the things where they say, “Wow, you really helped me with this” or, “I was feeling like this and then you came in and I just felt so much better.”
That can really help to spice up your ads. We don’t want to get too fancy with this, but we certainly want to make something that’s going to help you stand out. Really add those emotional triggers into your ad copy.
Make sure to have a call to action
Then the next thing is making sure to have a call to action, which seems basic because you think it’s an ad. If you click it, that’s the call to action. But sometimes people on the Internet, they’re not necessarily thinking. You just want to say, “You know what? Just call me or email me or we’re open 24 hours.”
Just be really specific on what you want the person to do when they look at the ad. Just spell it out for them. I know it seems silly. Just tell them. Just tell them what you want them to do. That’s all you need to do.
Then make sure you add in all of the extensions. In Google Ads, if you’re not super familiar with the platform, there’s a section called Extensions. These are things like when the address shows up under an ad, or you’ve got those little links that come up, or you’ve got somebody saying we’re open 24 hours, for example. There are all sorts of different extensions that you can use. Just put in all the extensions that you possibly can for every single one of your groups.
Then they won’t all trigger at the same time, but at least they’re there, and it’s possible that they could trigger. If they do, that gives your ad more real estate versus your competition, which is really great on mobile because ads take up a lot of space at the top of a mobile search. You want to push your competition as far down that search page as you possibly can so you own as much of that property as possible. One thing that I do see people doing incorrectly with extensions, though, is setting extensions at, say, the campaign level when you have different ad groups that cover different themes.
Going back to this example over here, with the different types of lawyers, let’s say you had an extension that talks specifically about DUI law, but then it was triggering on say sexual assault law. You don’t want that to happen. Make sure you have really fine-tuned control over your different extensions so you’re showing the right extension with the right type of keyword and the right type of ad. The other thing that we see a lot is where people have location extensions and they’re showing all the location extensions where they should not be showing all the location extensions.
You’ve got an ad group for, say, Seattle, and it’s talking about this new home development that you have, and because you just loaded in all of your location extensions, suddenly you’re showing extensions for something in say San Francisco. It’s just because you haven’t filtered properly. Really double-check to make sure that you’ve got your filter set up properly for your location extensions and that you’re showing the right location extension for the right ad.
I know that Google says, “We’ll pick the locations closest to the client.” But you don’t know where that person is searching right there. They could be in San Francisco at that moment and searching for new home builds in Seattle, because maybe they’re thinking about moving from San Francisco to Seattle. You don’t want them to see the stuff that’s there. You want them to see the stuff that’s at the place where they’re intending to be. Really make sure you control that.
4. Keep display and search separate
Last, but not least, keep display and search separate.
By default, Google so helpfully says, “We’ll just show your ads everywhere. It’s totally cool. This is what we want everyone to do.” Don’t do that. This is what makes Google money. It does not make you money. The reason why is because display network, which is where you’re going to a website and then you see an ad, and search network, when you type in the stuff and you see an ad, are two totally different beasts.
Avoid showing text ads on the display network for greater campaign control
It’s really a different type of experience. To be honest, if you take your search campaigns, which are text-based ads, and show them on websites, you’re showing a boring text ad on a website that already has 50 blinky things saying “click here.” People probably aren’t seeing it, and maybe they have an ad blocker installed. But if they do see it, your text ad, which is kind of boring and not intended for that medium, is not going to be the thing that stands out.
Really you’re just wasting your money, because you’ll end up with lower relevancy and fewer clicks, and then Google thinks that your group is bad. Then you’ll end up paying more because Google thinks your group is bad. It really gives you that extra control by saying, “This is the search campaign. It’s only on search. This is the display campaign. It’s only on display.” Keep the two of them totally separate. Then you have lots of control over the search ads being for search and the display ads being for display.
Don’t mix those two up. Make sure to uncheck that default. Definitely there are more tips on our blog here, but I hope that this will help you get started. SEOs, if you’ve never done a PPC campaign in your life, I recommend just setting one up. Put 50 bucks behind that thing and try it out, because I think what will really help you is understanding more of how people search. As we get less and less keyword data from the different tools we use to figure out what the heck people are googling when they search for our business, ads give you some of that data back.
That’s where ads can be a really great ally in trying to get better SEO results. I hope you found this enjoyable. Thanks so much.
“A good chef has to be a manager, a businessman and a great cook. To marry all three together is sometimes difficult.” – Wolfgang Puck
I like this quote. It makes me hear phones ringing at your local search marketing agency, with aspiring chefs and restaurateurs on the other end of the line, ready to bring experts aboard in the “sometimes difficult” quest for online visibility.
Is your team ready for these clients? How comfortable do you feel talking restaurant Local SEO when such calls come in? When was the last time you took a broad survey of what’s really ranking in this specialized industry?
Allow me to be your prep cook today, and I’ll dice up “best restaurant” local packs for major cities in all 50 US states. We’ll julienne Google Posts usage, rough chop DA, make chiffonade of reviews, owner responses, categories, and a host of other ingredients to determine which characteristics are shared by establishments winning this most superlative of local search phrases.
The finished dish should make us conversant with what it takes these days to be deemed “best” by diners and by Google, empowering your agency to answer those phones with all the breezy confidence of Julia Child.
I looked at the 3 businesses in the local pack for “best restaurants (city)” in a major city in each of the 50 states, examining 11 elements for each entry (150 listings, yielding 1,650 data points). I set aside the food processor for this one and did everything manually. I wanted to avoid the influence of proximity, so I didn’t search for any city in which I was physically located. The results, then, are what a traveler would see when searching for top restaurants in destination cities.
Now, let’s look at each of the 11 data points together and see what we learn. Take a seat at the table!
Categories prove no barrier to entry
Which restaurant categories make up the dominant percentage of local pack entries for our search?
You might think that a business trying to rank locally for “best restaurants” would want to choose just “restaurant” as their primary Google category as a close match. Or, you might think that since we’re looking at best restaurants, something like “fine dining restaurants” or the historically popular “French restaurants” might top the charts.
Instead, what we’ve discovered is that restaurants of every category can make it into the top 3. Fifty-one percent of the ranking restaurants hailed from highly diverse categories, including Pacific Northwest Restaurant, Pacific Rim Restaurant, Organic, Southern, Polish, Lebanese, Eclectic and just about every imaginable designation. American Restaurant is winning out in bulk with 26 percent of the take, and an additional 7 percent for New American Restaurant. I find this an interesting commentary on the nation’s present gustatory aesthetic as it may indicate a shift away from what might be deemed fancy fare to familiar, homier plates.
Overall, though, we see the celebrated American “melting pot” perfectly represented when searchers seek the best restaurant in any given city. Your client’s food niche, however specialized, should prove no barrier to entry in the local packs.
High prices don’t automatically equal “best”
Do Google’s picks for “best restaurants” share a pricing structure?
It will cost you more than $1000 per head to dine at Urasawa, the nation’s most expensive eatery, and one study estimates that the average cost of a restaurant meal in the US is $12.75. When we look at the price attribute on Google listings, we find that the designation “best” is most common for establishments with charges that fall somewhere in between the economical and the extravagant.
Fifty-eight percent of the top ranked restaurants for our search have the $$ designation and another 25 percent have the $$$. We don’t know Google’s exact monetary value behind these symbols, but for context, a Taco Bell with its $1–$2 entrees would typically be marked as $, while the fabled French Laundry gets $$$$ with its $400–$500 plates. In our study, the cheapest and the costliest restaurants make up only a small percentage of what gets deemed “best.”
There isn’t much information out there about Google’s pricing designations, but it’s generally believed that they stem at least in part from the attribute questions Google sends to searchers. So, this element of your clients’ listings is likely to be influenced by subjective public sentiment. For instance, Californians’ conceptions of priciness may be quite different from North Dakotans’. Nevertheless, on the national average, mid-priced restaurants are most likely to be deemed “best.”
Of anecdotal interest: The only locale in which all 3 top-ranked restaurants were designated at $$$$ was NYC, while in Trenton, NJ, the #1 spot in the local pack belongs to Rozmaryn, serving Polish cuisine at $ prices. It’s interesting to consider how regional economics may contribute to expectations, and your smartest restaurant clients will carefully study what their local market can bear. Meanwhile, 7 of the 150 restaurants we surveyed had no pricing information at all, indicating that Google’s lack of adequate information about this element doesn’t bar an establishment from ranking.
Less than 5 stars is no reason to despair
Is perfection a prerequisite for “best”?
Negative reviews are the stuff of indigestion for restaurateurs, and I’m sincerely hoping this study will provide some welcome relief. The average star rating of the 150 “best” restaurants we surveyed is 4.5. Read that again: 4.5. And the number of perfect 5-star joints in our study? Exactly zero. Time for your agency to spend a moment doing deep breathing with clients.
The highest rating for any restaurant in our data set is 4.8, and only three establishments rated so highly. The lowest is sitting at 4.1. Every other business falls somewhere in-between. These ratings stem from customer reviews, and the 4.5 average proves that perfection is simply not necessary to be “best.”
Breaking down a single dining spot with 73 reviews, a 4.6 star rating was achieved with fifty-six 5-star reviews, four 4-star reviews, three 3-star reviews, two 2-star reviews, and three 1-star reviews. 23 percent of diners in this small review set had a less-than-ideal experience, but the restaurant is still achieving top rankings. Practically speaking for your clients, the odd night when the pho was gummy and the paella was burnt can be tossed onto the compost heap of forgivable mistakes.
Review counts matter, but differ significantly
How many reviews do the best restaurants have?
It’s folk wisdom that any business looking to win local rankings needs to compete on native Google review counts. I agree with that, but was struck by the great variation in review counts across the nation and within given packs. Consider:
35 percent of “best”-ranked restaurants have between 100–499 reviews and another 31 percent have between 500–999 reviews. Taken together that’s 66 percent of contenders having yet to break 1,000 reviews.
A restaurant with fewer than 100 reviews has only a 1 percent chance of ranking for this type of search.
Anecdotally, I don’t know how much data you would have to analyze to be able to find a truly reliable pattern regarding winning review counts. Consider the city of Dallas, where the #1 spot has 3,365 reviews, but spots #2 and #3 each have just over 300. Compare that to Tallahassee, where a business with 590 reviews is coming in at #1 above a competitor with twice that many. Everybody ranking in Boise has well over 1,000 reviews, but nobody in Bangor is even breaking into the 200s.
The takeaway from this data point is that the national average review count is 893 for our “best” search, but there is no average magic threshold you can tell a restaurant client they need to cross to get into the pack. Totals vary so much from city to city that your best plan of action is to study the client’s market and strongly urge full review management, without promising that hitting 1,000 reviews will ensure they beat out that mysterious competitor who is sweeping up with just 400 pieces of consumer sentiment. Remember, no local ranking factor stands in isolation.
Best restaurants aren’t best at owner responses
How many of America’s top chophouses have replied to reviews in the last 60 days?
With a hat tip to Jason Brown at the Local Search Forum for this example of a memorable owner response to a negative review, I’m sorry to say I have some disappointing news. Only 29 percent of the restaurants ranked best in all 50 states had responded to their reviews in the 60 days leading up to my study. There were tributes of lavish praise, cries for understanding, and seething remarks from diners, but less than one-third of owners appeared to be paying the slightest bit of attention.
On the one hand, this indicates that review responsiveness is not a prerequisite for ranking for our desirable search term, but let’s go a step further. In my view, whatever time restaurant owners may be gaining back via unresponsiveness is utterly offset by what they stand to lose if they make a habit of overlooking complaints. Review neglect has been cited as a possible cause of business closure. As my friends David Mihm and Mike Blumenthal always say: “Your brand is its reviews,” and mastering the customer service ecosystem is your surest way to build a restaurant brand that lasts.
For your clients, I would look at any local pack with neglected reviews as representative of a weakness. Algorithmically, your client’s active management of the owner response function could become a strength others lack. But I’ll even go beyond that: Restaurants ignoring how large segments of customer service have moved onto the web are showing a deficit of commitment to the long haul. It’s true that some eateries are famous for thriving despite offhand treatment of patrons, but in the average city, a superior commitment to responsiveness could increase many restaurants’ repeat business, revenue and rankings.
Critic reviews nice but not essential
I’ve always wanted to investigate critic reviews for restaurants, as Google gives them a great deal of screen space in the listings:
How many times were critic reviews cited in the Google listings of America’s best restaurants and how does an establishment earn this type of publicity?
With 57 appearances, Lonely Planet is the leading source of professional reviews for our search term, with Zagat and 10Best making strong showings, too. It’s worth noting that 70 of the 150 businesses I investigated surfaced no critic reviews at all. They’re clearly not a requirement for being considered “best,” but most restaurants will benefit from the press. Unfortunately, there are few options for prompting a professional review. To wit:
Lonely Planet — Founded in 1972, Lonely Planet is a travel guide publisher headquartered in Australia. Critic reviews like this one are written for their website and guidebooks simultaneously. You can submit a business for review consideration via this form, but the company makes no guarantees about inclusion.
Zagat — Founded in 1979, Zagat began as a vehicle for aggregating diner reviews. It was purchased by Google in 2011 and sold off to The Infatuation in 2018. Restaurants can’t request Zagat reviews. Instead, the company conducts its own surveys and selects businesses to be rated and reviewed, like this.
The Infatuation — Founded in 2009 and headquartered in NY, The Infatuation employs diner-writers to create reviews like this one based on multiple anonymous dining experiences that are then published via their app. They also have an SMS-based restaurant recommendation system. They do not accept requests from restaurants hoping to be reviewed.
AFAR — Founded in 2009, AFAR is a travel publication with a website, magazine, and app which publishes reviews like this one. There is no form for requesting a review.
As you can see, the surest way to earn a professional review is to become notable enough on the dining scene to gain the unsolicited notice of a critic.
Google Posts hardly get a seat at best restaurant tables
How many picks for best restaurants are using the Google Posts microblogging feature?
As it turns out, only a meager 16 percent of America’s “best” restaurants in my survey have made any use of Google Posts. In fact, most of the usage I saw wasn’t even current. I had to click the “view previous posts on Google” link to surface past efforts. This statistic is much worse than what Ben Fisher found when he took a broader look at Google Posts utilization and found that 42 percent of local businesses had at least experimented with the feature at some point.
For whatever reason, the eateries in my study are largely neglecting this influential feature, and this knowledge could encompass a competitive advantage for your restaurant clients.
Do you have a restaurateur who is trying to move up the ranks? There is some evidence that devoting a few minutes a week to this form of microblogging could help them get a leg up on lazier competitors.
Google Posts are a natural match for restaurants because they always have something to tout, some appetizing food shot to share, some new menu item to celebrate. As the local SEO on the job, you should be recommending an embrace of this element for its valuable screen real estate in the Google Business Profile, local finder, and maybe even in local packs.
Waiter, there’s some Q&A in my soup
What is the average number of questions top restaurants are receiving on their Google Business Profiles?
Commander’s Palace in New Orleans is absolutely stealing the show in my survey with 56 questions asked via the Q&A feature of the Google Business Profile. Only four restaurants had zero questions. The average number of questions across the board was eight.
As I began looking at the data, I decided not to re-do this earlier study of mine to find out how many questions were actually receiving responses from owners, because I was winding up with the same story. Time and again, answers were being left up to the public, resulting in consumer relations like these:
Takeaway: As I mentioned in a previous post, Greg Gifford found that 40 percent of his clients’ Google Questions were leads. To leave those leads up to the vagaries of the public, including a variety of wags and jokesters, is to leave money on the table. If a potential guest is asking about dietary restrictions, dress codes, gift cards, average prices, parking availability, or ADA compliance, can your restaurant clients really afford to allow a public “maybe” to be the only answer given?
I’d suggest that a dedication to answering questions promptly could increase bookings, cumulatively build the kind of reputation that builds rankings, and possibly even directly impact rankings as a result of being a signal of activity.
Looking at both the landing page that Google listings are pointing to and the overall authority of each restaurant’s domain, I found that:
The average PA is 36, with a high of 56. The low of zero came from one restaurant with no website link and another that appeared to have no website at all.
The average DA is 41, with a high of 88. One business lacked a website link despite actually having a DA of 56, and another had no apparent website at all. The lowest linked DA I saw was 6.
PA/DA do not = rankings. Within the 50 local packs I surveyed, 32 featured a #1 restaurant with a lower DA than the establishments sitting at #2 or #3. In one extreme case, a restaurant with a DA of 7 was outranking a website with a DA of 32, and then there were the two businesses with a missing website link or no website. But, for the most part, knowing the range of PA/DA in a pack you are targeting will help you create a baseline for competing.
While pack DA/PA differs significantly from city to city, the average numbers we’ve discovered shouldn’t be out-of-reach for established businesses. If your client’s restaurant is brand new, it’s going to take some serious work to get up to market averages, of course.
Google’s Local Finder “web results” show where to focus management
Which websites does Google trust enough to cite as references for restaurants?
As it turns out, that trust is limited to a handful of sources:
As the above pie chart shows:
The restaurant’s website was listed as a reference for 99 percent of the candidates in our survey. More proof that you still need a website in 2019, for the very good reason that it feeds data to Google.
Yelp is highly trusted at 76 percent and TripAdvisor is going strong at 43 percent. Your client is likely already aware of the need to manage their reviews on these two platforms. Be sure you’re also checking them for basic data accuracy.
OpenTable and Facebook are each getting a small slice of Google trust, too.
Not shown in the above chart are 13 restaurants that had a web reference from a one-off source, like the Des Moines Register or Dallas Eater. A few very famous establishments, like Brennan’s in New Orleans, surfaced their Wikipedia page, although they didn’t do so consistently. I noticed Wikipedia pages appearing one day as a reference and then disappearing the next day. I was left wondering why.
For me, the core takeaway from this factor is that if Google is highlighting your client’s listing on a given platform as a trusted web result, your agency should go over those pages with a fine-toothed comb, checking for accuracy, activity, and completeness. These are citations Google is telling you are of vital importance.
A few other random ingredients
As I was undertaking this study, there were a few things I noted down but didn’t formally analyze, so consider this as mixed tapas:
Menu implementation is all over the place. While many restaurants are linking directly to their own website via Google’s offered menu link, some are using other services like Single Platform, and far too many have no menu link at all.
Reservation platforms like OpenTable are making a strong showing, but many restaurants are leaving this Google listing field blank, too. Many, but far from all, of the restaurants designated “best” feature Google’s “reserve a table” function, which stems from partnerships with platforms like OpenTable and RESY.
Order links are pointing to multiple sources including DoorDash, Postmates, GrubHub, Seamless, and in some cases, the restaurant’s own website (smart!). But, in many cases, no use is being made of this function.
Photos were present for every single best-ranked restaurant. Their quality varied, but they are clearly a “given” in this industry.
Independently-owned restaurants are the clear winners for my search term. With the notable exception of an Olive Garden branch in Parkersburg, WV, and a Cracker Barrel in Bismarck, ND, the top competitors were either single-location or small multi-location brands. For the most part, neither Google nor the dining public associates large chains with “best”.
Honorable mentions go to Bida Manda Laotian Bar & Grill for what looks like a gorgeous and unusual restaurant ranking #1 in Raleigh, NC and to Kermit’s Outlaw Kitchen of Tupelo, MS for the most memorable name in my data set. You can get a lot of creative inspiration from just spending time with restaurant data.
A final garnish to our understanding of this data
I want to note two things as we near the end of our study:
Local rankings emerge from the dynamic scenario of Google’s opinionated algorithms + public opinion and behavior. Doing Local SEO for restaurants means managing a ton of different ingredients: website SEO, link building, review management, GBP signals, etc. We can’t offer clients a generic “formula” for winning across the board. This study has helped us understand national averages so that we can walk into the restaurant space feeling conversant with the industry. In practice, we’ll need to discover the true competitors in each market to shape our strategy for each unique client. And that brings us to some good news.
As I mentioned at the outset of this survey, I specifically avoided proximity as an influence by searching as a traveler to other destinations would. I investigated one local pack for each major city I “visited”. The glad tidings are that, for many of your restaurant clients, there is going to be more than one chance to rank for a search like “best restaurants (city)”. Unless the eatery is in a very small town, Google is going to whip up a variety of local packs based on the searcher’s location. So, that’s something hopeful to share.
What have we learned about restaurant local SEO?
A brief TL;DR you can share easily with your clients:
While the US shows a predictable leaning towards American restaurants, any category can be a contender. So, be bold!
Mid-priced restaurants are considered “best” to a greater degree than the cheapest or most expensive options. Price for your market.
While you’ll likely need at least 100 native Google reviews to break into these packs, well over half of competitors have yet to break the 1,000 mark.
An average of 71 percent of competitors are revealing a glaring weakness by neglecting to respond to reviews – so get in there and start embracing customer service to distinguish your restaurant!
A little over half of your competitors have earned critic reviews. If you don’t yet have any, there’s little you can do to earn them beyond becoming well enough known for anonymous professional reviewers to visit you. In the meantime, don’t sweat it.
About three-quarters of your competitors are completely ignoring Google Posts; gain the advantage by getting active.
Potential guests are asking nearly every competitor questions, and so many restaurants are leaving leads on the table by allowing random people to answer. Embrace fast responses to Q&A to stand out from the crowd.
With few exceptions, devotion to authentic link earning efforts can build up your PA/DA to competitive levels.
Pay attention to any platform Google is citing as a resource to be sure the information published there is complete and accurate.
The current management of other Google Business Profile features like Menus, Reservations and Ordering paints a veritable smorgasbord of providers and a picture of prevalent neglect. If you need to improve visibility, explore every profile field that Google is giving you.
A question for you: Do you market restaurants? Would you be willing to share a cool local SEO tactic with our community? We’d love to hear about your special sauce in the comments below.
Wishing you bon appétit for working in the restaurant local SEO space, with delicious wins ahead!
Google’s search results have seen a whirlwind of major changes in the past two years. Nearly every type of modern-day search query produces a combination of rich results beyond the standard blue links — Featured Snippets, People Also Ask boxes, Knowledge Panels, maps, images, or other enhancements. Flights, hotels, jobs, events, and other searches that were previously only available via external websites can now even be browsed directly on Google.
As search marketers, we are keenly aware that both Google’s evolving landscape and the rise in new, rich results impact our bottom-line — more SERP enhancements and growth in “position 0” means less organic traffic for everyone else. Last year, Rand Fishkin posted a remarkable Whiteboard Friday pointing out the unsettling trend that has emerged from the updates to Google’s interface: there are fewer organic links to external websites as traffic flows to Google-owned assets within the SERP.
We often hear about how the digital marketing community feels about changes to Google’s interface, but it is less common to hear the opinions of the average searcher who is less technically-savvy. At Path Interactive, we conducted a survey of 1,400 respondents to better understand how they search, how they feel about Google’s search results, and the quality of information the search engine provides.
A note about our respondents
72 percent of respondents were based in the U.S., 8 percent in India, and 10 percent in Europe or the U.K. 67.8 percent considered themselves somewhat technically-savvy or not technically-savvy at all. 71.3 percent were under the age of 40.
How Often Do Searchers Use Google to Find Things?
It shouldn’t be much of a surprise that the vast majority of respondents — 77 percent — use Google 3+ times a day to search for things online. The frequency of Google usage is also inversely correlated with age; 80 percent of 13–21-year-olds use Google more than three times per day, while only 60 percent of respondents over 60 search with the same frequency.
How often do searchers click ads vs. organic results?
As many previous studies have shown, the vast majority of searchers prefer clicking on organic results to clicking on advertisements. 72 percent of respondents stated that they either click only on organic results, or on organic results the majority of the time. Age also plays a role in one’s decision to click on a paid or organic result: Searchers ages 60+ are 200 percent more likely than 18–21-year-olds not to discriminate between a paid and organic listing. Instead, they click on whichever result-type best answers their question.
Interactions with organic results
The vast majority of respondents remain on the first page of Google to find an answer to their query. 75 percent of respondents either click on the first one or two results, scan page one looking for the most relevant answer to their query, or visit multiple results from page one. 17 percent of respondents stated part of their search behavior includes looking for content from websites or brands that they trust. Only 7 percent of respondents indicated that they browse past the first results page to see as many results as possible.
According to these results, younger users are more likely to click on the first 1–2 results on page one, while older users are more likely to explore additional results, browsing farther down on the first page — or even onto the second and third pages — to find the information they’re looking for.
This trend raises some interesting questions about user behavior: are older searchers more skeptical, and therefore likely to look for a larger variety of answers to their questions? Are younger users more concerned with getting answers quickly, and more likely to settle for the first result they see? Is this tied to the rise in featured snippets? Will this search behavior become the “new normal” as teens grow older, or do younger searchers change their habits over time? If it is the future, will this trend make it even more difficult for organic results that don’t rank in the top three positions to sustain traffic over time?
How do users feel about featured snippets and the Knowledge Panel?
When it comes to how users feel about featured snippets, the majority of searchers say that their behavior depends on what is displayed in the snippet. Marketers who are concerned that snippets steal traffic away from organic results might be pleased to learn that a relatively low number of respondents — only 22.1 percent — indicate that they generally read the snippet and consider their question answered without clicking the blue link.
However, this data suggests another potentially alarming trend as it relates to featured snippet interactions and age: the youngest searchers (13–18) are 220 percent more likely than the oldest searchers (70–100) to consider their question answered without clicking on the snippet (or any) result. Conversely, the older respondents (60–100) are 170 percent more likely to continue searching, depending on the answer in the snippet. This again points to younger searchers seeming to prioritize getting a response quickly, while older users are more likely to spend time evaluating a variety of results.
When it comes to the trustworthiness of featured snippets, most users are on the fence: 44.5 percent of users consider the information “semi-trustworthy,” and continue searching for answers to their questions. However, age once again plays a role in the results. Young searchers (13–30) are 40 percent more likely than older searchers (50+) to trust the information contained in featured snippets. Additionally, the youngest category of searchers (13–18) is 53 percent more likely than average to trust featured snippets.
The same outcome is true for Knowledge Panel results — the majority of users (55.3 percent) scan this information but continue searching through the other results. However, 36.8 percent of searchers consider the information contained in the Knowledge Panel sufficient to answer their questions, and this represents a decent amount of search traffic that previously flowed to paid and organic results before the existence of the Knowledge Panel.
As with previous questions, younger users are significantly more likely to read the information in the Knowledge Panel and consider their search complete. Young respondents (13–21) are 102 percent more likely to consider the Knowledge Panel a complete answer to their question than older respondents (50+), who generally continue their search after seeing the Knowledge Panel.
Weather forecasts, things to do, jobs, flights, and other Google SERP features
Google has rolled out many new result types that allow searchers to get the answer to their question directly within the search results. This alarms many search marketers, who worry that these results cannibalize traffic that previously flowed to organic results and have caused an increase in “no click searches.” So, how does the average searcher feel about these enhancements to the SERP?
We asked searchers about two types of results: results that directly answer search queries using a proprietary Google widget (such as weather forecasts or “Things to Do”), as well as results that allow for interaction on Google, but include an organic link back to a corresponding website (such as recipes and flight results).
According to the data, the majority of respondents use these features but continue browsing the other search results. It is interesting to note that one-third of respondents usually ignore result types such as job listings, events, and flights, and instead skip over to the regular blue links. Older searchers (50+) are 63 percent more likely to ignore these results types and continue their search than younger searchers (13–30).
Incorrect information in SERP features
Our next question was whether searchers have found incorrect information in any of the aforementioned result types. Given Google’s increased focus on content quality and E-A-T, we thought it would be interesting to see the general sentiment around the accuracy of these search features.
A combined 58.2 percent of searchers state they have either occasionally or frequently seen incorrect information in rich results on Google. This fact is certainly on Google’s radar: just last month, Google published a whitepaper on how it combats disinformation, and the recent major updates to its algorithm reflect Google’s ongoing effort to promote accurate, trustworthy content in all of its results.
How do users feel about Google?
We wanted to know how users feel about Google in general, especially given all the recent changes to Google’s search results. 68 percent of respondents stated that they feel the quality of Google’s results has improved over time, and the majority of respondents don’t have specific complaints about Google.
Among those respondents who do have issues with Google, the most common complaints involve Google showing too many ads; prioritizing content from large corporations, making it harder for small businesses to compete; and showing too many Google-owned assets within the results.
We also opened up the survey to allow respondents to leave feedback about how they feel about Google and the quality of its results. The vast majority of responses related to user privacy, the unsettling feeling of sharing private information with the search engine, and disliking that search queries are used in retargeting campaigns. Several respondents were concerned about the political and philosophical implications of Google deciding what content should or should not be prominently featured in its results. Some complaints had to do with the limited options to apply filters and perform advanced searches in both standard results, as well as on Google Images.
Searchers are still skeptical of Google, but there’s some cause for concern among younger users
Should businesses and marketers be worried that Google’s increasingly rich results will slowly steal away our precious traffic for good, and increase the number of no-click results? The results from our Google Usage survey indicate that, at least for now, there’s no need to panic: Searchers are still prone to gravitating toward the regular blue links, both organic and paid. They are largely skeptical about taking all of the information included in rich results at face value.
However, there is data to support that younger searchers are more likely to implicitly trust the information provided in rich results, and less likely to visit deeper pages of the search results during their search journeys. This should be an interesting trend for marketers to pay attention to over time — one that raises many philosophical questions about the role that information from Google should play in our lives.
With its recent push for E-A-T compliance, it’s clear that Google is already grappling with the moral responsibility of providing information that can majorly impact the happiness, safety, and well-being of its users. But what happens when important information doesn’t meet the ranking criteria laid out by Google’s algorithm? What happens when society’s understanding of certain topics and ideas changes over time? Does Google’s algorithm create an echo chamber and limit the ability for users to share and discover diverse viewpoints? What happens when the information Google shares is blatantly wrong, or even worse, dangerous?
While it is important that Google maintains the highest quality standards for displaying credible and trustworthy information, freedom of speech and diversity of ideas must also remain of utmost importance, as future generations become increasingly trusting of the information they discover in the search results.
And now, you tell us: how do you feel about Google’s changing landscape?
For the fourth year running, Stone Temple (now a part of Perficient Digital) conducted a study on how much links matter as a ranking factor. We did that using Moz’s Link Explorer and in this year’s study, we looked at the largest data set yet — 27,000 queries.
Our study used quadratic mean calculations on the Spearman correlations across all 27K tested queries. Not sure what that means? You can learn more about the study methodology here.
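For readers unfamiliar with the aggregation step, here’s a minimal sketch of how a quadratic mean (root mean square) combines per-query correlation scores. The per-query values below are made up for illustration; they are not the study’s actual data.

```python
import math

def quadratic_mean(scores):
    # Root mean square: unlike a plain average, it weights
    # larger correlation scores more heavily.
    return math.sqrt(sum(s * s for s in scores) / len(scores))

# Hypothetical per-query Spearman correlations:
per_query = [0.31, 0.22, 0.35, 0.28]
print(round(quadratic_mean(per_query), 3))  # → 0.294
```

One caveat of the quadratic mean is that squaring discards the sign, so it summarizes the magnitude of the correlations rather than their direction.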
The major study components included:
Total number of links to the ranking pages
Moz DA of the links to the ranking pages
Moz PA of the links to the ranking pages
Slicing these calculations into several sub-categories:
Informational vs. commercial queries
Medical vs. Financial vs. Technology vs. All Other queries
We were also able to evaluate just how much the Moz link index had grown for a subset of the queries because we have used the same data on 16K of the 27K queries for three years running (this year’s study looked at 9K more queries, but 16K of the queries were in common). In fact, let’s start with that data:
That’s pretty significant growth! Congrats to Moz on that improvement.
Brief commentary on correlations
Correlation studies attempt to measure whether or not two factors are related to one another in any way. We use correlation studies to help us understand whether or not one factor potentially causes the other. It’s important to understand that correlation does not prove causation; it simply suggests that a relationship may exist.
The example I like to share is that there is a strong correlation between the consumption of ice cream and drowning. That does not mean that one causes the other. In fact, the causal factor here is intuitively obvious — hot weather. People eat more ice cream and people do more swimming when it’s hot outside.
But, in the case of links, we also have the fact that Google tells us that links still matter. If that’s not enough for you, Google still penalizes sites for questionable link-building practices. This is not an area they would invest in unless links matter.
So how do correlation scores work?
A correlation score scale runs from -1 to 1. A score of 1 means a perfect correlation between two items. So if we have two variables (x and y), whenever x increases in value, so does y. A score of -1 means the exact opposite: whenever x increases in value, y decreases in value. A score of 0 means there is no perceivable relationship whatsoever. When x increases in value, y is equally likely to increase or decrease in value.
Search is a complex environment to evaluate. Google claims to use over 200 ranking factors. Therefore, it’s quite unlikely that any one factor will be dominant. High scores are not likely to happen at all and correlation scores of 0.2 or higher already start to suggest (but not prove) the existence of a relationship.
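To make the scale concrete, here’s a toy illustration with invented numbers (real correlation studies use proper statistics libraries and handle tied ranks): Spearman correlation ranks both variables, then measures how well the two rankings agree.

```python
def ranks(values):
    # Rank each value (1 = smallest); ties are not handled, for brevity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    # Classic rank-difference formula: 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

# Hypothetical data: positions 1-5 and the link counts of the pages there.
positions = [1, 2, 3, 4, 5]
links = [900, 400, 350, 120, 80]  # more links at better (lower) positions
print(spearman(positions, links))  # → -1.0, a perfect inverse relationship
```

The raw score is negative only because position 1 is the “smallest” number; studies typically flip the sign so that “more links, better ranking” reads as a positive correlation.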
Core study results
Time to dive in! First, let’s take a look at the global view across all 27K queries:
This correlation score comes in at a solid 0.293. Considering that Google’s algorithm draws on 200+ ranking factors, having a single factor score that high suggests a strong relationship.
Next, let’s take a look at the correlation to Moz DA and Moz PA:
Both DA and PA show strong correlations; in fact, more so than the total number of links to the ranking page.
This is interesting because it does suggest that at some level, the authority of the linking site and the linking page both matter. By the way, in the four years that we’ve conducted this study, this is the first time that the DA and PA scores have been a stronger indicator of ranking potential than the pure link count.
More broadly, from a link-building strategy perspective, this provides support for the notion that getting links from more authoritative sites is how you should focus that strategy.
Finally, let’s take a look at how commercial and informational queries differ:
Now that’s interesting — informational queries show a materially higher level of correlation than commercial ones.
From an interpretative perspective, that does not necessarily mean that they matter less. It may just mean that commercial pages get fewer links, so Google has to depend more heavily on other signals. But should those commercial pages happen to draw links for some reason, the impact of the links may still be as high.
The data still shows a strong correlation between links and rankings. Google’s public statements and its actions (in implementing penalties) also tell the same story. In short, links still matter. But we also see a clear indication that the nature and the quality of those links matter too!
Want more information? You can see the Stone Temple link study here.
Tell us what you think — do links matter as a ranking factor?
Most marketers understand that links to websites count as “votes” on the web. Google — and other search engines — use these votes to rank web pages in search results. The more votes a page accumulates, the better that page’s chances of ranking in search results.
This is the popularity part of Google’s algorithm, described in the original PageRank patent. But Google doesn’t stop at using links for popularity. They’ve invented a number of clever ways to use links to determine relevance and authority — i.e. what is this page about and is it a trusted answer for the user’s search query?
To rank in Google, it’s not simply the number of votes you receive from popular pages, but the relevance and authority of those links as well.
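The voting idea can be sketched in a few lines of power iteration. This is a toy model over an invented three-page web, not Google’s actual implementation:

```python
DAMPING = 0.85  # standard damping factor from the PageRank literature

def pagerank(links, iterations=50):
    # links: dict mapping each page to the list of pages it links to.
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - DAMPING) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                # Each page splits its "vote" evenly among its outbound links.
                new[target] += DAMPING * rank[page] / len(outlinks)
        rank = new
    return rank

web = {
    "home": ["pizza", "about"],
    "about": ["home"],
    "pizza": ["home"],
}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # → home, since both other pages vote for it
```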
The principles Google may use grow complex quickly, but we’ve included a number of simple ways to leverage these strategies for more relevant rankings at the bottom of the post.
1. Anchor text
“Thus, even though the text of the document itself may not match the search terms, if the document is cited by documents whose titles or backlink anchor text match the search terms, the document will be considered a match.”
In a nutshell, if a page links to you using the anchor text “hipster pizza,” there’s a good chance your page is about pizza — and maybe hipsters.
If many pages link to you using variations of “pizza”— i.e. pizza restaurant, pizza delivery, Seattle pizza — then Google can see this as a strong ranking signal.
(In fact, so powerful is this effect, that if you search Google for “hipster pizza” here in Seattle, you’ll see our target for the link above ranking on the first page.)
How to leverage Anchor Text for SEO:
Volumes could be written on this topic. Google’s own SEO Starter Guide recommends a number of anchor text best practices, among them:
Use (and seek) descriptive anchor text that describes what your page is about
Over-optimization can signal manipulation to Google, and many SEOs recommend a strategy of anchor text variety for better rankings.
2. Hub and authority pages
In the early days of Google, not long after Larry Page figured out how to rank pages based on popularity, the Hilltop algorithm worked out how to rank pages on authority. It accomplished this by looking for “expert” pages linking to them.
An expert page is a document that links to many other topically relevant pages. If a page is linked to from several expert pages, then it is considered an authority on that topic and may rank higher.
A similar concept using “hub” and “authority” pages was put forth by Jon Kleinberg, a Cornell professor with grants from Google and other search engines. Kleinberg explains:
While we can’t know the degree to which these concepts are used today, Google acquired the Hilltop algorithm in 2003.
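Kleinberg’s HITS algorithm formalizes this mutual reinforcement: good hubs link to good authorities, and good authorities are linked to by good hubs. A condensed sketch on an invented toy graph (all page names are hypothetical):

```python
def hits(links, iterations=30):
    # links: dict mapping each page to the list of pages it links to.
    pages = set(links) | {t for outs in links.values() for t in outs}
    hub = dict.fromkeys(pages, 1.0)
    auth = dict.fromkeys(pages, 1.0)
    for _ in range(iterations):
        # Authority score: sum of hub scores of the pages linking in.
        auth = {p: sum(hub[q] for q, outs in links.items() if p in outs)
                for p in pages}
        # Hub score: sum of authority scores of the pages linked out to.
        hub = {p: sum(auth[t] for t in links.get(p, [])) for p in pages}
        # Normalize so the scores stay bounded.
        a_total = sum(auth.values()) or 1
        h_total = sum(hub.values()) or 1
        auth = {p: v / a_total for p, v in auth.items()}
        hub = {p: v / h_total for p, v in hub.items()}
    return hub, auth

graph = {
    "resource-page": ["pizza-guide", "pasta-guide", "tacos-guide"],
    "blog-post": ["pizza-guide"],
}
hub, auth = hits(graph)
# "resource-page" emerges as the strongest hub (an "expert" page);
# "pizza-guide" as the strongest authority (linked from the best hubs).
```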
How to leverage Authority Pages for SEO:
A common practice of link builders today is to seek links from “Resource Pages.” These are basically Hub/Expert pages that link out to helpful sites around a topic. Scoring links on these pages can often help you a ton.
3. The Reasonable Surfer
The idea behind Google’s Reasonable Surfer patent is that certain links on a page are more important than others, and are thus assigned increased weight. Examples of more important links include:
Prominent links, higher up in the HTML
Topically relevant links, related to both the source document and the target document.
Conversely, less important links include:
“Terms of Service” and footer links
Links unrelated to the document
Because the important links are more likely to be clicked by a “reasonable surfer,” a topically relevant link can carry more weight than an off-topic one.
“…when a topical cluster associated with the source document is related to a topical cluster associated with the target document, the link has a higher probability of being selected than when the topical cluster associated with the source document is unrelated to the topical cluster associated with the target document.” – United States Patent: 7716225
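The weighting concept can be illustrated with a toy scoring function. The feature names and multipliers below are entirely invented for illustration; the patent does not publish actual weights:

```python
# Hypothetical multipliers: links likely to be clicked pass more value.
FEATURE_WEIGHTS = {
    "prominent_position": 2.0,   # high up in the main content HTML
    "topically_relevant": 2.5,   # related to both source and target docs
    "footer_or_tos": 0.2,        # boilerplate links rarely get clicked
}

def link_weight(features):
    # Multiply together the weights of the features a link exhibits.
    weight = 1.0
    for f in features:
        weight *= FEATURE_WEIGHTS.get(f, 1.0)
    return weight

main_content_link = link_weight(["prominent_position", "topically_relevant"])
footer_link = link_weight(["footer_or_tos"])
print(main_content_link, footer_link)  # → 5.0 0.2
```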
How to leverage Reasonable Surfer for SEO:
The key with leveraging Reasonable Surfer for SEO is simply: work to obtain links that are more likely to get clicked.
This means that you not only benefit from getting links from prominent areas of high-traffic pages, but the more relevant the link is to the topic of the hosting page, the more benefit it may provide.
Neither the page topic nor the anchor text has to be an exact match, but it helps if they are in the same general area. For example, if you were writing about “baseball,” links with relevant anchor text from pages about sports, equipment, athletes, training, exercise, tourism, and more could all help boost rankings more than less relevant links.
4. Topic-sensitive PageRank
Despite rumors to the contrary, PageRank is very much alive and well at Google.
PageRank technology can be used to distribute all kinds of different ranking signals throughout a search index. While the most common examples are popularity and trust, another signal is topical relevance, as laid out in this paper by Taher Haveliwala, who went on to become a Google software engineer.
The original concept works by grouping “seed pages” by topic (for example, the Politics section of the New York Times). Every link out from these pages passes on a small amount of Topic-Sensitive PageRank, which is passed on through the next set of links, and so on.
In the example above, two identical pages target “Football.” Both have the same number of links, but the first one has more relevant Topic-Sensitive PageRank flowing from a linking sports page. Hence, it ranks higher.
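In code, topic sensitivity amounts to restricting the random jump to a topic’s seed pages — a variant often called personalized PageRank. A rough sketch over an invented graph (all page names are hypothetical):

```python
DAMP = 0.85

def topic_pagerank(links, seeds, iterations=50):
    # The random jump lands only on topic seed pages, so score flows
    # outward from pages already known to be about the topic.
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: ((1 - DAMP) / len(seeds) if p in seeds else 0.0)
               for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += DAMP * rank[page] / len(outlinks)
        rank = new
    return rank

web = {
    "nyt-sports": ["football-a", "stadium-news"],  # a sports seed page
    "random-blog": ["football-b"],                 # off-topic source
    "football-a": [], "football-b": [], "stadium-news": [],
}
rank = topic_pagerank(web, seeds={"nyt-sports"})
# football-a inherits sports-flavored score through the seed page;
# football-b, linked only from an off-topic page, ends up with none.
```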
How to leverage topic-sensitive PageRank for SEO:
The concept is simple. When obtaining links, try to get links from pages that are about the same topic you want to rank for. Also, get links from pages that are themselves linked to by authoritative pages on the same topic.
What’s important to understand is that phrase-based indexing allows search engines to score the relevancy of any link by looking for related phrases in both the source and target pages. The more related phrases, the higher the score.
In the example below, the first page with the anchor text link “US President” may carry more weight because the page also contains several other phrases related to “US President” and “John Adams.”
In addition to ranking documents based on the most relevant links, phrase-based indexing allows search engines to consider less relevant links as well, including:
Discounting spam and off-topic links: For example, an injected spam link to a gambling site from a page about cookie recipes will earn a very low outlink score based on relevancy and would carry less weight.
Fighting “Google Bombing”: For those that remember, Google bombing is the art of ranking a page highly for funny or politically-motivated phrases by “bombing” it with anchor text links, often unrelated to the page itself. Phrase-based indexing can stop Google bombing by scoring the links for relevance against the actual text on the page. This way, irrelevant links can be discounted.
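To make the scoring idea concrete, here is a toy sketch of phrase-based link scoring. The phrase lists, page texts, and scoring formula are illustrative assumptions (Google's actual phrase data and scoring are not public); the point is that an on-topic source page shares many related phrases with the target's topic, while an injected spam link shares none:

```python
# Toy phrase-based link scoring: a link's relevance score rises with the
# number of phrases on the source page that are related to the target topic.

RELATED_PHRASES = {
    # Hypothetical related-phrase set an index might associate with a topic.
    "us president": {"john adams", "white house", "electoral college", "congress"},
}

def link_score(source_text, target_topic):
    """Fraction of the topic's related phrases that appear on the source page."""
    text = source_text.lower()
    related = RELATED_PHRASES.get(target_topic, set())
    hits = sum(1 for phrase in related if phrase in text)
    return hits / max(len(related), 1)

on_topic = ("John Adams served before the White House was completed, "
            "and Congress convened in Philadelphia.")
off_topic = "My favorite cookie recipes use brown butter and sea salt."

score_good = link_score(on_topic, "us president")   # several related phrases
score_spam = link_score(off_topic, "us president")  # none — link is discounted
```

Under this scoring, the gambling link injected into a cookie-recipe page would earn a near-zero outlink score, while the “US President” link from a page that also mentions “John Adams” would carry real weight.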
How to leverage phrase-based indexing for SEO:
Beyond anchor text and the general topic/authority of a page, it’s helpful to seek links from pages with related phrases.
This is especially helpful for on-page SEO and internal linking — when you optimize your own pages and link to yourself. Some people use LSI keywords for on-page optimization, though evidence that this helps SEO is disputed.
Solid keyword research typically provides a starting point to identify related keyword phrases. Below are closely related phrases to “best SEO tools” found using Keyword Explorer.
6. Local inter-connectivity
Local inter-connectivity refers to a reranking concept: search results are reordered based on how often each page is linked to by the other pages in the result set.
To put it simply, when a page is linked to from a number of high-ranking results, it is likely more relevant than a page with fewer links from the same set of results.
This also provides a strong hint as to the types of links you should be seeking: pages that already rank highly for your target term.
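A minimal sketch of that reranking step might look like the following. The result set and link data are made up, and real systems would weight far more signals; this only shows the core mechanic of counting in-set links:

```python
# Local inter-connectivity sketch: within an initial result set, pages that
# are linked to by more of the *other* results in that set move up.

def rerank_by_interconnectivity(initial_results, outlinks):
    """initial_results: ordered list of URLs; outlinks: {url: set of linked URLs}."""
    result_set = set(initial_results)

    def inbound_from_set(url):
        # Count how many other results in the set link to this URL.
        return sum(1 for other in result_set
                   if other != url and url in outlinks.get(other, set()))

    # sorted() is stable, so ties keep their original (initial-ranking) order.
    return sorted(initial_results, key=inbound_from_set, reverse=True)

results = ["a.com", "b.com", "c.com", "d.com"]
links = {
    "a.com": {"c.com"},
    "b.com": {"c.com"},
    "d.com": {"c.com", "b.com"},
}
reranked = rerank_by_interconnectivity(results, links)  # c.com rises to the top
```

Because `c.com` is linked to by three of the other ranking pages, it jumps from third to first — which is exactly why links from pages that already rank for your term are so valuable.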
How to leverage local inter-connectivity for SEO:
Quite simply, one of the easiest ways to rank is to obtain topically relevant links from sites that already rank for the term you are targeting.
Oftentimes, links from page 1 results can be quite difficult to obtain, so it’s helpful to look for links that:
Rank for variations of your target terms
Are further down in Google’s results pages
Rank well for different, but still topically-related terms
7. The golden question
If the above concepts seem complex, the good news is that you don’t actually have to understand them in order to build links to your site.
To understand if a link is topically relevant to your site, simply ask yourself the golden question of link building: Will this link bring engaged, highly qualified visitors to my website?
The answer to the golden question is exactly what Google’s engineers are trying to determine when evaluating links, so you can arrive at a good result without understanding the actual algorithms.
How to leverage the golden question for SEO:
Above all else, try to build links that bring engaged, high-value visitors to your site.
If you don’t care about the visitors a link may bring, why should Google care about the link?
SEO tips for topically relevant links
Consider this advice when thinking about links for SEO:
DO use good, descriptive anchor text for your links. This applies to internal links, outlinks to other sites, and links you seek from non-biased external sites.
DO seek relationships from authoritative, topically relevant sites. These include sites that rank well for your target keyword and “expert” pages that link to many authority sites. (For those interested, Majestic has done some interesting work around Topical Trust Flow.)
DO seek links from relevant pages. This includes examining the title, body, related phrases, and intent of the page to ensure its relevance to your target topic.
DO seek links that people are likely to click. The ideal link is often both topically relevant and placed in a prominent position.
Finally, DO try to earn and attract links to your site with high quality, topically relevant content.
What are your best tips around topically relevant links? Let us know in the comments below!
Note: A version of this post was published previously, and has since been substantially updated. Big thanks to Bill Slawski and his blog SEO by the Sea, which acted as a starting point of research for many of these concepts.
Your agency recommends all kinds of useful tactics to help improve the local SEO for your local business clients, but how many of those techniques are leveraging Google Business Profile (GBP) to attract as many walk-ins as possible?
Today, I’m sharing five GBP tweaks worthy of implementation to help turn digital traffic into foot traffic. I’ve ordered them from easiest to hardest, but as you’ll see, even the more difficult ones aren’t actually very daunting — all the more reason to try them out!
1) Answer Google Q&A quickly (they might be leads)
It looks like Coast Nissan has a customer who is ready to walk through the door if they receive an answer. But as you can see, the question has gone unanswered. Note, too, that four people have thumbed the question up, which signifies a shared interest in a potential answer, but it’s still not making it onto the radar of this particular dealership.
Nearly all verticals could have overlooked leads sitting in their GBPs — from questions about dietary options at a restaurant, to whether a retailer stocks a product, to queries about ADA compliance or available parking. Every ask represents a possible lead, and in a competitive retail landscape, who can afford to ignore such an opportunity?
The easiest way for Google My Business (GMB) listing owners and managers to get notified of new questions is via the Google Maps App, as notifications are not yet part of the main GMB dashboard. This will help you catch questions as they arise. The faster your client responds to incoming queries, the better their chances of winning the foot traffic.
2) Post about your proximity to nearby major attractions
Difficulty level: Easy
Imagine someone has just spent the morning at a museum, a landmark, park, or theatre. After exploring, perhaps they want to go to lunch, go apparel shopping, find a gas station, or a bookstore near them. A well-positioned Google Post, like the one below, can guide them right to your client’s door:
This could become an especially strong draw for foot traffic if Google expands its experiment of showing Posts’ snippets not just in the Business Profile and Local Finder, but within local packs:
3) Showcase your in-store inventory with SWIS and Pointy
With a little help from SWIS and Pointy, your retail clients’ GBPs can become the storefront window that beckons in highly-converting foot traffic. Your client’s “See What’s In Store inventory” appears within the Business Profile, letting customers know the business has the exact merchandise they’re looking for:
I’ll reiterate my prediction that SWIS is the “next big thing” in local. When I last spoke with Mark, one percent of all US retailers had already adopted his product. Encourage your retail clients to sign up and give them an amazing competitive edge in driving foot traffic!
4) Make your profile pic a selfie hotspot
Difficulty level: Medium (feasible for many storefronts)
When a client has physical premises (and community ordinances permit it), an exterior mural can turn through traffic into foot traffic — it also helps convert Instagram selfie-takers into customers. As I mentioned in a recent blog post, a modest investment in this strategy could appeal to the 43–58 percent of survey respondents who are swayed to shop in locations that are visually appealing.
If a large outdoor mural isn’t possible, there’s plenty of inspiration out there for smaller indoor murals.
Once the client has made the investment in providing a cultural experience for the community, they can try experimenting with getting the artwork placed as the cover photo on their GBP — anyone looking at a set of competitors in a given area will see this appealing, extra reason to choose their business over others.
Mark my words, local search marketers: We are on the verge of seeing Americans reject the constricted label of “consumer” in a quest for a more holistic view of themselves as whole persons. Local businesses that integrate art, culture, and community life into their business models will be well-placed to answer what, in my view, is a growing desire for authentic human experiences. As a local search marketer, myself, this is a topic I plan to explore further this year.
5) Put time on your side
Difficulty level: Medium (feasible for willing clients)
Here’s a pet peeve of mine: businesses that serve working people but are only open 9–5. How can your client’s foot traffic achieve optimum levels if their doors are only open when everybody is at work?
So, here’s the task: Do a quick audit of the hours posted on the GBPs of your client’s direct competitors. For example, I found three craft shops in one small city with these hours:
Guess which competitor is getting all of the business after 6 PM every day of the week, when most people are off work and able to shop?
Now, it may well be that some of your smaller clients are already working as many hours as they can, but have they explored whether their hours are actually ideal for their customers’ needs and whether any time slots aren’t being filled in the community by their competitors? What if, instead of operating under the traditional 9–5, your client switched to 11–7, since no other competitor in town is open after 5 PM? It’s the same number of hours and your client would benefit from getting all the foot traffic of the 9–5-ers.
Alternatively, what if, instead of closing on Saturdays, the business closed on Mondays — perhaps the slowest of its weekdays? Being open on the weekend could mean that the average worker can now access the business and become a customer.
It will take some openness to change, but if a business agrees to implement new hours, don’t forget to update the GMB hours and push them out to the major citation platforms via a service like Moz Local.
Your turn to add your best GMB moves
I hope you’ll take some of these simple GBP tips to an upcoming client meeting. And if they decide to forge ahead with your tips, be sure to monitor the outcomes! How great would it be if a simple audit of hours turned into a foot traffic win for your client?
In the meantime, if you have any favorite techniques, hacks, or easy GMB wins to share with our community, I’d love to read your comments!
But while doing the research for the whitepaper, we found ourselves pondering another question: is there a similar relationship between search intent and the kind of page content that Google sources results from?
We know from our study that as searchers head down the intent funnel, the SERP feature landscape shifts accordingly. For example, Google serves up progressively more shopping boxes, which help close the deal, as a searcher moves from awareness to purchase.
So, as consumers hunt for that perfect product, does the content that Google serves up shift from, say, category pages to product pages? To get to the bottom of this mystery, we mounted a three-pronged attack.
Prong 1: Uncover the top SERP players
Since Google delivers the content they deem most helpful, figuring out who their SERP favs are ensured that we were analyzing the best performing content.
To do this, we used the same 6,422 retail keywords from our original research, segmented them by search intent, and then gathered the top 12 results (give or take a few) that appeared on each SERP.
This gave us:
6,338 informational intent results,
35,210 commercial intent results,
24,633 transactional intent results,
and 10,573 local intent results
…to analyze the stink out of. (That’s 76,754 results all told.)
From there, we dug into root domains (e.g. eBay.com and Amazon.com) to uncover the four most frequently occurring businesses for each search intent category.
We made an executive decision to exclude Google, who claimed top billing across the board, from our analysis for two reasons. One, because we attribute shopping boxes and images to them, which show up a lot for retail keywords, and two, because they aren’t exactly a competitor you can learn from.
Prong 2: Identify content page markers
After compiling the winningest sites to snoop on, it was time to see what kind of content they were offering up to the Google gods — which should’ve been easy, right? Wrong. Unfortunately, examining URL structures for frequently occurring page markers is a somewhat painful process.
Some sites, like Homedepot.com (who we wish had made the list for this very reason), have clean, easy to decipher URL structures: all product and category pages are identified with a “/p/” and “/b/” that always show up in the same spot in the URL.
And then you have the Amazon.coms of the world that use a mix of seemingly random markers, like “/s?rh=” and “/dp” that appear all over the place.
In the end — thanks to Stack Overflow, SequelPro, and a lot of patience — we were able to classify our URLs, bringing us to our third and final prong.
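A rough sketch of the marker-based approach might look like the following. The marker patterns here come from the examples above (Home Depot's “/p/” and “/b/”; Amazon's “/dp” and “/s?rh=”), but real retail URL schemes vary far more, so treat the pattern table as an illustrative assumption:

```python
# Classify result URLs into page types by looking for known URL markers.
import re

URL_MARKERS = [
    # (page_type, pattern) — checked in order; first match wins.
    ("product",  re.compile(r"/p/|/dp")),      # e.g. homedepot.com/p/…, amazon.com/…/dp/…
    ("category", re.compile(r"/b/|/s\?rh=")),  # e.g. homedepot.com/b/…, amazon.com/s?rh=…
]

def classify_url(url):
    for page_type, pattern in URL_MARKERS:
        if pattern.search(url):
            return page_type
    return "unknown"  # no known marker — needs manual review

print(classify_url("https://www.homedepot.com/p/DEWALT-20V-Drill/12345"))
print(classify_url("https://www.amazon.com/s?rh=n%3A172282"))
```

In practice, most of the work is building that marker table per site (and handling markers that "appear all over the place"), which is why this step took Stack Overflow, SequelPro, and a lot of patience.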
Prong 3: Mash everything together and analyze
Once we got all of our ducks in a row, it was time to get our super-sleuth on.
Informational intent (6,338 results)
This is the very top of the intent funnel. The searcher has identified a need and is looking for information on the best solution — is a [laptop] or [desktop computer] the right choice for their home office; what’s the difference between a [blender] and a [food processor] when making smoothies?
Thanks to the retail nature of our keywords, three product powerhouses — Amazon, Walmart, and Best Buy — rose to the top, along with Wikipedia, whose sole purpose in life is to provide the kind of information that searchers usually want to see at this stage of intent.
Although Wikipedia doesn’t have page markers, we chose to categorize its search results as product pages, because each Wikipedia entry typically focuses on a single person, place, or thing — and because Wikipedia wasn’t important to our analysis: while it’s a search competitor, it’s not a product competitor. (We still love you though, Wikipedia!)
Diving into the type of content that Amazon, Walmart, and Best Buy served up (the stuff we were really after), category pages surfaced as the preferred choice.
Given the wide net that a searcher is casting with their informational query, it made sense to see more category pages at this stage — they help searchers narrow down their hunt by providing a wide range of options to choose from.
What did have us raising our eyebrows a little was the number of product pages that appeared. Product pages showcase one specific item and are typically optimized for conversion, so we expected to see these in large quantities further down the funnel — when a searcher has a better idea of what they want.
Commercial intent (35,210 results)
When it comes to a commercial intent query, the searcher is starting to dig deeper into the product they’re after — they’re doing comparative research, reading reviews, and looking into specific functionality.
Here, Amazon continued to rule the URL roost, Wikipedia dropped off, eBay judo-chopped Walmart out of second place, and Best Buy stayed put at the bottom.
In terms of the content that these sites offered up, we saw the addition of review pages from Amazon, and buyer guides from Amazon, eBay, and Best Buy. We figured this would be the case, seeing as how we used modifiers like “best,” “compare,” and “reviews” to apply commercial intent to our keywords.
But while these two types of content fit perfectly with the intent behind a commercial query, especially reviews, oddly enough they still paled in comparison to the number of category and product pages. Weird, right?
Transactional intent (24,633 results)
At the transactional intent stage of the game, the searcher has narrowed their hunt down to a few best options and is ready to throw their hard-earned shekels at the winner.
As far as the most frequently appearing sites go, there was a little do-si-do between eBay and Walmart, but overall, these top four sites did an excellent job following searchers down the intent funnel.
In terms of the kind of pages appearing, once again, we saw a huge number of category pages. Product pages made a respectable showing, but given the readiness to buy at the bottom of the funnel, we expected to see the scales tip in their favor.
Alack and alas, no dice.
Local intent (10,573 results)
Technically, we categorize local intent as a subsection of transactional intent. It’s likely that the only reason a searcher would be considering an in-store visit is if the product is something they want to take home with them. But because local searches typically surface different results from our other transactional queries, we look at them separately.
Here, Amazon’s reign was finally usurped by its biggest competitor, Walmart, and Yelp made a stunning first appearance to knock Best Buy down and eBay off the list.
Given that local intent searchers are on the hunt for a brick-and-mortar store, it made sense that Walmart would win out over Amazon. That said, it’s an incredible feat that Amazon doesn’t let a lack of physical location derail its retail dominance, especially when local is the name of the game (a location is literally part of these queries).
As for Yelp, they’re a trusted source for people trying to find a business IRL — so it wasn’t surprising to see them jump on our local intent SERPs. Like Wikipedia, Yelp doesn’t have product or category pages per se, but they do have markers that indicate pages with multiple business listings (we classified these as category pages), as well as markers that indicate single business listings (our product pages). We also found markers for reviews, which were a perfect fit for our analysis.
Finally, when it came to content, category and product pages (again!) showed up the most on these SERPs. So what’s going on here?
The (unexpected) takeaway
When we set out to examine the type of content that appears for the different search intents, we expected to see far more variation from one level to the next. We thought we’d find lots of category pages for informational intent, more reviews and buyer guides for commercial intent, and mostly product pages for transactional intent.
Instead, we found that category pages are Google’s top choice for retail keywords throughout all levels of search intent. Regardless of how specific a query is, category pages seem to be the first point of access when hunting for retail items. So why might this be?
Looking to our winning sites for answers, it appears that intent-blended pages are the bomb dot com for Amazon, Walmart, eBay, and Best Buy.
Their category pages contain: an image of each type of product and short, descriptive copy to help searchers narrow down their options (informational intent); a review or rating system for quick comparisons (commercial intent); and pricing information and a clear way to make a purchase (transactional intent).
Following any of the items to their designated product page — the second most returned type of content — you’ll find a similar intent-blended approach. In fact, by having alternative suggestions, like “people also bought” and “similar products,” appear on them, they almost resemble category pages.
This product page approach is different from what we often see with smaller boutique-style shops. Take Stutterheim for example (they sell raincoats perfect for our Vancouver weather). Their product pages have a single focus: buy this one thing.
Since smaller shops don’t have a never-ending supply of goods, their product pages have to push harder for the transaction — no distractions allowed. Large retailers like Amazon? They have enough stuff to keep searchers around until they stumble across something they like.
To find out what type of content you should serve at each step of the intent funnel, segment your keywords by search intent and track which of your pages rank, as well as how well they convert. This will help reveal what your searchers find most useful.
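As a starting point for that segmentation, you could bucket keywords by intent modifiers, similar to how this study applied modifiers like “best,” “compare,” and “reviews” for commercial intent. The modifier lists below are illustrative assumptions, not the study's exact lists:

```python
# Bucket keywords into intent segments using modifier words.

INTENT_MODIFIERS = {
    "commercial":    {"best", "compare", "comparison", "review", "reviews", "top"},
    "transactional": {"buy", "price", "cheap", "deal", "coupon", "order"},
    "local":         {"near me", "nearby", "store", "hours"},
}

def classify_intent(keyword):
    kw = keyword.lower()
    words = set(kw.split())
    for intent, modifiers in INTENT_MODIFIERS.items():
        for m in modifiers:
            # Multi-word modifiers ("near me") use a substring check;
            # single words must match a whole token to avoid false hits.
            if (" " in m and m in kw) or m in words:
                return intent
    return "informational"  # no modifier — treat as top-of-funnel

print(classify_intent("best blender reviews"))       # commercial
print(classify_intent("buy food processor"))         # transactional
print(classify_intent("craft store near me"))        # local
print(classify_intent("blender vs food processor"))  # informational
```

Once keywords are segmented, pair each bucket with the ranking pages and conversion rates from your analytics to see which page types actually perform at each stage.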
Late last week (Feb 28 – Mar 1), we saw a sizable two-day spike in Google rankings flux, as measured by MozCast. Temperatures on Friday reached 108°F. The original temperature on Thursday was 105°F, but that was corrected down to 99°F (more on that later).
Digging in on Friday (March 1st), we saw a number of metrics shift, but most notable was a spike in page-one Google SERPs with more than 10 organic results. Across the 10,000 keywords in MozCast, here’s what we observed at the high end:
Counting “organic” results in 2019 is challenging — some elements, like expanded site-links (in the #1 position), Top Stories, and image results can occupy an organic position. In-depth Articles are particularly challenging (more on that in a moment), and the resulting math usually leaves us with page-one SERPs with counts from 4 to 12. Friday’s numbers were completely beyond anything we’ve seen historically, though, with organic counts up to 19 results.
Dissecting the 19-result SERP
Across 10K keywords, we saw 9 SERPs with 19 results. Below is one of the most straightforward (in terms of counting). There was a Featured Snippet in the #0 position, followed by 19 results that appear organic. This is a direct screenshot from a result for “pumpkin pie recipe” on Google.com/US:
Pardon the long scroll, but I wanted you to get the full effect. There’s no clear marker here to suggest that part of this SERP is a non-organic feature or in some way different. You’ll notice, though, that we transition from more traditional recipe results (with thumbnails) to what appear to be a mix of magazine and newspaper articles. We’ve seen something like this before …
Diving into the depths of in-depth
You may not think much about In-depth Articles these days. That’s in large part because they’re almost completely hidden within regular, organic results. We know they still exist, though, because of deep source-code markers and a mismatch in page-one counts. Here, for example, are the last 6 results from today (March 4th) on a search for “sneakers”:
Nestled in the more traditional, e-commerce results at the end of page one (like Macy’s), you can see articles from FiveThirtyEight, Wired, and The Verge. It’s hard to tell from the layout, but this is a 3-pack of In-depth Articles, which takes the place of a single organic position. So, this SERP appears to have 12 page-one results. Digging into the results on March 1st, we saw a similar pattern, but those 3-packs had expanded to as many as 10 articles.
We retooled the parser to more flexibly detect In-depth Articles (allowing for packs with more than 3 results), and here’s what we saw for prevalence of In-depth Articles over the past two weeks:
Just under 23% of MozCast SERPs on the morning of March 1st had something similar to In-depth Articles, an almost 4X increase from the day before. This number returned to normal (even slightly lower) the next day. It’s possible that our new definition is too broad, and these aren’t really traditional “In-depth” packs, but then we would expect the number to stay elevated. We also saw a large spike in SERP “real-estate” shares for major publications, like the New York Times, which typically dominate In-depth Articles. Something definitely happened around March 1st.
By the new method (removing these results from organic consideration), the temperature for 2/28 dropped from 105°F to 99°F, as some of the unusual results were treated as In-depth Articles and removed from the weather report.
Note that the MozCast temperatures are back-dated, since they represent the change over a 24-hour period. So, the prevalence of In-depth articles on the morning of March 1st is called “3/1” in the graph, but the day-over-day temperature recorded that morning is labeled “2/28” in the graph at the beginning of this post.
Sorting out where to go from here
Is this a sign of things to come? It’s really tough to say. On March 1st, I reached out to Twitter to see if people could replicate the 19-result SERPs and many people were able to, both on desktop and mobile:
This did not appear to be a normal test (which we see roll out to something like 1% or less of searchers, typically). It’s possible this was a glitch on Google’s end, but Google doesn’t typically publicize temporary glitches, so it’s hard to tell.
It appears that the 108°F was, in part, a reversal of these strange results. On the other hand, it’s odd that the reversal was larger than the original rankings flux. At the same time, we saw some other signals in play, such as a drop in image results on page one (about 10.5% day-over-day, which did not recover the next day). It’s possible that an algorithm update rolled out, but there was a glitch in that update.
If you’re a traditional publisher or someone who generally benefits from In-depth Articles, I’d recommend keeping your eyes open. This could be a sign of future intent by Google, or it could simply be a mistake. For the rest of us, we’ll have to wait and see. Fortunately, these results appeared mostly at the end of page one, so top rankings were less impacted, but a 19-result page one would certainly shake up our assumptions about organic positioning and CTR.
Google My Business (GMB) is one of the most powerful ways to improve a business’ local search engine optimization and online visibility. If you’re a local business, claiming your Google My Business profile is one of the first steps you should take to increase your company’s online presence.
As long as your local business meets Google’s guidelines, your Google My Business profile can help give your company FREE exposure on Google’s search engine. Not only can potential customers quickly see your business’ name, address and phone number, but they can also see photos of your business, read online reviews, find a description about your company, complete a transaction (like book an appointment) and see other information that grabs a searcher’s attention — all without them even visiting your website. That’s pretty powerful stuff!
Google My Business helps with local rankings
Not only is your GMB Profile easily visible to potential customers when they search on Google, but Google My Business is also a key Google local ranking factor. In fact, according to local ranking factor industry research, Google My Business “signals” is the most important ranking factor for local pack rankings. Google My Business signals had a significant increase in ranking importance between 2017 and 2018 — rising from 19% to 25%.
Claiming your Google My Business profile is your first step to local optimization — but many people mistakenly think that claiming it is enough. Optimizing your profile, and frequently logging in to your Google My Business dashboard to make sure no unwanted updates have been made, is vital to improving your rankings and ensuring the integrity of your business profile’s accuracy.
Google My Business features that make your profile ROCK!
Google offers a variety of ways to optimize and enhance your Google My Business profile. You can add photos, videos, business hours, a description of your company, frequently asked questions and answers, communicate with customers via messages, allow customers to book appointments, respond to online reviews and more.
One of the most powerful ways to grab a searcher’s attention is by creating Google My Business Posts. GMB Posts are almost like mini-ads for your company, products, or services.
Google offers a variety of posts you can create to promote your business:
Posts also allow you to include a call to action (CTA) so you can better control what the visitor does after they view your post — creating the ultimate marketing experience. Current CTAs are:
Posts use a combination of images, text and a CTA to creatively show your message to potential customers. A Post shows in your GMB profile when someone searches for your business’ name on Google or views your business’ Google My Business profile on Google Maps.
Once you create a Post, you can even share it on your social media channels to get extra exposure.
Despite the name, Google My Business Posts are not actual social media posts. Typically the first 100 characters of the post are what shows up on screen (the rest is cut off and must be clicked on to be seen), so make sure the most important words are at the beginning of your post. Don’t use hashtags — they’re meaningless. It’s best if you can create new posts every seven days or so.
Google My Business Posts are a great way to show off your business in a unique way at the exact time when a searcher is looking at your business online.
But there’s a long-standing question: Are businesses actually creating GMB Posts to get their message across to potential customers? Let’s find out…
The big question: Are businesses actively using Google My Business Posts?
There has been a lot of discussion in the SEO industry about Google My Business Posts and their value: Do they help with SEO rankings? How effective are they? Do posts garner engagement? Does where the Posts appear on your GMB profile matter? How often should you post? Should you even create Google My Business Posts at all? Lots of questions, right?
As industry experts look at all of these angles, what do average, everyday business owners actually do when it comes to GMB Posts? Are real businesses creating posts? I set out to find the answer to this question using real data. Here are the details.
Google My Business Post case study: Just the facts
When I set out to discover if businesses were actively using GMB Posts for their companies’ Google My Business profiles, I first wanted to make sure I looked at data in competitive industries and markets. So I looked at a total of 2,000 Google My Business profiles that comprised the top 20 results in the Local Finder. I searched for highly competitive keyword phrases in the top ten cities (based on population density, according to Wikipedia).
For this case study, I also chose to look at service type businesses.
Here are the results.
Cities: New York, Los Angeles, Chicago, Philadelphia, Dallas, San Jose, San Francisco, Washington DC, Houston, and Boston.
Keywords: real estate agent, mortgage, travel agency, insurance or insurance agents, dentist, plastic surgeon, personal injury lawyer, plumber, veterinarian or vet, and locksmith.
Surprise! Out of the industries researched, Personal Injury Lawyers and Locksmiths posted the most often.
For the case study, I looked at the following:
How many businesses had an active Google My Business Post (i.e. have posted in the last seven days)
How many had previously made at least one post
How many have never created a post
Do businesses create Google My Business Posts?
Based on the businesses, cities, and keywords researched, I discovered that more than half of the businesses are actively creating Posts or have created Google My Business Posts in the past.
17.5% of businesses had an active post in the last 7 days
42.1% of businesses had previously made at least one post
40.4% have never created a post
Highlight: A total of 59.60% of businesses have posted a Google My Business Post on their Google My Business profile.
NOTE: If you want to look at the raw numbers, you can check out the research document that outlines all the raw data. (Credit for the research spreadsheet template I used, and the inspiration to do this case study, goes to SEO expert Phil Rozek.)
Do searchers engage with Google My Business Posts?
If a business takes the time to create Google My Business Posts, do searchers and potential customers actually take the time to look at those posts? And most importantly, do they take action and engage with them?
This chart represents nine random clients, their total post views over a 28-day period, and the corresponding total direct/branded impressions on their Google My Business profiles. When we compare the total number of direct/branded views with the number of views the posts received, post views are consistently higher. This means that a single user is more than likely viewing multiple posts.
This means that if you take the time to create a GMB Post and your marketing message is meaningful, you have a high chance of converting a potential searcher into a customer — or at least someone who is going to take the time to look at your marketing message. (How awesome is that?)
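The comparison described above boils down to a per-client ratio: 28-day post views divided by direct/branded impressions, where a ratio above 1 suggests each searcher is viewing more than one post. Here's a hedged sketch with entirely made-up client names and numbers (not the case-study data):

```python
# Hypothetical per-client comparison of post views vs. branded impressions.
# All names and figures are invented for illustration only.
clients = [
    {"name": "client_a", "post_views": 1200, "branded_impressions": 1000},
    {"name": "client_b", "post_views": 950,  "branded_impressions": 1100},
    {"name": "client_c", "post_views": 1800, "branded_impressions": 1500},
]

ratios = {}
for c in clients:
    # Views per branded impression; > 1.0 implies multiple posts seen per searcher.
    ratios[c["name"]] = c["post_views"] / c["branded_impressions"]
    print(f'{c["name"]}: {ratios[c["name"]]:.2f} post views per branded impression')
```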
Do searchers click on Google My Business Posts?
So your GMB Posts show up in your Knowledge Panel when someone searches for your business on Google and Google Maps, but do searchers actually click on your post to read more?
When we compared post views to total direct/branded search views across industries, on average posts were clicked on almost 100% of the time!
Google My Business insights
When you log in to your Google My Business dashboard, you can see firsthand how well your Posts are doing. Below is a side-by-side image of a business's post views and its direct search impressions. By checking your GMB insights, you can find out how well your Google My Business Posts are performing for your business!
GMB Posts are worth it
After looking at 2,000 GMB profiles, I discovered a lot of things. One thing is for sure: it's hard to tell on a week-by-week basis how many companies are using GMB Posts, because posts "go dark" every seven days (unless the Post is an event post with a start and end date).
Also, Google recently moved Posts from the top of the Google My Business profile towards the bottom, so they don’t stand out as much as they did just a few months ago. This may mean that there’s less incentive for businesses to create posts.
However, what this case study does show is that businesses in a competitive location and industry should use Google My Business optimization strategies and features like Posts if they want to get an edge on their competition.