As members of the Moz onboarding team — which gives one-on-one walkthroughs of Moz products to over 500 customers a month — we have our finger on the pulse of what people are asking for when it comes to SEO. We’re here to help you uncover the relevant Moz Pro features for your business.
We know that somewhere along the journey of improving your website and drumming up more traffic (and hopefully conversions), you’ll want to track rankings for your target keywords. Perhaps you started by noticing a traffic drop on your website. Or maybe you’re actively adapting your business in response to new challenges as a result of the COVID-19 pandemic. You’ll ultimately want to know how your page rankings were affected, and start to explore what you can do next.
In this series of Daily SEO Fix videos, the Moz onboarding team takes you through workflows using the Moz Pro tools. We help you coast through your rankings analysis to gain some actionable insights, from tracking your performance against your competitors to making impactful improvements to your pages.
Don’t have a community account or free trial yet? Sign up first, then book your walkthrough to chat with our onboarding team.
One constant in SEO is that ranking positions are always changing. Some keywords tend to move around more than others, and they can be tricky to spot. Luckily, Moz Pro has a simple way to focus on these keywords.
In this Daily Fix, Maddie shows you how you can sort out your keywords by ranking gains and losses, so that you can glean some insight into how to make the relevant improvements.
View rankings over time and vs. competitors
They say you can’t manage what you don’t measure. This is also true for SEO.
By tracking your keywords, you can measure the impact of your SEO efforts and identify strengths, weaknesses, and opportunities to optimize your SEO.
Moz Pro allows you to track your ranking performance over time. You can quickly see exactly what page on your site is ranking in the highest position for a particular keyword, as well as other pages that may be ranking for the same keyword. This helps you easily flag potential keyword cannibalization on your site.
In this Daily Fix, Jo on the learning team shows you exactly how this works.
There aren’t many things more confusing than seeing pages rank for keywords that have absolutely nothing to do with your business. You’re always signaling something to the search engines — whether you intend to or not. Optimizing your on-page SEO ensures you control that signal.
On-page SEO is the practice of optimizing individual web pages for specific keyword(s) in order to rank higher and earn more relevant traffic in search engines.
In this Daily Fix, I show you how to use the page optimization tool to improve your on-page SEO.
Link building is one of the aspects of SEO that can’t be done in isolation. In order to know how much effort you should dedicate to link building, you first need to look at your competitive landscape.
Moz Pro’s Link Explorer allows you to compare the link profiles of up to five websites. In a snapshot, you get insight into important metrics like Domain Authority, Spam Score, and external and followed links. You can easily use the graphs to spot trends in the types of links your competitors are getting, and even click through to see the individual links. In this video, Alicia shows you how.
Technical SEO is table stakes, and arguably the most important aspect of your SEO work.
Even if you use the right keywords, create the most optimized pages, and have every authoritative site in the world linking to you, if the crawlers aren’t able to index your pages correctly or you’re not following technical SEO best practices, your pages won’t rank as well as they deserve. Moz Pro’s Site Crawl tool helps you ensure that your technical SEO is on point.
In this Daily Fix, Emilie shows you some tips you can use to improve your rankings with Site Crawl.
Negative SEO can hurt your website and your work in search, even when your rankings are unaffected by it. In this week’s Whiteboard Friday, search expert Russ Jones dives into what negative SEO is, what it can affect beyond rankings, and tips on how to fight it.
All right, folks. Russ Jones here and I am so excited just to have the opportunity to do any kind of presentation with the title “Defense Against the Dark Arts.” I’m not going to pretend like I’m a huge Harry Potter fan, but anyway, this is just going to be fun.
But what I want to talk about today is actually pretty bad. It’s the reality that negative SEO, even if it is completely ineffective at doing its primary goal, which is to knock your website out of the rankings, will still play havoc on your website and the likelihood that you or your customers will be able to make correct decisions in the future and improve your rankings.
Today I’m going to talk about why negative SEO still matters even if your rankings are unaffected, and then I’m going to talk about a couple of techniques that you can use that will help abate some of the negative SEO techniques and also potentially make it so that whoever is attacking you gets hurt a little bit in the process, maybe. Let’s talk a little bit about negative SEO.
What is negative SEO?
The most common form of negative SEO is someone who would go out and purchase tens of thousands of spammy links or hundreds of thousands even, using all sorts of different software, and point them to your site with the hope of what we used to call “Google bowling,” which is to knock you out of the search results the same way you would knock down a pin with a bowling ball.
The hope is that it’s sort of like a false flag campaign, that Google thinks that you went out and got all of those spammy links to try to improve your rankings, and now Google has caught you and so you’re penalized. But in reality, it was someone else who acquired those links. Now to their credit, Google actually has done a pretty good job of ignoring those types of links.
It’s been my experience that, in most cases, negative SEO campaigns don’t really affect rankings the way they’re intended to, and I give a lot of caveats there because I’ve certainly seen them be effective. But in the majority of cases, all of those spammy links are just ignored by Google. But that’s not it. That’s not the complete story.
Problem #1: Corrupt data
You see, the first problem is that if you get 100,000 links pointing to your site, what’s really going on in the background is that there’s this corruption of data that’s important to making decisions about search results.
Pushes you over data limits in GSC
For example, if you get 100,000 links pointing to your site, it is going to push you over the limit of the number of links that Google Search Console will give back to you in the various reports about links.
Pushes out the good links
This means, in the second case, that there are probably links you should know about or care about that don’t show up in the report, simply because Google cuts off at 100,000 total links in the export.
Well, that’s a big deal, because if you’re trying to make decisions about how to improve your rankings and you can’t get to the link data you need because it’s been replaced with hundreds of thousands of spammy links, then you’re not going to be able to make the right decision.
Increased cost to see all your data
The other big issue here is cost, because there are ways around the limit. You can get the data for more than 100,000 links pointing to your site; you’re just going to have to pay for it. You could come to Moz and use our Link Explorer tool, for example. But you’ll have to increase the amount of money that you’re spending in order to get access to the accounts that will actually deliver all of that data.
The one big issue sitting behind all of this is that even though we know Google is ignoring most of these links, they don’t label that for us in any kind of useful fashion. Even after we can get access to all of that link data, all of those hundreds of thousands of spammy links, we still can’t be certain which ones matter and which ones don’t.
Problem #2: Copied content
That’s not the only type of negative SEO that there is out there. It’s the most common by far, but there are other types. Another common type is to take the content that you have and distribute it across the web in the way that article syndication used to work. So if you’re fairly new to SEO, one of the old methodologies of improving rankings was to write an article on your site, but then syndicate that article to a number of article websites and these sites would then post your article and that article would link back to you.
Now the reason why these sites would do this is because they would hope that, in some cases, they would outrank your website and in doing so they would get some traffic and maybe earn some AdSense money. But for the most part, that kind of industry has died down because it hasn’t been effective in quite some time. But once again, that’s not the whole picture.
If all of your content is being distributed to all of these other sites, even if it doesn’t affect your rankings, it still means there’s the possibility that somebody is getting access to your quality content without any kind of attribution whatsoever.
If they’ve stripped out all of the links and stripped out all of the names and all of the bylines, then your hard earned work is actually getting taken advantage of, even if Google isn’t really the arbiter anymore of whether or not traffic gets to that article.
Internal links become syndicated links
Then on the flip side of it, if they don’t remove the attribution, all the various internal links that you had in that article in the first place that point to other pages on your site, those now become syndicated links, which are part of the link schemes that Google has historically gone after.
In the same sort of situation, it’s not really just about the intent behind the type of negative SEO campaign. It’s the impact that it has on your data, because if somebody syndicates an article of yours that has, let’s say, eight links to other internal pages, and they syndicate it to 10,000 websites, well, then you’ve just got 80,000 new links pointing to your site that should have been internal links but are now external.
We actually do know just a couple of years back several pretty strong brands got in trouble for syndicating their news content to other news websites. Now I’m not saying that negative SEO would necessarily trigger that same sort of penalty, but there’s the possibility. Even if it doesn’t trigger that penalty, chances are it’s going to sully the waters in terms of your link data.
Problem #3: Nofollowed malware links & hacked content
There are a couple of other miscellaneous types of negative SEO that don’t really get talked about much.
Nofollowed malware links in UGC
For example, if you have any kind of user-generated content on your site, like let’s say you have comments for example, even if you nofollow those comments, the links that are included in there might point to things like malware.
We know that Google will ultimately identify your site as not being safe if it finds these types of links.
Unfortunately, in some cases, there are ways to make it look like there are links on your site that aren’t really under your control through things like HTML injection. For example, you can actually do this to Google right now.
You can inject HTML onto the page of part of their website that makes it look like they’re linking to someone else. If Google actually crawled itself, which luckily they don’t in this case, if they crawled that page and found that malware link, the whole domain in the Google search results would likely start to show that this site might not be safe.
Of course, there’s always the issue with hacked content, which is becoming more and more popular.
Fear, uncertainty, and doubt
All of this really boils down to this concept of FUD — fear, uncertainty, and doubt. You see it’s not so much about bowling you out of the search engines. It’s about making it so that SEO just isn’t workable anymore.
1. Lose access to critical data
Now it’s been at least a decade since everybody started saying that they used data-driven SEO tactics, data-driven SEO strategies. Well, if your data is corrupted, if you lose access to critical data, you will not be able to make smart decisions. How will you know whether or not the reason your page has lost rankings to another has anything to do with links if you can’t get to the link data that you need because it’s been filled with 100,000 spammy links?
2. Impossible to discern the cause of lost rankings
This leads to number two: it’s impossible to discern the cause of lost rankings. It could be duplicate content. It could be an issue with these hundreds of thousands of links. It could be something completely different. But because the waters have been muddied so much, it becomes very difficult to determine exactly what’s going on, and this of course makes SEO less certain.
3. Makes SEO uncertain
The less certain it becomes, the more other advertising channels become valuable. Paid search becomes more valuable. Social media becomes more valuable. That’s a problem if you’re a search engine optimization agency or a consultant, because you have the real likelihood of losing clients because you can’t make smart decisions for them anymore because their data has been damaged by negative SEO.
It would be really wonderful if Google would actually show us in Google Search Console what links they’re ignoring and then would allow us to export only the ones they care about. But something tells me that that’s probably beyond what Google is willing to share. So do we have any kind of way to fight back? There are a couple.
How do you fight back against negative SEO?
1. Canonical burn pages
Chances are if you’ve seen some of my other Whiteboard Fridays, you’ve heard me talk about canonical burn pages. Real simply, when you have an important page on your site that you intend to rank, you should create another version of it that is identical and that has a canonical link pointing back to the original. Any kind of link building that you do, you should point to that canonical page.
The reason is simple. If somebody does negative SEO, they’re going to have two choices. They’re either going to do it to the page that’s getting linked to, or they’re going to do it to the page that’s getting ranked. Normally, they’ll do it to the one that’s getting ranked. Well, if they do, then you can get rid of that page and just hold on to the canonical burn page because it doesn’t have any of these negative links.
Or if they choose the canonical burn page, you can get rid of that one and just keep your original page. Yes, it means you sacrifice the hard earned links that you acquired in the first place, but it’s better than losing the possibility in the future altogether.
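To make this concrete, here’s a minimal sketch of the markup involved. The URLs are hypothetical; the essential piece is the rel="canonical" element on the burn copy pointing back to the original page.

```html
<!-- Hypothetical setup: /widgets-burn/ is an identical copy of /widgets/
     and is the page you point your link building at. -->
<!-- In the <head> of https://example.com/widgets-burn/ : -->
<link rel="canonical" href="https://example.com/widgets/">

<!-- If /widgets-burn/ gets hit with a negative SEO campaign, remove or
     noindex it; the original /widgets/ page is left untouched. -->
```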
2. Embedded styled attribution
Another opportunity here, which I think is kind of sneaky and fun, is what I call embedded styled attribution.
You can imagine that my content might say “Russ Jones says so and so and so and so.” Well, imagine surrounding “Russ Jones” with H1 tags, and then surrounding that with a span tag whose class makes the H1 inside it render as normal-sized text.
Well, chances are if they’re using one of these copied content techniques, they’re not copying your CSS style sheet as well. When that gets published to all of these other sites, in giant, big letters it has your name or any other phrase that you really want. Now this isn’t actually going to solve your problem, other than just really frustrate the hell out of whoever is trying to screw with you.
But sometimes that’s enough to get them to stop.
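Here’s a rough sketch of how that might look in practice; the class name and surrounding copy are invented for illustration.

```html
<style>
  /* On your own site, this rule makes the embedded h1 read as normal body text */
  .byline h1 { display: inline; font-size: inherit; font-weight: inherit; margin: 0; }
</style>

<div class="byline">
  <h1>Russ Jones</h1> says negative SEO can corrupt your link data even when
  your rankings hold steady.
</div>

<!-- A scraper that copies your HTML but not your CSS will render
     "Russ Jones" as a giant default h1 on their own page. -->
```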
3. Link Lists
The third one, the one that I really recommend is Link Lists. This is a feature inside of Moz’s Link Explorer, which allows you to track the links that are pointing to your site. As you get links, real links, good links, add them to a Link List, and that way you will always have a list of links that you know are good, that you can compare against the list of links that might be sullied by a negative SEO campaign.
By using Link Lists, you can discern the difference between what’s actually being ignored by Google, at least to some degree, and what actually matters. I hope this is helpful. But unfortunately, I’ve got to say, at the end of the day, a sufficiently well-run negative SEO campaign can make the difference in whether or not you use SEO in the future at all.
It might not knock you out of Google, but it might make it so that other types of marketing are just better choices. So hopefully this has been some help. I’d love to talk to you in the comments about different ways of dealing with negative SEO, like how to track down who is responsible. So just go ahead and fill those comments up with any questions or ideas.
I would love to hear them. Thanks again and I look forward to talking to you in another Whiteboard Friday.
Reporting fake and duplicate listings to Google sounds hard. Sometimes it can be. But very often, it’s as easy as falling off a log, takes only a modest session of spam fighting, and can yield significant local ranking improvements.
If your local business (or the local brands your agency markets) isn’t using spam fighting as a ranking tactic because you feel you lack the time or skills, please sit down with me for a sec.
What if I told you I spent about an hour yesterday doing something that moved a Home Depot location up 3 spots in a competitive market in Google’s local rankings less than 24 hours later? What if, for you, moving up a spot or two would get you out of Google’s local finder limbo and into the actual local pack limelight?
Today I’m going to show you exactly what I did to fight spam, how fast and easy it was to sweep out junk listings, and how rewarding it can be to see results transform in favor of the legitimate businesses you market.
Who knew that shopping for window coverings would lead me into a den of spammers throwing shade all over Google?
The story of Google My Business spam is now more than a decade in the making, with scandalous examples like fake listings for locksmiths and addiction treatment centers proving how unsafe and unacceptable local business platforms can become when left unguarded.
But even in non-YMYL industries, spam listings deceive the public, waste consumers’ time, inhibit legitimate businesses from being discovered, and erode trust in the spam-hosting platform. I saw all of this in action when I was shopping to replace some broken blinds in my home, and it was such a hassle trying to find an actual vendor amid the chaff of broken, duplicate, and lead gen listings, I decided to do something about it.
I selected an SF Bay area branch of Home Depot as my hypothetical “client.” I knew they had a legitimate location in the city of Vallejo, CA — a place I don’t live but sometimes travel to, thereby excluding the influence of proximity from my study. I knew that they were only earning an 8th place ranking in Google’s Local Finder, pushed down by spam. I wanted to see how quickly I could impact Home Depot’s surprisingly bad ranking.
I took the following steps, and encourage you to take them for any local business you’re marketing, too:
Step 1: Search
While located at the place of business you’re marketing, perform a Google search (or have your client perform it) for the keyword phrase for which you most desire improved local rankings. Of course, if you’re already ranking as well as you want to for the searchers nearest you, you can still follow this process to investigate somewhat more distant areas within your potential reach where you want to increase visibility.
In the results from your search, click on the “more businesses” link at the bottom of the local pack, and you’ll be taken to the interface commonly called the “Local Finder.”
The Local Finder isn’t typically 100% identical to the local pack in exact ranking order, but it’s the best place I know of to see how things stand beyond the first 3 results that make up Google’s local packs, telling a business which companies they need to surpass to move up towards local pack inclusion.
Find yourself in the local finder. In my case, the Home Depot location was at position 8. I hope you’re somewhere within the first set of 20 results Google typically gives, but if you’re not, keep paging through until you locate your listing. If you don’t find yourself at all, you may need to troubleshoot whether an eligibility issue, suspension, or filter is at play. But, hopefully that’s not you today.
Step 2: Build a spreadsheet
Create a simple spreadsheet with columns for each business’s name, address, and phone, plus a Notes column and an Actions column. Populate the spreadsheet by cutting and pasting the basic NAP (name, address, phone) for every competitor ranking above you, and include your own listing, too, of course! If you work for an agency, you’ll need to get the client to help you with this step by filling the spreadsheet out based on their search from their place of business.
In my case, I recorded everything in the first 20 results of the Local Finder, because I saw spam both above and below my “client,” and wanted to see the total movement resulting from my work in that result set.
Step 3: Identify obvious spam
We want to catch the easy fish today. You can go down rabbit holes another day, trying to ferret out weirdly woven webs of lead gen sites spanning the nation, but today, we’re just looking to weed out listings that clearly, blatantly don’t belong in the Local Finder.
Go through these five easy steps:
Look at the Google Streetview image for each business outranking you. Do you see a business with signage that matches the name on the listing? Move on. But if you see a house, an empty parking lot, or Google is marking the listing as “location approximate”, jot that down in the Notes section of your spreadsheet. For example, I saw a supposed window coverings showroom that Streetview was locating in an empty lot on a military base. Big red flag there.
Make note of any businesses that share an address, phone number, or very similar name. Make note of anything with an overly long name that seems more like a string of keywords than a brand. For example, a listing in my set was called: Custom Window Treatments in Fairfield, CA Hunter Douglas Dealer.
For every business you noted down in steps one and two, get on the phone. Is the number a working number? If someone answers, do they answer with the name of the business? Note it down. Say, “Hi, where is your shop located?” If the answer is that it’s not a shop, it’s a mobile business, note that down. Finally, if anything seems off, check the Guidelines for representing your business on Google to see what’s allowed in the industry you’re investigating. For example, it’s perfectly okay for a window blinds dealer to operate out of their home, but if they’re operating out of 5 homes in the same city, it’s likely a violation. In my case, just a couple of minutes on the phone identified multiple listings with phone numbers that were no longer in service.
Visit the iffy websites. Now that you’re narrowing your spreadsheet down to a set of businesses that are either obviously legitimate or “iffy,” visit the websites of the iffy ones. Does the name on the listing match the name on the website? Does anything else look odd? Note it down.
Highlight businesses that are clearly spammy. Your dive hasn’t been deep, but by now, it may have identified one or more listings that you strongly believe don’t belong because they have spammy names, fake addresses, or out-of-service phone numbers. My lightning-quick pass through my data set showed that six of the twenty listings were clearly junk. That’s 30% of Google’s info being worthless! I suggest marking these in red text in your spreadsheet to make the next step fast and easy.
Step 4: Report it!
If you want to become a spam-fighting ace later, you’ll need to become familiar with Google’s Business Redressal Complaint Form which gives you lots of room for sharing your documentation of why a listing should be removed. In fact, if an aggravating spammer remains in the Local Finder despite what we’re doing in this session, this form is where you’d head next for a more concerted effort.
But, today, I promised the easiness of falling off a log, so our first effort at impacting the results will simply focus on the “suggest an edit” function you’ll see on each listing you’re trying to get rid of. This is how you do it:
After you click the “suggest an edit” button on the listing, a popup will appear. If you’re reporting something like a spammy name, click the “change name or other details” option and fill out the form. If you’ve determined a listing represents a non-existent, closed, unreachable, or duplicate entity, choose the “remove this place” option and then select the dropdown entry that most closely matches the problem. You can add a screenshot or other image if you like, but in my quick pass through the data, I didn’t bother.
Record the exact action you took for each spam listing in the “Actions” column of the spreadsheet. In my case, I was reporting a mixture of non-existent buildings, out-of-service phone numbers, and one duplicate listing with a spammy name.
Finally, hit the “send” button and you’re done.
Step 5: Record the results
Within an hour of filing my reports with Google, I received a confirmation email for 5 of the 6 entries I had flagged.
The only entry I received no email for was the duplicate listing with the spammy name. But I didn’t let this worry me. I went about the rest of my day and checked back in the morning.
I’m not fond of calling out businesses in public. Sometimes, there are good folks who are honestly confused about what’s allowed and what isn’t. Also, I sometimes find screenshots of the local finder overwhelmingly cluttered and endlessly long to look at. Instead, I created a bare-bones representational schematic of the total outcome of my hour of spam-fighting work.
The red markers are legit businesses. The grey ones are spam. The green one is the Home Depot I was trying to positively impact. I assigned a letter of the alphabet to each listing to better help me see how the order changed from day one to day two. The lines show the movement over the course of the 24 hours.
The results were that:
A stayed the same, and while B and C swapped positions, that was unlikely to be due to my work; local rankings can fluctuate like this from hour to hour.
Five out of six spam listings I reported disappeared. The keyword-stuffed duplicate listing which was initially at position K was replaced by the brand’s legitimate listing one spot lower than it had been.
The majority of the legitimate businesses enjoyed upward movement, with the exception of position I which went down, and M and R which disappeared. Perhaps new businesses moving into the Local Finder triggered a filter, or perhaps it was just the endless tide of position changes and they’ll be back tomorrow.
Seven new listings made it into the top 20. Unfortunately, at a glance, it looked to me like 3 of these new listings were new spam. Dang, Google!
Most rewardingly, my hypothetical client, Home Depot, moved up 3 spots. What a super easy win!
Fill out the final column in your spreadsheet with your results.
What we’ve learned
You battle upstream every day for your business or clients. You twist yourself like a paperclip complying with Google’s guidelines, seeking new link and unstructured citation opportunities, straining your brain to shake out new content, monitoring reviews like a chef trying to keep a cream sauce from separating. You do all this in the struggle for better, broader visibility, hoping that each effort will incrementally improve reputation, rankings, traffic, and conversions.
Catch your breath. Not everything in life has to be so hard. The river of work ahead is always wide, but don’t overlook the simplest stepping stones. Saunter past the spam listings without breaking a sweat and enjoy the easy upward progress!
1. Google isn’t policing local listing spam
Expert local SEOs can spot spam listings in query after query, industry after industry, but Google has yet to staff a workforce or design an algorithm sufficient to address bad data that has direct, real-world impacts on businesses and customers. I don’t know if they lack the skills or the will to take responsibility for this enormous problem they’ve created, but the problem is plain. Until Google steps up, my best advice is to do the smart and civic work of watchdogging the results that most affect the local community you serve. It’s a positive not just for your brand, but for every legitimate business and every neighbor near you.
2. You may get in over your head with spam
Today’s session was as simple as possible, but GMB spam can stem from complex, global networks. The Home Depot location I randomly rewarded with a 3-place jump in Local Finder rankings clearly isn’t dedicating sufficient resources to spam fighting, or they would’ve done this work themselves.
But the extent of spam is severe. If your market is one that’s heavily spammed, you can quickly become overwhelmed by the problem. In such cases, I recommend that you:
Follow Joy Hawkins, Mike Blumenthal, and Jason Brown, all of whom publish ongoing information on this subject. If you wade into a spam network, I recommend reporting it to one or more of these experts on Twitter, and, if you wish to become a skilled spam fighter yourself, you will learn a lot from what these three have published.
If you don’t want to fight spam yourself, hire an agency that has the smarts to be offering this as a service.
3. What if you built a local movement?
What if you and your friendlier competitors joined forces to knock spam out of Google together? Imagine all of the florists, hair salons, or medical practitioners in a town coming together to watch the local SERPs in shifts so that everyone in their market could benefit from bad actors being reported.
Maybe you’re already in a local business association with many hands that could lighten the work of protecting a whole community from unethical business practices. Maybe your town could then join up with the nearest major city, and that city could begin putting pressure on legislators. Maybe legislators would begin to realize the extent of the impacts when legitimate businesses face competition from fake entities and illegal practices. Maybe new anti-trust and communications regulations would ensue.
Now, I promised you “simple,” and this isn’t it, is it? But every time I see a fake listing, I know I’m looking at a single pebble and I’m beginning to think it may take an avalanche to bring about change great enough to protect both local brands and consumers. Google is now 15 years into this dynamic with no serious commitment in sight to resolve it.
At least in your own backyard, in your own community, you can be one small part of the solution with the easy tactics I’ve shared today, but maybe it’s time for local commerce to begin both doing more and expecting more in the way of protections.
Have you ever wished there were an easy way to see all the top keywords your site is ranking for? How about a competitor’s? What about those times when you’re stumped trying to come up with keywords related to your core topic, or want to know the questions people are asking around your keywords?
There’s plenty of keyword research workflow gold to be uncovered in Keyword Explorer. It’s a tool that can save you a ton of time when it comes to both general keyword research and the nitty-gritty details. And time and again, we hear from folks who are surprised that a tool they use all the time can do [insert cool and helpful thing here] — they had no idea!
Well, let’s remedy that! Starting with today’s post, we’ll be publishing a series of quick videos put together by our own brilliant SEO scientist (and, according to Google, the smartest SEO in the world) Britney Muller. Each one will highlight one super useful workflow to solve a keyword research problem, and most are quick — just under a couple of minutes. Take a gander at the videos or skim the transcripts to find a workflow that catches your eye, and if you’re the type of person who likes to try it out in real time, head to the tool and give it a spin (if you have a Moz Community account like most Moz Blog readers, you already have free access):
You can do this a couple of ways. One is just to enter in a head keyword term that you want to explore — so maybe that’s “SEO” — and you can click Search. From here, you can go to Keyword Suggestions, where you can find all sorts of other keywords relevant to the keyword “SEO.”
We have a couple filters available to help you narrow down that search a little bit better. Here, without doing any filtering, you can see all of these keywords, and they’re ranked by relevancy and then search volume. So you do tend to see the higher search volume keywords at the top.
Save keyword suggestions in a list
But you can go through here and click the keywords that you want to save for your list. You can also do some filtering. We could group keywords by low lexical similarity. What this means is it’s basically just going to take somewhat similar keywords and batch them together for you to make it a bit easier.
Here you can see there are 141 grouped keywords under “SEO.” Fifty keywords have fallen under “SEO services” and so on. This gives you a higher-level, topical awareness of what the keywords look like. If you were to select these groups, you could add them to a list.
When I say add list, I mean you can just save them in a keyword list that you can refer back to time and time again. These lists are amazing, one of my favorite features. What you would basically do is create a new list. I’m just going to call it Test. That adds all of your selected keywords to a list. You can continue adding keywords by different filters.
Filter by which keywords are questions
One of my other favorite things to filter by is “Are questions.” This will give you keywords that are actual questions, and it’s really neat to be able to try to bake these into your content marketing or an FAQ page. Really helpful. You can select all up here. Then I can just add that to that SEO Test list that we already created. I hope this gives you an idea of how to use some of these general filters.
Filter based on closely or broadly related topics and synonyms
You can also filter based on closely related topics, broadly related topics and synonyms. Keywords with similar result pages is very interesting. You can really play around with both of these filters.
Filter by volume
You can also filter by volume. If you are trying to go after those high volume keywords, maybe you set a filter for here. Maybe you’re looking for long tail keywords, and then you’re going to look a little bit on the smaller search volume end here. These can all help in playing around and discovering more keywords.
Find the keywords a domain currently ranks for
Another thing that you can do to expand your keyword research is by entering in a domain. You can see that this changed to root domain when I entered moz.com. If you click Search, you’re going to get all of the keywords that that domain currently ranks for, which is really powerful. You could see all of the ranking keywords, add that to a list, and monitor how your website is performing.
Find competitors’ keywords
If you want to get really strategic, you can plug in some of your competitor sites and see what their keywords are. These are all things that you can do to expand your keyword research set. From there, you’re going to hopefully have one or a couple keyword lists that house all of this data for you to better strategically route your SEO strategy.
If we know that related questions are occurring most often, you can create strategic content around that. The opportunities with these filters and sorts are endless.
2. How to discover ranking keywords for a particular domain or an exact page
See all the keywords a particular domain ranks for
This is super easy to do in Keyword Explorer. You just go to the main search bar. Let’s just throw in moz.com for example. I can see all the keywords that currently rank for moz.com.
We’re seeing over 114,000, and we get this really beautiful, high-level overview as to what that looks like. You can see the ranking distribution, and then you can even go into all of those ranking keywords in this tab here, which is really cool.
See all the keywords a specific page ranks for
You can do the same exact thing for a specific page. So let’s take the Beginner’s Guide. This will toggle to Exact Page, and you just click Search. Here we’re going to see that it ranks for 804 keywords. You get to know exactly what those are, what the difficulty is, the monthly search volume.
Keep track of those keywords in a list
You can add these things to a list to keep an eye on. It’s also great to do for competitive pages that appear to be doing very well or popular things occurring in your space. But this is just a quick and easy way to see what root domains or exact pages are currently ranking for.
3. How to quickly find keyword opportunities for a URL or a specific page
Find lower-ranking keywords that could be improved upon
I’m just going to paste in the URL to the Beginner’s Guide to SEO in Keyword Explorer. I’m going to look at all of the ranking keywords for this URL, and what I want to do is I want to sort by rank.
I want to see what’s ranking between positions 4 and 50, and see which keywords aren’t doing so well and could be improved upon. Right away we’re seeing this huge monthly search volume keyword, “SEO best practices,” and we’re ranking number 4.
It can definitely be improved upon. You can also go ahead and take a look at keywords that you rank for outside of page 1, meaning you rank 11 or beyond for these keywords. These could definitely also be improved upon. You can save these keywords to a list.
You can export them and strategically create content to improve those results.
4. How to check rankings for a set of keywords
Use keyword lists to check rankings for a subset of keywords
This is pretty easy. So let’s say you have a keyword list for your target keywords. Here I’ve got an SEO Test keyword list. I want to see how Moz is ranking for these keywords.
This is where you would just add Check Rankings For and add your URL. I’m just going to put moz.com, check rankings, and I can immediately see how well we’re doing for these specific keywords.
I can filter highest to lowest and vice versa.
5. How to track your keywords
Set up a Campaign
If you don’t already have a list of your keywords that you would like to track, I suggest watching the General Keyword Research video above to help discover some of those keywords. But if you already have the keywords you know that you want to track for a particular site, definitely set up an account with Moz Pro and set up a Campaign.
It walks you through all of the steps to set up a particular Campaign for a URL. If you already have your Campaign set up, for example this is my Moz Campaign and I want to add say a new list of keywords to track, what you can do is you can come into this dashboard view and then go to Rankings.
If you scroll down here, you can add keywords. So let’s say Moz is breaking into the conversion rate optimization space. I can paste in a list of my CRO keywords, and then I can add a label.
Use keyword labels to track progress on topics over time
Now that’s going to append that tag so I can filter by just CRO keywords. Then I’m going to click Add Keywords. This is going to take a little while to start to kick into gear basically.
But once it starts tracking, once these keywords are added, you’ll get to see them historically over time and even you against your competitors. It’s a really great way to monitor how you’re doing with keywords, where you’re seeing big drops or gains, and how you can better pivot your strategy to target those things.
Discover anything new or especially useful? Let us know on Twitter or here in the comments, and keep an eye out for more quick and fun keyword research workflow videos in the coming weeks — we’ve got some good stuff coming your way, from finding organic CTR for a keyword to discovering SERP feature opportunities and more.
Most marketers understand that links to websites count as “votes” on the web. Google — and other search engines — use these votes to rank web pages in search results. The more votes a page accumulates, the better that page’s chances of ranking in search results.
This is the popularity part of Google’s algorithm, described in the original PageRank patent. But Google doesn’t stop at using links for popularity. They’ve invented a number of clever ways to use links to determine relevance and authority — i.e. what is this page about and is it a trusted answer for the user’s search query?
To rank in Google, it’s not simply the number of votes you receive from popular pages, but the relevance and authority of those links as well.
The principles Google may use grow complex quickly, but we’ve included a number of simple ways to leverage these strategies for more relevant rankings at the bottom of the post.
1. Anchor text
“Thus, even though the text of the document itself may not match the search terms, if the document is cited by documents whose titles or backlink anchor text match the search terms, the document will be considered a match.”
In a nutshell, if a page links to you using the anchor text “hipster pizza,” there’s a good chance your page is about pizza — and maybe hipsters.
If many pages link to you using variations of “pizza”— i.e. pizza restaurant, pizza delivery, Seattle pizza — then Google can see this as a strong ranking signal.
(In fact, so powerful is this effect, that if you search Google for “hipster pizza” here in Seattle, you’ll see our target for the link above ranking on the first page.)
How to leverage Anchor Text for SEO:
Volumes could be written on this topic. Google’s own SEO Starter Guide recommends a number of anchor text best practices, among them:
Use (and seek) descriptive anchor text that describes what your page is about
Over-optimization can signal manipulation to Google, and many SEOs recommend a strategy of anchor text variety for better rankings.
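As a quick illustration (with hypothetical URLs), compare a generic anchor with descriptive, varied ones:

```html
<!-- Generic anchor text tells search engines little about the target page: -->
<a href="https://example.com/pizza-delivery/">click here</a>

<!-- Descriptive, varied anchors reinforce what the page is about: -->
<a href="https://example.com/pizza-delivery/">Seattle pizza delivery</a>
<a href="https://example.com/pizza-delivery/">order pizza in Seattle</a>
```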
2. Hub and authority pages
In the early days of Google, not long after Larry Page figured out how to rank pages based on popularity, the Hilltop algorithm worked out how to rank pages on authority. It accomplished this by looking for “expert” pages linking to them.
An expert page is a document that links to many other topically relevant pages. If a page is linked to from several expert pages, then it is considered an authority on that topic and may rank higher.
A similar concept using “hub” and “authority” pages was put forth by Jon Kleinberg, a Cornell professor with grants from Google and other search engines. Kleinberg explains: “Hubs and authorities exhibit what could be called a mutually reinforcing relationship: a good hub is a page that points to many good authorities; a good authority is a page that is pointed to by many good hubs.”
While we can’t know the degree to which these concepts are used today, Google acquired the Hilltop algorithm in 2003.
How to leverage Authority Pages for SEO:
A common practice of link builders today is to seek links from “Resource Pages.” These are basically Hub/Expert pages that link out to helpful sites around a topic. Scoring links on these pages can often help you a ton.
3. The reasonable surfer
The idea behind Google’s Reasonable Surfer patent is that certain links on a page are more important than others, and are thus assigned increased weight. Examples of more important links include:
Prominent links, higher up in the HTML
Topically relevant links, related to both the source document and the target document.
Conversely, less important links include:
“Terms of Service” and footer links
Links unrelated to the document
Because the important links are more likely to be clicked by a “reasonable surfer,” a topically relevant link can carry more weight than an off-topic one.
“…when a topical cluster associated with the source document is related to a topical cluster associated with the target document, the link has a higher probability of being selected than when the topical cluster associated with the source document is unrelated to the topical cluster associated with the target document.” – United States Patent: 7716225
How to leverage Reasonable Surfer for SEO:
The key with leveraging Reasonable Surfer for SEO is simply: work to obtain links that are more likely to get clicked.
This means that you not only benefit from getting links from prominent areas of high-traffic pages, but the more relevant the link is to the topic of the hosting page, the more benefit it may provide.
Neither the page topic nor the anchor text has to be an exact match, but it helps if they are in the same general area. For example, if you were writing about “baseball,” links with relevant anchor text from pages about sports, equipment, athletes, training, exercise, tourism, and more could all help boost rankings more than less relevant links.
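Google has never published the actual weighting model, but a toy sketch can illustrate the intuition: score each link by its prominence and its topical match, so prominent, on-topic links earn more credit than boilerplate or off-topic ones. Every field and weight below is invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Link:
    anchor: str
    in_main_content: bool    # vs. footer or terms-of-service boilerplate
    topically_related: bool  # source and target share a topical cluster

def toy_link_weight(link: Link) -> float:
    """Illustrative only: the real Reasonable Surfer weights are not public."""
    weight = 1.0
    weight *= 2.0 if link.in_main_content else 0.2    # prominent links count more
    weight *= 1.5 if link.topically_related else 0.5  # on-topic links count more
    return weight

links = [
    Link("baseball training drills", in_main_content=True, topically_related=True),
    Link("terms of service", in_main_content=False, topically_related=False),
]
for link in links:
    print(f"{link.anchor}: {toy_link_weight(link)}")  # 3.0 vs. 0.1
```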
4. Topic-sensitive PageRank
Despite rumors to the contrary, PageRank is very much alive and well at Google.
PageRank technology can be used to distribute all kinds of different ranking signals throughout a search index. While the most common examples are popularity and trust, another signal is topical relevance, as laid out in this paper by Taher Haveliwala, who went on to become a Google software engineer.
The original concept works by grouping “seed pages” by topic (for example, the Politics section of the New York Times). Every link out from these pages passes on a small amount of Topic-Sensitive PageRank, which is passed on through the next set of links, and so on.
In the example above, 2 identical pages target “Football”. Both have the same number of links, but the first one has more relevant Topic-Sensitive PageRank from a linking sports page. Hence, it ranks higher.
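For the curious, here’s a minimal sketch of the mechanism from Haveliwala’s paper: ordinary PageRank power iteration, except the teleport (random jump) vector is concentrated on topic seed pages instead of spread across all pages. The tiny graph and its values are invented for illustration.

```python
import numpy as np

# Toy web graph: adjacency[i][j] = 1 means page i links to page j.
pages = ["sports_hub", "football_a", "football_b", "misc"]
adjacency = np.array([
    [0, 1, 0, 0],   # sports_hub -> football_a
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 1, 1, 0],   # misc links to both football pages
], dtype=float)

def topic_sensitive_pagerank(adj, seed_indices, damping=0.85, iters=50):
    """Power iteration where the teleport vector is biased toward topic seeds."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1           # guard against divide-by-zero sinks
    transition = adj / out_degree
    teleport = np.zeros(n)
    teleport[seed_indices] = 1.0 / len(seed_indices)  # jump only to seed pages
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) * teleport + damping * rank @ transition
    return rank

# Bias the calculation toward the "sports" seed page (index 0):
for page, score in zip(pages, topic_sensitive_pagerank(adjacency, [0])):
    print(f"{page}: {score:.3f}")
```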
How to leverage topic-sensitive PageRank for SEO:
The concept is simple. When obtaining links, try to get links from pages that are about the same topic you want to rank for. Also, get links from pages that are themselves linked to by authoritative pages on the same topic.
5. Phrase-based indexing
What’s important to understand is that phrase-based indexing allows search engines to score the relevancy of any link by looking for related phrases in both the source and target pages. The more related phrases, the higher the score.
In the example below, the first page with the anchor text link “US President” may carry more weight because the page also contains several other phrases related to “US President” and “John Adams.”
In addition to ranking documents based on the most relevant links, phrase-based indexing allows search engines to consider less relevant links as well, including:
Discounting spam and off-topic links: For example, an injected spam link to a gambling site from a page about cookie recipes will earn a very low outlink score based on relevancy and would carry less weight.
Fighting “Google Bombing”: For those that remember, Google bombing is the art of ranking a page highly for funny or politically-motivated phrases by “bombing” it with anchor text links, often unrelated to the page itself. Phrase-based indexing can stop Google bombing by scoring the links for relevance against the actual text on the page. This way, irrelevant links can be discounted.
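To make that scoring idea concrete, here’s a toy sketch that counts topic-related phrases on the linking page. The phrase list and snippets are invented, and real phrase-based indexing is far more sophisticated than substring counting.

```python
def related_phrase_score(source_text: str, related_phrases: list[str]) -> int:
    """Count how many topic-related phrases appear on the linking page.
    Illustrative only; real phrase-based indexing is far more involved."""
    text = source_text.lower()
    return sum(phrase in text for phrase in related_phrases)

# Hypothetical phrases related to "US President" / "John Adams":
related = ["vice president", "white house", "founding fathers", "continental congress"]

on_topic = "John Adams was the second US President and the first Vice President."
off_topic = "Mix the flour and sugar, then bake the cookies for ten minutes."

print(related_phrase_score(on_topic, related))    # 1 -> link scored as relevant
print(related_phrase_score(off_topic, related))   # 0 -> link discounted
```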
How to leverage phrase-based indexing for SEO:
Beyond anchor text and the general topic/authority of a page, it’s helpful to seek links from pages with related phrases.
This is especially helpful for on-page SEO and internal linking — when you optimize your own pages and link to yourself. Some people use LSI keywords for on-page optimization, though evidence that this helps SEO is disputed.
Solid keyword research typically provides a starting point to identify related keyword phrases. Below are closely related phrases to “best SEO tools” found using Keyword Explorer.
6. Local inter-connectivity
Local inter-connectivity refers to a reranking concept that reorders search results based on measuring how often each page is linked to by all the other pages.
To put it simply, when a page is linked to from a number of high-ranking results, it is likely more relevant than a page with fewer links from the same set of results.
This also provides a strong hint as to the types of links you should be seeking: pages that already rank highly for your target term.
How to leverage local inter-connectivity for SEO:
Quite simply, one of the easiest ways to rank is to obtain topically relevant links from sites that already rank for the term you are targeting.
Oftentimes, links from page 1 results can be quite difficult to obtain, so it’s helpful to look for pages that:
Rank for variations of your target terms
Are further down in Google’s results pages
Rank well for different, but still topically-related terms
7. The golden question
If the above concepts seem complex, the good news is you don’t have to actually understand the above concepts when trying to build links to your site.
To understand if a link is topically relevant to your site, simply ask yourself the golden question of link building: Will this link bring engaged, highly qualified visitors to my website?
The answer to the golden question is exactly what Google engineers are trying to determine when evaluating links, so you can arrive at a good end result without understanding the actual algorithms.
How to leverage the golden question for SEO:
Above all else, try to build links that bring engaged, high-value visitors to your site.
If you don’t care about the visitors a link may bring, why should Google care about the link?
SEO tips for topically relevant links
Consider this advice when thinking about links for SEO:
DO use good, descriptive anchor text for your links. This applies to internal links, outlinks to other sites, and links you seek from non-biased external sites.
DO seek relationships from authoritative, topically relevant sites. These include sites that rank well for your target keyword and “expert” pages that link to many authority sites. (For those interested, Majestic has done some interesting work around Topical Trust Flow.)
DO seek links from relevant pages. This includes examining the title, body, related phrases, and intent of the page to ensure its relevance to your target topic.
DO seek links that people are likely to click. The ideal link is often both topically relevant and placed in a prominent position.
Finally, DO try to earn and attract links to your site with high-quality, topically relevant content.
What are your best tips around topically relevant links? Let us know in the comments below!
Note: A version of this post was published previously, and has since been substantially updated. Big thanks to Bill Slawski and his blog SEO by the Sea, which acted as a starting point of research for many of these concepts.
Correlation studies have been a staple of the search engine optimization community for many years. Each time a new study is released, a chorus of naysayers seems to come magically out of the woodwork to remind us of the one thing they remember from high school statistics — that “correlation doesn’t mean causation.” They are, of course, right in their protestations, and, to their credit, it seems that an unfortunate number of those conducting correlation studies have forgotten this simple aphorism.
We collect a search result. We then order the results based on different metrics like the number of links. Finally, we compare the orders of the original search results with those produced by the different metrics. The closer they are, the higher the correlation between the two.
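For readers who want to see the mechanics, here’s a minimal sketch of that comparison using Spearman’s rank correlation, with invented numbers. Note the sign convention: position 1 is the best rank, so a metric that tracks rankings well shows a strong negative correlation with position number.

```python
from scipy.stats import spearmanr

# Top 10 results for one SERP, position 1 first.
serp_positions = list(range(1, 11))
# Root linking domains for each result, same order (hypothetical numbers).
linking_domains = [420, 310, 305, 150, 160, 90, 75, 40, 38, 12]

rho, p_value = spearmanr(serp_positions, linking_domains)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")  # strongly negative rho
```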
That being said, correlation studies are not altogether fruitless simply because they don’t necessarily uncover causal relationships (i.e., actual ranking factors). What correlation studies discover or confirm are correlates.
Correlates are simply measurements that share some relationship with the independent variable (in this case, the order of search results on a page). For example, we know that backlink counts are correlates of rank order. We also know that social shares are correlates of rank order.
Correlation studies also provide us with direction of the relationship. For example, ice cream sales are positive correlates with temperature and winter jackets are negative correlates with temperature — that is to say, when the temperature goes up, ice cream sales go up but winter jacket sales go down.
Finally, correlation studies can help us rule out proposed ranking factors. This is often overlooked, but it is an incredibly important part of correlation studies. Research that provides a negative result is often just as valuable as research that yields a positive result. We’ve been able to rule out many types of potential factors — like keyword density and the meta keywords tag — using correlation studies.
Unfortunately, the value of correlation studies tends to end there. In particular, we still want to know whether a correlate causes the rankings or is spurious. Spurious is just a fancy-sounding word for “false” or “fake.” A good example of a spurious relationship would be that ice cream sales cause an increase in drownings. In reality, the heat of the summer increases both ice cream sales and the number of people who go for a swim, and some of that swimming leads to drownings. So while ice cream sales are a correlate of drownings, the relationship is *spurious*: ice cream does not cause the drowning.
How might we go about teasing out the difference between causal and spurious relationships? One thing we know is that a cause happens before its effect, which means that a causal variable should predict a future change.
An alternative model for correlation studies
I propose an alternate methodology for conducting correlation studies. Rather than measure the correlation between a factor (like links or shares) and a SERP, we can measure the correlation between a factor and changes in the SERP over time.
The process works like this (a minimal code sketch follows the list):
Collect a SERP on day 1
Collect the link counts for each of the URLs in that SERP
Look for any URLs that are out of order with respect to links; for example, if position 2 has fewer links than position 3
Record that anomaly
Collect the same SERP in 14 days
Record whether the anomaly has been corrected (i.e., position 3 now out-ranks position 2)
Repeat across ten thousand keywords and test a variety of factors (backlinks, social shares, etc.)
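Here is a minimal sketch of that bookkeeping for a single keyword, with hypothetical URLs and link counts; a real study would repeat this across thousands of SERPs and several factors.

```python
def find_anomalies(serp, metric):
    """Return adjacent position pairs (i, i+1) where the lower-ranked URL
    has more of the metric (e.g. links) than the URL right above it."""
    return [
        (i, i + 1)
        for i in range(len(serp) - 1)
        if metric[serp[i]] < metric[serp[i + 1]]
    ]

def corrected(pair, day1_serp, day14_serp):
    """True if the two URLs swapped so the order now matches the metric."""
    a, b = day1_serp[pair[0]], day1_serp[pair[1]]
    return day14_serp.index(b) < day14_serp.index(a)

# Hypothetical data for one keyword:
day1 = ["urlA", "urlB", "urlC"]                   # SERP on day 1
day14 = ["urlA", "urlC", "urlB"]                  # same SERP 14 days later
links = {"urlA": 900, "urlB": 120, "urlC": 300}   # root linking domains

anomalies = find_anomalies(day1, links)           # [(1, 2)]: urlC out-links urlB
fixed = sum(corrected(pair, day1, day14) for pair in anomalies)
print(f"{fixed}/{len(anomalies)} anomalies corrected after 14 days")  # 1/1
```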
So what are the benefits of this methodology? By looking at change over time, we can see whether the ranking factor (correlate) is a leading or lagging feature. A lagging feature can automatically be ruled out as causal. A leading factor has the potential to be a causal factor.
We collect a search result. We record where the search result differs from the expected predictions of a particular variable (like links or social shares). We then collect the same search result 2 weeks later to see if the search engine has corrected the out-of-order results.
Following this methodology, we tested 3 different common correlates produced by ranking factors studies: Facebook shares, number of root linking domains, and Page Authority. The first step involved collecting 10,000 SERPs from randomly selected keywords in our Keyword Explorer corpus. We then recorded Facebook Shares, Root Linking Domains, and Page Authority for every URL. We noted every example where 2 adjacent URLs (like positions 2 and 3 or 7 and 8) were flipped with respect to the expected order predicted by the correlating factor. For example, if the #2 position had 30 shares while the #3 position had 50 shares, we noted that pair. Finally, 2 weeks later, we captured the same SERPs and identified the percent of times that Google rearranged the pair of URLs to match the expected correlation. We also randomly selected pairs of URLs to get a baseline percent likelihood that any 2 adjacent URLs would switch positions. Here were the results…
It’s important to note that it is incredibly rare to expect a leading factor to show up strongly in an analysis like this. While the experimental method is sound, it’s not as simple as a factor predicting the future — it assumes that in some cases we will know about a factor before Google does. The underlying assumption is that in some cases we have seen a ranking factor (like an increase in links or social shares) before Googlebot has, and that in the 2-week period, Google will catch up and correct the incorrectly ordered results. As you can expect, this is a rare occasion. However, with a sufficient number of observations, we should be able to see a statistically significant difference between lagging and leading results. Note, too, that the methodology only detects cases where a factor is both leading and where Moz Link Explorer discovered the relevant factor before Google did.
[Results chart: correction rates for Facebook Shares (controlled for Page Authority), Root Linking Domains, and Page Authority, compared against the random baseline]
In order to create a control, we randomly selected adjacent URL pairs in the first SERP collection and determined the likelihood that the second would outrank the first in the final SERP collection. Approximately 18.93% of the time, the worse-ranking URL would overtake the better-ranking URL. By setting this control, we can determine whether any of the potential correlates are leading factors – that is to say, potential causes of improved rankings.
Facebook Shares performed the worst of the three tested variables. Facebook Shares actually performed worse than random (18.31% vs. 18.93%), meaning that randomly selected pairs were more likely to switch than pairs where the second URL had more shares than the first. This is not altogether surprising, as the general industry consensus is that social signals are lagging factors — that is to say, traffic from higher rankings drives higher social shares, not the other way around. Accordingly, we would expect to see the ranking change first, before the increase in social shares.
Raw root linking domain counts performed substantially better than shares at ~20.5%. As I indicated before, this type of analysis is incredibly subtle because it only detects cases where a factor is both leading and where Moz Link Explorer discovered the relevant factor before Google did. Nevertheless, this result was statistically significant, with a P value < 0.0001 and a 95% confidence interval that RLDs will predict future ranking changes around 1.5% better than random.
By far, the highest-performing factor was Page Authority. At 21.5%, PA correctly predicted changes in SERPs 2.6% better than random. This is a strong indication of a leading factor, greatly outperforming social shares and outperforming the best predictive raw metric, root linking domains. This is not surprising: Page Authority is built to predict rankings, so we should expect it to outperform raw metrics in identifying when a shift in rankings might occur. Now, this is not to say that Google uses Moz Page Authority to rank sites, but rather that Moz Page Authority is a relatively good approximation of whatever link metrics Google is using to determine site rankings.
There are so many different experimental designs we can use to help improve our research industry-wide, and this is just one of the methods that can help us tease out the differences between causal ranking factors and lagging correlates. Experimental design does not need to be elaborate and the statistics to determine reliability do not need to be cutting edge. While machine learning offers much promise for improving our predictive models, simple statistics can do the trick when we’re establishing the fundamentals.