One of the many bits of news from Google I/O 2019 was that Google would soon start displaying podcasts in search results. “Soon” turned out to be very soon, as we’re already seeing these results surface. Here’s one from a search for our own podcast, MozPod:
While the feature itself is interesting, and the fact that the main result goes to Apple while the episodes go to Google is entertaining, the talk out of I/O suggested something much more intriguing – that Google would soon be indexing podcast content and returning audio clips in search results.
Can Google transcribe audio content?
Is this currently possible? In a word: yes. We know that Google has offered a speech-to-text service as part of Google Cloud Platform since 2017, which has already undergone a few iterations and upgrades. Earlier this year, Android Police spotted source code changes which suggested that Google was proactively transcribing some podcasts on the Google Podcasts platform.
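As a rough illustration of what's publicly available, here's how transcription might look using Google's official Node.js client. This is a minimal sketch, assuming a Google Cloud project with the Speech-to-Text API enabled and default credentials configured; the gs:// audio URI is a placeholder.

// Minimal sketch using the @google-cloud/speech Node.js client.
// Assumes a GCP project with the Speech-to-Text API enabled and
// application default credentials; the gs:// URI is a placeholder.
const speech = require("@google-cloud/speech");

async function transcribe() {
  const client = new speech.SpeechClient();

  // longRunningRecognize handles audio longer than a minute,
  // which is what podcast-length content requires
  const [operation] = await client.longRunningRecognize({
    config: { languageCode: "en-US" },
    audio: { uri: "gs://my-bucket/podcast-episode.flac" },
  });

  const [response] = await operation.promise();
  const transcript = response.results
    .map((result) => result.alternatives[0].transcript)
    .join("\n");
  console.log(transcript);
}

transcribe();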
We see evidence of this capability in the broader Google ecosystem. For example, here’s an automatic transcript on my Google Pixel phone for a recent call …
We even see evidence of this capability in search results, but in a different medium. As early as April 2017, Google was testing suggested clips in YouTube videos. Here’s a current example from a search for “how to swim butterfly”:
Note the “Suggested clip” highlighted in the blue box, and starting at the 2:30 mark. What’s interesting is that variations on this search not only produce different videos in some cases, but different clips within the same video. Here’s the result I got back for “how to swim the butterfly” (adding only the definite article “the”):
Now, the suggested clip is 101 seconds long and starts at the 1:54 mark. It’s clear from some suggested clips that the feature is still in its infancy, but it’s difficult to imagine Google being able to implement this feature dynamically without creating a transcript of the audio portion of these videos.
Why start with video? For Google, it just makes bottom-line sense. YouTube is a planetary system compared to the pleasant suburb of Google Podcasts, and it has an immensely powerful infrastructure backing it. If Google can return results based on the audio portion of a video, it’s only natural that they’d do the same for audio files.
How will audio surface in search?
The obvious starting points will be extensions of the podcast engine, including automatic transcription and full-text (full-audio) search – both of which already seem to be in the works. Once you can search within Google Podcasts, though, expect that search capability to broaden to general Google searches.
One big question is whether Google will return audio content directly or will use transcribed text. In some cases, returning audio clips may be a better match to searcher intent. If you’re searching for a movie clip or something you heard in a podcast, returning the original is a richer experience than returning plain text. The big advantage, though, will be to voice devices, such as Google Home. Returning audio would fill a content gap for voice devices and provide a direct bridge into full podcasts and other non-text content.
How many podcasts should I start?
We do seem to be in the midst of a minor podcast revival, and audio search may well accelerate that revival. As always, though, expect Google to release changes gradually and test them for weeks or months. If you’re already producing a podcast and want to make it accessible to search, make sure you’re part of the Google Podcasts ecosystem and are entering and updating the currently available metadata.
Other than having clean audio in a format Google can process, there’s probably nothing specific you’ll have to do down the road to get that content transcribed. It may be worth thinking, though, about how your audio content is structured. Completely free-form content, while it certainly has a place, may be harder for Google to evaluate. Is the theme of your podcast and each episode evident? Is there a structure where a machine could potentially parse questions and answers? Are there concise takeaways – maybe a summary at the end of each episode?
Ultimately, audio SEO will mean treating our audio content in a more structured and deliberate way. The broader evolution of Google across many devices also means that we need to be more aware of what type of content best fits our audience’s needs. Is the searcher looking for text, video, or audio? Each modality fits a different need and a different device (or set of devices) in the broader search ecosystem.
If we’re being honest, most of us probably view reporting the same way we view taking out the trash or folding the laundry. It’s a chore that robs us of time we could have spent on more important or enjoyable things.
Adding to the frustration is the reality that many clients don’t even read their reports. That’s right: all that time you put into pulling together the data and building the report, and it might be forever consigned to the dusty corner of your client’s inbox.
Hear me out though… have you ever thought of reporting as a client retention tool? Yes, reporting takes time away from SEO work that moves the needle, but it’s also critical if you want to have a campaign to work on at all.
In other words, no reporting = no value communicated = no more client.
The good news is that the reverse is also true. When we do SEO reporting well, we communicate our value and keep more clients, which is something that every agency and consultant can agree is important.
That all sounds nice, but how can we do that? Throughout my six years at an SEO agency, I picked up some reporting tips that I hope you’ll be able to benefit from as well.
1. Report on the right metrics
I’ve seen my share of reports that highlighted metrics that just didn’t reflect any of the client’s main objectives. Your clients are busy — the first sight of something irrelevant and they’ll lose interest, so make your reports count!
My process for determining what I should report on is fairly simple:
Identify the business objective
Create an SEO plan that will help achieve that goal
Execute the plan
Report on the metrics that best measure the work I did
In other words, choose appropriate KPIs to match their business objectives and your strategy, and stick to those for your reporting.
2. Set specific goals
You: “Good news! We got 4,000 organic visits last month.”
Client: “Why wasn’t it 5,000?”
If that’s ever happened to you before, you’re not alone.
This simple step is so easy to forget, but make sure your goals are specific and mutually agreed upon before you start! At the beginning of the month, tell your client what your goal is (ex: “We hope to be able to get 4,000 organic visits”). That way, when you review your report, you’ll be able to objectively say whether you missed/hit/exceeded your targets.
3. Eliminate jargon
Your clients are professionals in their own fields, not yours, so make sure to leave the shop-talking to Twitter. Before sending out a report, ask yourself:
Have I defined all potentially confusing metrics? I’ve seen some SEOs include a mini-glossary or analogies to explain some of their charts — I love this! It really helps disambiguate metrics that are easy to misunderstand.
Am I using words that aren’t used outside my own echo chamber? Some phrases become so ubiquitous in our immediate circles that we assume everyone uses them. In many cases, we’re using jargon without even realizing it!
Simply put, use clear language and layman’s terms in your client’s SEO reports. You won’t serve anyone by confusing them.
4. Visualize your data in meaningful ways
I once heard a client describe a report as “pretty, but useless.”
They had a point though. Their report was full of pie charts and line graphs that, while important-looking, conveyed no meaning to them.
Part of that “meaning” comes down to reporting on the metrics your client cares about (see #1), but the other half of that is choosing how you’ll display that information.
Good data visualization resources will help you transform raw metrics into a story that conveys meaning to your clients, so don’t skimp on this step!
5. Provide insights, not just metrics
I remember the first time someone explained to me the difference between metrics and insights. I was blown away.
It seems so simple now, but in my earliest days in digital marketing, I basically viewed “reporting” as synonymous with “data.” Raw, numeric, mind-numbing data.
The key to making your reports more meaningful to your clients is understanding that pure metrics have no intrinsic meaning. You have to synthesize the data in meaningful ways and pull out insights that help your client understand not just what the numbers are, but why they matter.
I find it helpful to ask “so what?” when going through a report. Client’s ranking on page 1 for this list of keywords? That’s cool, but why should my client care about this? How is it contributing to their goals? Work on answering that question before you communicate your reports.
6. Connect SEO results to revenue
I’m going to be honest, this one is tricky.
First of all, SEO is a few layers removed from conversions. When it comes to “the big three” (as I like to refer to rankings, traffic, and conversions), SEOs can:
Most directly influence rankings
Influence organic traffic, but a little less directly than rankings. For example, organic traffic can go down despite sustained rankings due to things like seasonality.
Influence organic conversions, but even less directly than traffic. Everything from the website design to the product/service itself can affect that.
Second, it can be difficult to connect SEO to revenue especially on websites where the ultimate conversion happens offline (ex: lead gen). In order to tie organic traffic to revenue, you’ll want to set up goal conversions and add a value to those conversions in your analytics, but here’s where that gets difficult:
Clients often don’t know their average LCV (lifetime customer value)
Clients often don’t know their average close rate (the rough percentage of leads that they close)
Clients sometimes do know these figures, but don’t want to share the information with you
Everyone has a different reporting methodology, but I personally tend to advocate for at least trying to connect SEO to revenue. I’ve been in enough situations where our client dropped us because they saw us as a cost-center rather than a profit-center to know that communicating your value in monetary terms can mean the difference between keeping your client or not.
Even though you can’t directly influence conversions and even if your client can only give you a rough ballpark figure for LCV and close rate, it’s better than nothing.
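To make that concrete, here’s a hypothetical back-of-the-envelope calculation; every figure below is made up purely for illustration.

// Hypothetical example: estimating the revenue value of organic leads.
// Both inputs are rough, client-supplied ballpark figures.
const lifetimeCustomerValue = 5000; // average LCV in dollars
const closeRate = 0.1;              // ~10% of leads become customers

// On average, each lead is worth LCV x close rate
const valuePerLead = lifetimeCustomerValue * closeRate; // $500

// If organic search drove 40 goal conversions this month:
const organicLeads = 40;
console.log(`Estimated organic revenue: $${organicLeads * valuePerLead}`); // $20000

Even a rough number like this reframes the conversation from sessions and rankings to dollars.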
7. Be available to fill in the gaps
Not everything can be explained in a report. Even if you’re able to add text commentary to elaborate on your data, there’s still the risk that a key point will be lost on your client completely. Expect this!
I’ve seen plenty of client reporting calls go well over an hour. While no two situations are alike, I think starting with a report that contains clear insights on the KPIs your client cares about will do wonders for shortening that conversation.
Your clients will be able to understand those insights on their own, which frees you up to add context and answer any questions without getting bogged down with back-and-forth over “red herring” metrics that distract from the main point.
I want to hear from you!
What about you? Every SEO has their own reporting best practices, wins, and horror stories — I want to hear yours!
What reporting trick do you have up your sleeve that could help your fellow SEOs save time (& their sanity)?
What’s your biggest reporting struggle and how are you trying to solve it?
What’s an example of a time when reporting played a role in salvaging a client relationship?
We’re in this together — so let’s learn from each other!
And if you want more where this came from, please consider downloading our free whitepaper: High-Impact SEO Reporting for Agencies! It’s full of advice and helpful tips for using reports to communicate value to your clients.
At Fractl, the data makes perfect sense to us: The global amount of digital information is growing exponentially over time.
This means that the “90 percent of all data…” statistic was true in 2013, 2016, and 2018, and it will continue to be true for the foreseeable future. As our culture continues to become more internet-integrated and mobile, we continue to produce massive amounts of data year over year while also becoming more comfortable with understanding large quantities of information.
This is hugely important to anyone who creates content on the web: Stats about how much data we create are great, but the stories buried in that data are what really matter. In the opening manifesto for FiveThirtyEight, one of the first sites on the web specifically devoted to data journalism, Editor-in-Chief Nate Silver wrote:
“Almost everything from our sporting events to our love lives now leaves behind a data trail.”
This type of data has always been of interest to marketers doing consumer research, but the rise of data journalism shows us that there is both consumer demand and almost infinite potential for great storytelling rooted in numbers.
In this post, I’ll highlight four key insights from data science and journalism and how content marketers can leverage them to create truly newsworthy content that stands out from the pack:
The numbers drive the narrative
Plotted points are more trustworthy than written words (especially by brands!)
Great data content is both beautiful and easy-to-interpret
Every company has a (data) story to tell
By the time you’re done, you’ll have a better understanding of how data visualization, from simple charts to complex interactive graphics, can help you tell a story and achieve wide visibility for your clients.
The numbers drive the narrative
Try Googling “infographics are dead,” and your top hit will be a 2015 think piece asserting that the medium has been dead for years, followed by many responses that the medium isn’t anywhere close to “dead.” These more optimistic articles tend to focus on the key aspects of infographics that have transformed since their popularity initially grew:
Data visualization (and the public’s appetite for it) is evolving, and
A bad data viz in an oversaturated market won’t cut it with overloaded consumers.
For content marketers, the advent of infographics was a dream come true: Anyone with even basic skills in Excel and a good graphic designer could whip up some charts, beautify them, and use them to share stories. But Infographics 1.0 quickly fizzled because they failed to deliver anything interesting — they were just a different way to share the same boring stories.
Data journalists do something very different. Take the groundbreaking work from Reuters on the Rohingya Muslim refugee camps in southern Bangladesh, which was awarded the Global Editors Network Award for Best Data Visualization in 2018. This piece starts with a story—an enormous refugee crisis taking place far away from the West—and uses interactive maps, stacked bar charts, and simple statistics visualizations to contextualize and amplify a heartbreaking narrative.
The Reuters piece isn’t only effective because of its innovative data viz techniques; rather, the piece begins with an extremely newsworthy human story and uses numbers to make sure it’s told in the most emotionally resonant way possible. Content marketers, who are absolutely inundated with advice on how storytelling is essential to their work, need to see data journalism as a way to drive their narratives forward, rather than thinking of data visualization simply as a way to pique interest or enhance credibility.
Plotted points are more trustworthy than written words
This is especially true when it comes to brands.
In the era of #FakeNews, content marketers are struggling more than ever to make sure their content is seen as precise, newsworthy, and trustworthy. The job of a content marketer is to produce work for a brand that can go out and reasonably compete for visibility against nonprofits, think tanks, universities, and mainstream media outlets simultaneously. While some brands are quite trusted by Americans, content marketers may find themselves working with lesser-known clients seeking to build up both awareness and trust through great content.
One of the best ways to do both is to follow the lead of data journalists by letting visual data content convey your story for you.
“Numbers don’t lie” vs. brand trustworthiness
In the buildup to the 2012 election, Nate Silver’s previous iteration of FiveThirtyEight drew both massive traffic to the New York Times and criticism from traditional political pundits, who argued that no “computer” could possibly predict election outcomes better than traditional journalists who had worked in politics for decades (an argument fairly similar to the one faced by the protagonists in Moneyball). In the end, Silver’s “computer” (actually a sophisticated model that FiveThirtyEight explains in great depth and open-sources) predicted every state correctly in 2012.
Silver and his team made the model broadly accessible to show off just how non-partisan it really was. It ingested a huge amount of historical election data, used probabilities and weights to figure out which knowledge was most important, and spit out a prediction as to what the most likely outcomes were. By showing how it all worked, Silver and FiveThirtyEight went a long way toward improving the public confidence in data—and, by extension, data journalism.
So, in the era of endless hot takes and the “everyone’s-a-journalist-now” mentality, content marketers looking to establish brand authority, credibility, and trust can learn an enormous amount from the proven success of data journalists — just stick to the numbers.
Find the nexus of simple and beautiful
Our team at Fractl has a tricky task on our hands: We root our content in data journalism with the ultimate goal of creating great stories that achieve wide visibility. But different stakeholders on our team (not to mention our clients) often want to achieve those ends by slightly different means.
Our creatives—the ones working with data—may want to build something enormously complex that crams as much data as possible into the smallest space they can. Our media relations team—experts in knowing the nuances of the press and what will or won’t appeal to journalists—may want something that communicates data simply and beautifully and can be summed up in one or two sentences, like the transcendent work of Mona Chalabi for the Guardian. A client, too, will often have specific expectations for how a piece should look and what should be included, and these factors need to be considered as well.
Striking the balance
With so many ways to present any given set of numbers, we at Fractl have found success by making data visualizations as complex as they need to be while always aiming for the nexus of simple and beautiful. In other words: Take raw numbers that will be interesting to people, think of a focused way to clearly visualize them, and then create designs that fit the overall sentiment of the piece.
On a campaign for Porch.com, we asked 1,000 Americans several questions about food, focusing on things that were light and humorous conversation starters. For example, “Is a hot dog a sandwich?” and “What do you put on a hot dog?” As a native Chicagoan who believes there is only one way to make a hot dog, this is exactly the type of debate that would make me take notice and share the content with friends on social media.
In response to those two questions, we got numbers that looked like this:
Using Tableau Public, a free data visualization platform that is one of the go-to tools for rapid building at Fractl, the tables above were transformed into rough cuts of a final visualization:
With the building blocks in place, we then gave extensive notes to our design team on how to make something that’s just as simple but much, much more attractive. Given the fun nature of this campaign, a more lighthearted design made sense, and our graphics team delivered. The entire campaign is worth checking out for the project manager’s innovative and expert ability to use simple numbers in a way that is beautiful, easy-to-approach, and instantly compelling.
All three of the visualizations above are reporting the exact same data, but only one of them is instantly shareable and keeps a narrative in mind: by creatively showing the food items themselves, our team turned the simple table of percentages in the first figure into a visualization that could be shared on social media or used by a journalist covering the story.
In other cases, such as if the topic is more serious, simple visualizations can be used to devastating effect. In work for a brand in the addiction and recovery space, we did an extensive analysis of open data hosted by the Centers for Disease Control and Prevention. The dramatic increase in drug overdose deaths in the United States is an emotional story fraught with powerful statistics. In creating a piece on the rise in mortality rate, we wanted to make sure we preserved the gravity of the topic and allowed the numbers to speak for themselves:
A key part of this visualization was adding one additional layer of complexity—age brackets—to tell a more contextualized and human story. Rather than simply presenting a single statistic, our team chose to highlight the fact that the increase in overdose deaths is something affecting Americans across the entire lifespan, and the effect of plotting six different lines on a single chart makes the visual point that addiction is getting worse for all Americans.
Every brand’s data has a story to tell
Spotify has more than 200 million global users, nearly half of whom pay a monthly fee to use the service (the other half generate revenue by listening to intermittent ads). As an organization, Spotify has data on how a sizeable portion of the world listens to its music and the actual characteristics of that music.
Data like this is what makes Spotify such a valuable brand from a dollars and cents standpoint, but a team of data journalists at The New York Times also saw an incredible story about how American music taste has changed in the last 30 years buried in Spotify’s data. The resulting piece, Why Songs of Summer Sound the Same, is a landmark work of data-driven, interactive journalism, and one that should set a content marketer’s head spinning with ideas.
For example, GoodRx, a platform that reports pricing data from more than 70,000 U.S. pharmacies, released a white paper and blog post that compared its internal data on prescription fills with US Census data on income and poverty. While the census data is free to anyone, the dataset on pharmacy fills is GoodRx’s own proprietary data. Data like this is obviously key to their overall valuation, but the way in which it was reported here told a deeply interesting story about income and access to medication without giving away anything that could potentially cost the firm. The report was picked up by the New York Times, undoubtedly boosting GoodRx’s visibility in organic search.
The Spotify and GoodRx pieces both highlight the fourth key insight on the effective use of data by content marketers: Every brand’s data has a story to tell. These pieces could only have come from their exact sources because only they had access to the data, making the findings unique to that specific brand and providing a key competitive advantage in the content landscape. While working with internal data comes with its own potential pitfalls and challenges, collaborating with a client to select meaningful internal data and directing its use for content and narrative should be at the forefront of a content marketer’s mind.
Blurring lines and breaking boundaries
A fascinating recent piece on Recode sought to reframe the highly publicized challenges facing journalists, stating:
“The plight of journalists might not be that bad if you’re willing to consider a broader view of ‘journalism.’”
The piece detailed that while job postings for journalists are off more than 10 percent since 2004, jobs broadly related to “content” have nearly quadrupled over the same time period. Creatives will always flock to the options that allow them to make what they love, and with organic search largely viewed as a meritocracy of content, the opportunities for brands and content marketers to utilize the data journalism toolkit have never been greater.
What’s more, much of the best data journalism out there typically uses only a handful of visualizations to get its point across. It was also reported recently that the median number of data sources for pieces created by the New York Times and The Washington Post was two, and that more than 60 percent of data journalism stories in both the Times and the Post during a recent time period (January–June 2017) relied only on government data.
Ultimately, the ease of running large surveys via a platform like Prolific Research, Qualtrics, or Amazon Mechanical Turk, coupled with the ever-increasing number of free and open data sets provided by the US government and sites like Kaggle and data.world, means that there is no shortage of numbers out there for content marketers to dig into and use to drive storytelling. The trick is in using the right blend of hard data and more ethereal emotional appeal to create a narrative that is truly compelling.
As brands increasingly invest in content as a means to propel organic search and educate the public, content marketers should seriously consider putting these key elements of data journalism into practice. In a world of endless spin and the increasing importance of showing your work, it’s best to remember the famous quote written by longtime Guardian editor C.P. Scott in 1921: “Comment is free, but facts are sacred.”
What do you think? How do you and your team leverage data journalism in your content marketing efforts?
When you publish new content, you want users to find it ranking in search results as fast as possible. Fortunately, there are a number of tips and tricks in the SEO toolbox to help you accomplish this goal. Sit back, turn up your volume, and let Cyrus Shepard show you exactly how in this week’s Whiteboard Friday.
[Note: #3 isn’t covered in the video, but we’ve included it in the post below. Enjoy!]
Click on the whiteboard image above to open a high-resolution version in a new tab!
Howdy, Moz fans. Welcome to another edition of Whiteboard Friday. I’m Cyrus Shepard, back in front of the whiteboard. So excited to be here today. We’re talking about ten tips to index and rank new content faster.
You publish some new content on your blog, on your website, and you sit around and you wait. You wait for it to be in Google’s index. You wait for it to rank. It’s a frustrating process that can take weeks or months to see those rankings increase. There are a few simple things we can do to help nudge Google along, to help them index it and rank it faster. Some very basic things and some more advanced things too. We’re going to dive right in.
1. URL Inspection / Fetch & Render
So basically, indexing content is not that hard in Google. Google provides us with a number of tools. The simplest and fastest is probably the URL Inspection tool. It’s in the new Search Console, previously Fetch and Render. As of this filming, both tools still exist; Fetch and Render is being deprecated. The URL Inspection tool allows you to submit a URL and tell Google to crawl it. When you do that, they put it in their priority crawl queue. That simply means Google has a list of URLs to crawl, yours goes into the priority queue, and it’s going to get crawled and indexed faster.
2. Sitemaps
Another common technique is simply using sitemaps. If you’re not using sitemaps, it’s one of the easiest, quickest ways to get your URLs indexed. When you have them in your sitemap, you want to let Google know that they’re actually there. There are a number of different techniques that can optimize this process a little bit more.
The first and the most basic one that everybody talks about is simply putting it in your robots.txt file. In your robots.txt, you have a list of directives, and at the end of your robots.txt, you simply say sitemap and you tell Google where your sitemaps are. You can do that for sitemap index files. You can list multiple sitemaps. It’s really easy.
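For instance, the tail end of a robots.txt file might look like this (example.com stands in for your own domain):

User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap_index.xml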
You can also do it using the Search Console Sitemap Report, another report in the new Search Console. You can go in there and you can submit sitemaps. You can remove sitemaps, validate. You can also do this via the Search Console API.
But a really cool way of informing Google of your sitemaps, one that a lot of people don’t use, is simply pinging Google. You can do this right in your browser’s address bar: you type in google.com/ping and append your sitemap URL as a parameter. You can try this out right now with your current sitemaps. Type it into the browser bar and Google will instantly queue that sitemap for crawling, and all the URLs in there should get indexed quickly if they meet Google’s quality standard.
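With a placeholder domain, the full ping URL looks like this:

https://www.google.com/ping?sitemap=https://www.example.com/sitemap.xml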
(BONUS: This wasn’t in the video, but we wanted to include it because it’s pretty awesome)
3. Indexing APIs
Within the past few months, both Google and Bing have introduced new APIs to help speed up and automate the crawling and indexing of URLs.
Both of these solutions open up the potential of massively speeding up indexing by submitting hundreds or thousands of URLs via an API.
While the Bing API is intended for any new or updated URL, Google states that their API is specifically for “either job posting or livestream structured data.” That said, many SEOs like David Sottimano have experimented with the Google API and found it to work with a variety of content types.
If you want to use these indexing APIs yourself, you have a number of potential options:
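As one illustrative option, here’s a minimal sketch of notifying Google’s Indexing API about an updated URL. It assumes you’ve already created a service account authorized for the indexing scope and obtained an OAuth access token; ACCESS_TOKEN and the article URL are placeholders.

// Hypothetical sketch: telling Google's Indexing API that a URL changed.
// Requires a service account authorized for the
// https://www.googleapis.com/auth/indexing scope; the ACCESS_TOKEN
// environment variable is a placeholder for that account's OAuth token.
const https = require("https");

const body = JSON.stringify({
  url: "https://www.example.com/new-article",
  type: "URL_UPDATED",
});

const request = https.request(
  {
    hostname: "indexing.googleapis.com",
    path: "/v3/urlNotifications:publish",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Content-Length": Buffer.byteLength(body),
      Authorization: `Bearer ${process.env.ACCESS_TOKEN}`,
    },
  },
  (res) => res.on("data", (chunk) => process.stdout.write(chunk))
);
request.end(body);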
That covers indexing. Now there are some other ways that you can get your content indexed faster and help it to rank a little higher at the same time.
4. Links from important pages
When you publish new content, the most basic step, if you do nothing else, is to make sure that you are linking to it from important pages. Important pages may be your homepage, your blog, or your resources page; add links from those to the new content. This is a basic step that you want to take, because you don’t want to orphan those pages on your site with no incoming links.
Adding the links tells Google two things. It says we need to crawl this link sometime in the future, and it gets put in the regular crawling queue. But it also makes the link more important. Google can say, “Well, we have important pages linking to this. We have some quality signals to help us determine how to rank it.” So linking from important pages.
5. Update old content
But a step that people oftentimes forget is to not only link from your important pages, but to go back to your older content and find relevant places to put those links. A lot of people add a link from their homepage or from new articles, but they forget the step of going back to the older articles on the site and adding links to the new content.
Now what pages should you add links from? One of my favorite techniques is to use this search operator, where you type in the keywords that your content is about followed by site:example.com. This allows you to find relevant pages on your site that are about your target keywords, and those make really good pages to add links to the new content from.
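For instance, if your new post happened to be about email outreach (a hypothetical topic), a query like this, with your own domain in place of example.com, would surface older pages worth linking from:

email outreach site:example.com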
6. Share socially
A really obvious step: sharing socially. When you have new content, share it socially, because there’s a high correlation between social shares and content ranking. This is especially true on content aggregators like Reddit and Hacker News, which create actual links for Google to crawl. Google can see those signals and that social activity, and it does the same thing as adding links from your own content, except it’s even a little better because they’re external links, external signals.
7. Generate traffic to the URL
This is kind of an advanced technique, which is a little controversial in terms of its effectiveness, but we see it anecdotally working time and time again. That’s simply generating traffic to the new content.
Now there is some debate whether traffic is a ranking signal. There are some old Google patents that talk about measuring traffic, and Google can certainly measure traffic using Chrome; they can see where those visitors are coming from. Take Facebook ads as an example: you launch some new content and drive a massive amount of traffic to it via Facebook ads. You’re paying for that traffic, but in theory Google can see it, because they’re measuring things using the Chrome browser.
When they see all that traffic going to a page, they can say, “Hey, maybe this is a page that we need to have in our index and maybe we need to rank it appropriately.”
Now that our content is indexed, let’s talk about a few ideas for ranking your content faster.
8. Generate search clicks
Along with generating traffic to the URL, you can actually generate search clicks.
Now what do I mean by that? Imagine you share a URL on Twitter. Instead of linking directly to the URL, you link to a Google search result for the keywords you’re trying to rank for. People click the link, land on that search result, and then click on your listing.
You see television commercials do this, like in a Super Bowl commercial they’ll say, “Go to Google and search for Toyota cars 2019.” What this does is Google can see that searcher behavior. Instead of going directly to the page, they’re seeing people click on Google and choosing your result.
This does a couple of things. It helps increase your click-through rate, which may or may not be a ranking signal. But it also helps you rank for auto-suggest queries. So when Google sees people search for “best cars 2019 Toyota,” that might appear in the suggest bar, which also helps you to rank if you’re ranking for those terms. So generating search clicks instead of linking directly to your URL is one of those advanced techniques that some SEOs use.
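In practice, that means sharing a standard Google search URL rather than your page’s URL. Using the query from the commercial example above as a placeholder:

https://www.google.com/search?q=toyota+cars+2019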
9. Target query deserves freshness
When you’re creating the new content, you can help it to rank sooner if you pick terms that Google thinks deserve freshness. It’s best maybe if I just use a couple of examples here.
Consider a user searching for the term “cafes open Christmas 2019.” That’s a query Google wants to deliver very fresh results for. You want the freshest news about cafes and restaurants that are going to be open Christmas 2019, so Google is going to favor pages that were created more recently. When you target those queries, you can maybe rank a little faster.
Compare that to a query like “history of the Bible.” If you Google that right now, you’ll probably find a lot of very old pages, Wikipedia pages. Those results don’t update much, and that’s going to be harder for you to crack into those SERPs with newer content.
The way to tell is simply to type in the queries that you’re trying to rank for and see how old the most recent results are. That will give you an indication of how much freshness Google thinks the query deserves. Choose queries that deserve a little more freshness and you might be able to get in a little sooner.
10. Leverage URL structure
Finally, last tip, this is something a lot of sites do and a lot of sites don’t do because they’re simply not aware of it. Leverage URL structure. When Google sees a new URL, a new page to index, they don’t have all the signals yet to rank it. They have a lot of algorithms that try to guess where they should rank it. They’ve indicated in the past that they leverage the URL structure to determine some of that.
Consider how The New York Times puts all its book reviews under the same URL path, newyorktimes.com/book-reviews. Those URLs have a lot of established ranking signals, and when a new URL is published using the same structure, Google can assign it some temporary signals to rank it appropriately.
If you have URLs that are high authority, maybe it’s your blog, maybe it’s your resources on your site, and you’re leveraging an existing URL structure, new content published using the same structure might have a little bit of a ranking advantage, at least in the short run, until Google can figure these things out.
These are only a few of the ways to get your content indexed and ranking quicker. It is by no means a comprehensive list. There are a lot of other ways. We’d love to hear some of your ideas and tips. Please let us know in the comments below. If you like this video, please share it for me. Thanks, everybody.
Your organic result game is on point, but you’ve been hearing a lot of chatter about SERP features and are curious if they can help grow your site’s visibility — how do you find out? Our SERP Features dashboard will be your one-stop shop for everything feature-related.
Here’s a step-by-step guide on how you can use the dashboard to suss out a SERP feature strategy that’s right for your site.
1. Establish viable sites and segments
For context, let’s say that we’re working for a large supermarket chain with locations across the globe. Once in the dashboard, we’ll immediately look to the Overview module, which will give us a strong indication of whether a SERP feature strategy is viable for any of our keyword segments. We may just find that organic is the road best travelled.
Clicking through our segments, we stumble across one that’s driving a huge amount of share of voice — an estimated 309.8 million views, which is actually up by 33.4 million over the 30-day average.
Since the green section of the chart represents organic share of voice and the grey represents SERP feature share of voice, right away we can see that features are creating a huge amount of visibility. Surprisingly, even more than regular ol’ organic results.
By hovering over each segment of the chart, we can see their exact breakdowns. SERP features are driving a whopping 188.2 million eyeballs, up by 18 million over the 30-day average, while organic results are driving only 121.6 million, having also gained share of voice along the way.
We’re confident that a SERP feature strategy is worth exploring for this segment.
2. Get a lay of the SERP feature landscape
Next, we want to know what the SERP features appearing in our space are, and whether they make sense for us to tackle.
As a supermarket chain, not only do we sell fresh eats from our brick-and-mortar stores, but our site also has a regularly updated blog with delectable recipes, so we’ve got a few SERP features already in mind (can anyone say places and recipe results?).
But, if for some strange reason our SERPs are full of flights and jobs, maybe we’ll move onto a segment that we can have more impact on, and check in on this one another time.
To see what we’re working with, we head to the [Current Day] SERP Features chart, make sure every feature is enabled in the legend, and select SoV: Total from the dropdown, which will show us the total share of voice generated by each feature appearing on our SERPs.
Carousels and knowledge graphs — features that we have little or no control over — might be next on the list, but the ones trailing them aren’t far behind and are winnable. So, we’ll pick our favourite five — places, recipes, list snippets, “People also ask” boxes, and paragraph snippets — to build strategies around, and make sure only they appear on our chart.
Since food and food-related activities tend to be heavy on the visuals, it wouldn’t be wise for us to neglect images and videos entirely, so we’ll also enable them just to creep on. (We’ll think of recipes and AMP recipes as one, and make a mental note to look into an overall AMP strategy at some point.)
But, before we ride off into the sunset with our SERP features just yet, we still need to do a little more research to see whether they’re a long-term relationship option or a mere flash in the pan.
To do this, we look to the SERP Features Over Time chart, take the SoV: Total metric with us, and select a date-range wide enough to give us a good idea of their past behavior. Ideally, we’d love to see that they’re making continual progress.
3. Pinpoint target keywords and tag them
As far as the result types that we care about go, “People also ask” boxes and places appear for most of our keywords, and more keywords to optimize for means more time and effort.
We’re absolutely tickled pink to see that a relatively small number of keywords are responsible for producing all that recipes share of voice — this is the feature we’ll probably want to start with.
To get these groups of keywords, we’ll simply click the SERP feature icons along the bottom of the chart and voila! We’ll see a filtered view of them appear in the Keywords tab, allowing us to create individual tags for them. This way, we can monitor them more closely.
Now we can perform some SEO magic.
4. Chart your daily progress against general trends
As we optimize for our various SERP features, not only can we track our progress, but we can keep an eye on the general happenings of features on our SERPs.
We’ll use modules in the Share of Voice: SERP Features panel for these quick health-checks, customizing them to show only our chosen SERP features, which will make unearthing these insights even easier.
The Top Increases/Decreases module shows us that places, PAAs, and paragraph snippets have gained the most share of voice on our SERPs. The metric for each feature tells us exactly how much movement has been made between the current day and the segment’s 30-day average.
In other words, the features we’ve thrown our lot in with are in good overall health, and snagging one of them could mean more share of voice than we’d originally anticipated.
We’ll keep an eye here to make sure that our features continue to trend up on the SERPs.
But how are we doing?
The Your Top Gains/Losses module tells us that our hard work is paying off for places packs. Not only has this result type grown in influence on the SERPs in general, but we’ve managed to increase our share. Woo!
And while we’ve only made a smidgen of improvement with recipes, it’s still better than the none we had before.
And finally, since our biggest growing SERP feature for the day isn’t necessarily what drives most of our site visibility, we’ll take a quick peek at the Your Primary Source of SoV module to see who our SERP feature superstar is.
We’ll watch the needle to see if we keep making gains — we’re currently only owning an estimated 1.7 million views out of an available 60.5 million — or see whether another SERP feature appears here, usurping places as our top earner.
5. Keep track of ownership over the long-haul
Daily progress reports are great, but we’ll also need a running tally of our successes (and failures) to help us zero-in on when and why things were (or weren’t) working for us.
To do this, we’ll go to the SERP Features Over Time chart, set our metric to Count: Owned and our date-range to whenever we’re curious about, and see how the number of keywords with features that we own has been trending during that period.
Exciting secrets can be so hard to keep. Finally, all of us at Moz have the green light to share with all of you a first glimpse of something we’ve been working on for months behind the scenes. Big inhale, big exhale…
Announcing: the new and improved Moz Local, to be rolled out beginning June 12!
Why is Moz updating the Moz Local platform?
Local search has evolved from caterpillar to butterfly in the seven years since we launched Moz Local. I think we’ve spent the time well, intensively studying both Google’s trajectory and the feedback of enterprise, marketing agency, and SMB customers.
Your generosity in telling us what you need as marketers has inspired us to action. Over the coming months, you’ll be seeing what Moz has learned reflected in a series of rollouts. Stage by stage, you’ll see that we’re planning to give our software the wings it needs to help you fully navigate the dynamic local search landscape and, in turn, grow your business.
We hope you’ll keep gathering together with us to watch Moz Local take full flight — changes will only become more robust as we move forward.
What can I expect from this upgrade?
Beginning June 12th, Moz Local customers will experience a fresh look and feel in the Moz Local interface, plus these added capabilities:
New distribution partners to ensure your data is shared on the platforms that matter most in the evolving local search ecosystem
Listing status and real-time updates to know the precise status of your location data
Automated detection and permanent duplicate closure, taking the manual work out of the process and saving you significant time
Integrations with Google and Facebook to gain deeper insights, reporting, and management for your location’s profiles
An even better data clean-up process to ensure valid data is formatted properly for distribution
A new activity feed to alert you to any changes to your location’s listings
A suggestion engine to provide recommendations to increase accuracy, completeness, and consistency of your location data
Additional features available include:
Managing reviews of your locations to keep your finger on the pulse of what customers are saying
Social posting to engage with consumers and alert them to news, offers, and other updates
Store locator and landing pages to share location data easily with both customers and search engines (available for Moz Local customers with 100 or more locations)
Remember, this is just the beginning. There’s more to come in 2019, and you can expect ongoing communications from us as further new feature sets emerge!
When is it happening?
We’ll be rolling out all the new changes beginning on June 12th. As with some large changes, this update will take a few days to complete, so some people will see the changes immediately while for others it may take up to a week. By June 21st, everyone should be able to explore the new Moz Local experience!
Don’t worry — we’ll have several more communications between now and then to help you prepare. Keep an eye out for our webinar and training materials to help ensure a smooth transition to the new Moz Local.
Are any metrics/scores changing?
Some of our reporting metrics will look different in the new Moz Local. We’ll be sharing more information on these metrics and how to use them soon, but for now, here’s a quick overview of changes you can expect:
Profile Completeness: Listing Score will be replaced by the improved Profile Completeness metric. This new feature will give you a better measurement of how complete your data is, what’s missing from it, and clear prompts to fill in any lacking information.
Improved listing status reporting: Partner Accuracy Score will be replaced by improved reporting on listing status with all of our partners, including continuous information about the data they’ve received from us. You’ll be able to access an overview of your distribution network, so that you can see which sites your business is listed on. Plus, you’ll be able to go straight to the live listing with a single click.
Visibility Index: Though they have similar names, Visibility Score is being replaced by something slightly different: the new and improved Visibility Index, which indicates how well the data you’ve provided us about a location matches the information on your live listings.
New ways to measure and act on listing reach: Reach Score will be leaving us in favor of even more relevant measurement via the Visibility Index and Profile Completeness metrics. The new Moz Local will include more actionable information to ensure your listings are accurate and complete.
As a veteran local SEO, I’m finding the developments taking place with our software particularly exciting because, like you, I see how local search and local search marketing have matured over the past decade.
I’ve closely watched the best minds in our industry moving toward a holistic vision of how authenticity, customer engagement, data, analysis, and other factors underpin local business success. And we’ve all witnessed Google’s increasingly sophisticated presentation of local business information evolve and grow. It’s been quite a ride!
At every level of local commerce, owners and marketers deserve tools that bring order out of what can seem like chaos. We believe you deserve software that yields strategy. As our CEO, Sarah Bird, recently said of Moz,
“We are big believers in the power of local SEO.”
So the secret is finally out, and you can see where Moz is heading with the local side of our product lineup. It’s our serious plan to devote everything we’ve got into putting the power of local SEO into your hands.
In 2018, Google reported an incredible 3,234 improvements to search. That’s more than 8 times the number of updates they reported in 2009 — less than a decade ago — and an average of almost 9 per day. How have algorithm updates evolved over the past decade, and how can we possibly keep tabs on all of them? Should we even try?
To kick this off, here’s a list of every confirmed count we have (sources at end of post):
2018 – 3,234 “improvements”
2017 – 2,453 “changes”
2016 – 1,653 “improvements”
2013 – 890 “improvements”
2012 – 665 “launches”
2011 – 538 “launches”
2010 – 516 “changes”
2009 – 350–400 “changes”
Unfortunately, we don’t have confirmed data for 2014-2015 (if you know differently, please let me know in the comments).
A brief history of update counts
Our first peek into this data came in spring of 2010, when Google’s Matt Cutts revealed that “on average, [Google] tends to roll out 350–400 things per year.” It wasn’t an exact number, but given that SEOs at the time (and to this day) were tracking at most dozens of algorithm changes, the idea of roughly one change per day was eye-opening.
In fall of 2011, Eric Schmidt was called to testify before Congress, and revealed our first precise update count and an even more shocking scope of testing and changes:
“To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm.”
Later, Google would reveal similar data in an online feature called “How Search Works.” Unfortunately, some of the earlier years are only available via the Internet Archive, but here’s a screenshot from 2012:
Note that Google uses “launches” and “improvements” somewhat interchangeably. This diagram provided a fascinating peek into Google’s process, and also revealed a startling jump from 13,311 precision evaluations (changes that were shown to human evaluators) to 118,812 in just two years.
Is the Google algorithm heating up?
Since MozCast has kept the same keyword set since almost the beginning of data collection, we’re able to make some long-term comparisons. The graph below represents five years of temperatures. Note that the system was originally tuned (in early 2012) to an average temperature of 70°F. The redder the bar, the hotter the temperature …
You’ll notice that the temperature ranges aren’t fixed — instead, I’ve split the label into eight roughly equal buckets (i.e. they represent the same number of days). This gives us a little more sensitivity in the more common ranges.
The trend is pretty clear. The latter half of this 5-year timeframe has clearly been hotter than the first half. While the warming trend is evident, though, it’s not the steady increase over time that Google’s update counts might suggest. Instead, we see a stark shift in the fall of 2016 and a very hot summer of 2017. More recently, we’ve actually seen signs of cooling. Below are the means and medians for each year (note that 2014 and 2019 are partial years):
2019 – 83.7° / 82.0°
2018 – 89.9° / 88.0°
2017 – 94.0° / 93.7°
2016 – 75.1° / 73.7°
2015 – 62.9° / 60.3°
2014 – 65.8° / 65.9°
Note that search engine rankings are naturally noisy, and our error measurements tend to be large (making day-to-day changes hard to interpret). The difference from 2015 to 2017, however, is clearly significant.
Are there really 9 updates per day?
No, there are only 8.86 (that’s 3,234 changes divided by 365 days) – feel better? Ok, that’s probably not what you meant. Even back in 2009, Matt Cutts said something pretty interesting that seems to have been lost in the mists of time…
“We might batch [algorithm changes] up and go to a meeting once a week where we talk about 8 or 10 or 12 or 6 different things that we would want to launch, but then after those get approved … those will roll out as we can get them into production.”
In 2016, I did a study of algorithm flux that demonstrated a weekly pattern evident during clearer episodes of ranking changes. From a software engineering standpoint, this just makes sense — updates have to be approved and tend to be rolled out in batches. So, while measuring a daily average may help illustrate the rate of change, it probably has very little basis in the reality of how Google handles algorithm updates.
Do all of these algo updates matter?
Some changes are small. Many improvements are likely not even things we in the SEO industry would consider “algorithm updates” — they could be new features, for example, or UI changes.
As SERP verticals and features evolve, and new elements are added, there are also more moving parts subject to being fixed and improved. Local SEO, for example, has clearly seen an accelerated rate of change over the past 2-3 years. So, we’d naturally expect the overall rate of change to increase.
A lot of this is also in the eye of the beholder. Let’s say Google makes an update to how they handle misspelled words in Korean. For most of us in the United States, that change isn’t going to be actionable. If you’re a Korean brand trying to rank for a commonly misspelled, high-volume term, this change could be huge. Some changes also are vertical-specific, representing radical change for one industry and little or no impact outside that niche.
On the other hand, you’ll hear comments in the industry along the lines of “There are 3,000 changes per year; stop worrying about it!” To me that’s like saying “The weather changes every day; stop worrying about it!” Yes, not every weather report is interesting, but I still want to know when it’s going to snow or if there’s a tornado coming my way. Recognizing that most updates won’t affect you is fine, but it’s a fallacy to stretch that into saying that no updates matter or that SEOs shouldn’t care about algorithm changes.
Ultimately, I believe it helps to know when major changes happen, if only to understand whether rankings shifted due something we did or something Google did. It’s also clear that the rate of change has accelerated, no matter how you measure it, and there’s no evidence to suggest that Google is slowing down.
In this article, I’ll be taking a fresh look at PWAs. As well as exploring implications for both SEO and usability, I’ll be showcasing some modern frameworks and build tools which you may not have heard of, and suggesting ways in which we need to adapt if we’re to put ourselves at the technological forefront of the web.
1. Recap: PWAs, SPAs, and service workers
Progressive Web Apps are essentially websites which provide a user experience akin to that of a native app. Features like push notifications enable easy re-engagement with your audience, while users can add their favorite sites to their home screen without the complication of app stores. PWAs can continue to function offline or on low-quality networks, and they allow a top-level, full-screen experience on mobile devices which is closer to that offered by native iOS and Android apps.
Best of all, PWAs do this while retaining – and even enhancing – the fundamentally open and accessible nature of the web. As suggested by the name they are progressive and responsive, designed to function for every user regardless of their choice of browser or device. They can also be kept up-to-date automatically and — as we shall see — are discoverable and linkable like traditional websites. Finally, it’s not all or nothing: existing websites can deploy a limited subset of these technologies (using a simple service worker) and start reaping the benefits immediately.
The spec is still fairly young, and naturally, there are areas which need work, but that doesn’t stop them from being one of the biggest advancements in the capabilities of the web in a decade. Adoption of PWAs is growing rapidly, and organizations are discovering the myriad of real-world business goals they can impact.
You can read more about the features and requirements of PWAs over on Google Developers, but two of the key technologies which make PWAs possible are:
The App Shell Architecture: Commonly achieved with a JavaScript framework like React or Angular, this is a way of building single page apps (SPAs) which separates the core application logic and infrastructure from the actual content. Think of the app shell as the minimal HTML, CSS, and JavaScript your app needs to function; a skeleton of the UI which can be cached.
Service Workers: A special script that your browser runs in the background, separate from your page. It essentially acts as a proxy, intercepting and handling network requests from your page programmatically.
Note that these technologies are not mutually exclusive; the single page app model (brought to maturity with AngularJS in 2010) obviously predates service workers and PWAs by some time. As we shall see, it’s also entirely possible to create a PWA which isn’t built as a single page app. For the purposes of this article, however, we’re going to be focusing on the ‘typical’ approach to developing modern PWAs, exploring the SEO implications — and opportunities — faced by teams that choose to join the rapidly-growing number of organizations that make use of the two technologies described above.
We’ll start with the app shell architecture and the rendering implications of the single page app model.
2. The app shell architecture
Single page apps typically handle navigation client-side, using the History API to update the URL in the address bar without triggering a full page load. URL handling of this kind is largely a solved problem for search.
// Run this in your console to modify the URL in your
// browser - note that the page doesn't actually reload.
history.pushState(null, "Page 2", "/page2.html");
The bigger problem facing SEO today is actually much easier to understand: rendering content, namely when and how it gets done.
Note that when I refer to rendering here, I’m referring to the process of constructing the HTML. We’re focusing on how the actual content gets to the browser, not the process of drawing pixels to the screen.
In the early days of the web, things were simpler on this front. The server would typically return all the HTML that was necessary to render a page. Nowadays, however, many sites which utilize a single page app framework deliver only minimal HTML from the server and delegate the heavy lifting to the client (be that a user or a bot). Given the scale of the web, this requires a lot of time and computational resources, and as Google made clear at its I/O conference in 2018, this poses a major problem for search engines: Googlebot crawls and indexes the initial HTML of a page first, but rendering of JavaScript-powered content is deferred until resources are available, at which point it is indexed in a second wave.
On larger sites, this second wave of indexation can sometimes be delayed for several days. On top of this, you are likely to encounter a myriad of problems with crucial information like canonical tags and metadata being missed completely. I would highly recommend watching the video of Google’s excellent talk on this subject for a rundown of some of the challenges faced by modern search crawlers.
But server-side rendering is a concept which is frequently misunderstood…
“Implement server-side rendering”
This is a common SEO audit recommendation which I often hear thrown around as if it were a self-contained, easily-actioned solution. At best it’s an oversimplification of an enormous technical undertaking, and at worst it’s a misunderstanding of what’s possible/necessary/beneficial for the website in question. Server-side rendering is an outcome of many possible setups and can be achieved in many different ways; ultimately, though, we’re concerned with getting our server to return static HTML.
So, what are our options? Let’s break down the concept of server-side rendered content a little and explore our options. These are the high-level approaches which Google outlined at the aforementioned I/O conference:
Dynamic Rendering — Here, normal browsers get the ‘standard’ web app which requires client-side rendering while bots (such as Googlebot and social media services) are served with static snapshots. This involves adding an additional step onto your server infrastructure, namely a service which fetches your web app, renders the content, then returns that static HTML to bots based on their user agent (i.e. UA sniffing). Historically this was done with a service like PhantomJS (now deprecated and no longer developed), while today Puppeteer (headless Chrome) can perform a similar function. The main advantage is that it can often be bolted into your existing infrastructure.
Hybrid Rendering — Here, both users and bots receive a prerendered, fully static version of the page on the initial request. All subsequent navigation is then handled client-side by the app, with fresh content requested as needed.
The latter is cleaner, doesn’t involve UA sniffing, and is Google’s long-term recommendation. It’s also worth clarifying that ‘hybrid rendering’ is not a single solution — it’s an outcome of many possible approaches to making static prerendered content available server-side. Let’s look at a couple of ways such an outcome can be achieved.
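Before that, here’s a minimal sketch of the first option, dynamic rendering, using Node, Express, and Puppeteer. The bot list, port, and directory names here are illustrative, and a production setup would also need caching and error handling:

const express = require("express");
const puppeteer = require("puppeteer");

const app = express();

// A crude check for known crawlers (i.e. UA sniffing)
const BOT_UA = /googlebot|bingbot|twitterbot|facebookexternalhit/i;

// Fetch the client-rendered app in headless Chrome and return
// the fully rendered HTML as a static snapshot
async function renderSnapshot(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const html = await page.content();
  await browser.close();
  return html;
}

app.use(async (req, res, next) => {
  if (BOT_UA.test(req.headers["user-agent"] || "")) {
    // Bots get a prerendered static snapshot of the page...
    const html = await renderSnapshot(`http://localhost:8080${req.originalUrl}`);
    return res.send(html);
  }
  next(); // ...while normal browsers get the client-rendered app
});

// Serve the single page app itself to everyone else
app.use(express.static("dist"));
app.listen(8080);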
As for hybrid rendering, one way to achieve it is the isomorphic (or ‘universal’) JavaScript application, in which the same code runs on both the server (via Node.js) and the client: the server returns fully rendered HTML on the initial request, and the client-side app then takes over for subsequent navigation. It’s a powerful setup, but a demanding one. So, what other options are there? If you can’t justify the time or expense of a full isomorphic setup, or if it’s simply overkill for what you’re trying to achieve, are there any other ways you can reap the benefits of the single page app model — and a hybrid rendering setup — without sabotaging your SEO?
Having rendered content available server-side doesn’t necessarily mean that the rendering process itself needs to happen on the server. All we need is for rendered HTML to be there, ready to serve to the client; the rendering process itself can happen anywhere you like. With a JAMstack approach, rendering of your content into HTML happens as part of your build process.
A great example is GatsbyJS, which is built in React and GraphQL. I won’t go into too much detail, but I would encourage everyone who’s read this far to check out their homepage and excellent documentation. It’s a well-supported tool with a reasonable learning curve, an active community (a feature-packed v2.0 was released in September), an extensible plugin-based architecture, rich integrations with many CMSs, and it allows developers to utilize modern frameworks like React without sabotaging their SEO. There’s also Gridsome, based on VueJS, and React Static which — you guessed it — uses React.
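To give a flavor of the model, here’s a minimal sketch of a Gatsby page component (the file path and content are illustrative). At build time, Gatsby renders it to static HTML, so fully formed content is served without any runtime server-side rendering:

// src/pages/about.js — a minimal Gatsby page component.
// At build time this React component is rendered to static HTML
// (e.g. public/about/index.html), so crawlers and users alike
// receive fully rendered content on the initial request.
import React from "react";

export default () => (
  <main>
    <h1>About us</h1>
    <p>This content is prerendered at build time, not in the browser.</p>
  </main>
);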
Enterprise-level adoption of these platforms looks set to grow; GatsbyJS was used by Nike for their Just Do It campaign, Airbnb for their engineering site airbnb.io, and Braun have even used it to power a major e-commerce site. Finally, our friends at SEOmonitor used it to power their new website.
3. Service Workers
First of all, I should clarify that the two technologies we’re exploring — SPAs and service workers — are not mutually exclusive. Together they underpin what we commonly refer to as a Progressive Web App, yes, but it’s also possible to have a PWA which isn’t an SPA. You could also integrate a service worker into a traditional static website (i.e. one without any client-side rendered content), which is something I believe we’ll see happening a lot more in the near future. Finally, service workers operate in tandem with other technologies like the Web App Manifest, something that my colleague Maria recently explored in more detail in her excellent guide to PWAs and SEO.
Ultimately, though, it is service workers which make the most exciting features of PWAs possible. They’re one of the most significant changes to the web platform in its history, and everyone whose job involves building, maintaining, or auditing a website needs to be aware of this powerful new set of technologies. If, like me, you’ve been eagerly checking Jake Archibald’s Is Service Worker Ready page for the last couple of years and watching as adoption by browser vendors has grown, you’ll know that the time to start building with service workers is now.
We’re going to explore what they are, what they can do, how to implement them, and what the implications are for SEO.
What can service workers do?
Intercepting network requests and deciding what to do with them programmatically. The worker might go to network as normal, or it might rely solely on the cache. It could even fabricate an entirely new response from a variety of sources. That includes constructing HTML.
Handling push notifications, similar to a native app. This means websites can get permission from users to deliver notifications, then rely on the service worker to receive messages and execute them even when the browser is closed.
Executing background sync, deferring network operations until connectivity has improved. This might be an ‘outbox’ for a webmail service or a photo upload facility. No more “request failed, please try again later” – the service worker will handle it for you at an appropriate time.
The benefits of these kinds of features go beyond the obvious usability perks. As well as driving adoption of HTTPS across the web (all the major browsers will only register service workers on the secure protocol), service workers are transformative when it comes to speed and performance. They underpin new approaches and ideas like Google’s PRPL Pattern, since we can maximize caching efficiency and minimize reliance on the network. In this way, service workers will play a key role in making the web fast and accessible for the next billion web users.
So yeah, they’re an absolute powerhouse.
Implementing a service worker
Rather than doing a bad job of writing a basic tutorial here, I’m instead going to link to some key resources. After all, you are in the best position to know how deep your understanding of service workers needs to be.
The key thing to understand — and which you’ll realize very quickly once you start experimenting — is that service workers hand over an incredible level of control to developers. Unlike previous attempts to solve the connectivity conundrum (such as the ill-fated AppCache), service workers don’t enforce any specific patterns on your work; they’re a set of tools for you to write your own solutions to the problems you’re facing.
One consequence of this is that they can be very complex. Registering and installing a service worker is not a simple exercise, and any attempts to cobble one together by copy-pasting from StackExchange are doomed to failure (seriously, don’t do this). There’s no such thing as a ready-made service worker for your site — if you’re to author a suitable worker, you need to understand the infrastructure, architecture, and usage patterns of your website. Uncle Ben, ever the web development guru, said it best: with great power comes great responsibility.
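That said, the basic registration call on the page is small; the real complexity lives in the worker script itself. A minimal sketch, assuming your worker script lives at /sw.js at the root of your origin:

// Register a service worker once the page has loaded.
// HTTPS is required, and the worker's scope is determined
// by where the script lives (here, the whole origin).
if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    navigator.serviceWorker
      .register("/sw.js")
      .then((reg) => console.log("SW registered with scope:", reg.scope))
      .catch((err) => console.error("SW registration failed:", err));
  });
}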
One last thing: you’ll probably be surprised how many sites you visit are already using a service worker. Head to chrome://serviceworker-internals/ in Chrome or about:debugging#workers in Firefox to see a list.
Service workers and SEO
In terms of SEO implications, the most relevant thing about service workers is probably their ability to hijack requests and modify or fabricate responses using the Fetch API. What you see in ‘View Source’ and even on the Network tab is not necessarily a representation of what was returned from the server. It might be a cached response or something constructed by the service worker from a variety of different sources.
To see this in action, open https://www.gatsbyjs.org/ in your browser, use the site navigation to reach the Docs page, and hit View Source; you’ll find that the markup doesn’t reflect the content you’re reading. Now run a curl request for the same URL (https://www.gatsbyjs.org/docs/), or fetch the page using Screaming Frog. All the content is there, along with proper title tags, canonicals, and everything else you might expect from a page rendered server-side. This is what a crawler like Googlebot will see too.
This is because the website uses hybrid rendering and a service worker — installed in your browser — is handling subsequent navigation events. There is no need for it to fetch the raw HTML for the Docs page from the server because the client-side application is already up-and-running – thus, View Source shows you what the service worker returned to the application, not what the network returned. Additionally, these pages can be reloaded while you’re offline thanks to the service worker’s effective use of the cache.
You can easily spot which responses came from the service worker using the Network tab — note the ‘from ServiceWorker’ line below.
On the Application tab, you can see the service worker which is running on the current page along with the various caches it has created. You can disable or bypass the worker and test any of the more advanced functionality it might be using. Learning how to use these tools is an extremely valuable exercise; I won’t go into details here, but I’d recommend studying Google’s Web Fundamentals tutorial on debugging service workers.
I’ve made a conscious effort to keep code snippets to a bare minimum in this article, but grant me this one. I’ve put together an example which illustrates how a simple service worker might use the Fetch API to handle requests and the degree of control which we’re afforded:
I hope that this (hugely simplified and non-production ready) example illustrates a key point, namely that we have extremely granular control over how resource requests are handled. In the example above we’ve opted for a simple try-cache-first, fall-back-to-network, fall-back-to-custom-page pattern, but the possibilities are endless. Developers are free to dictate how requests should be handled based on hostnames, directories, file types, request methods, cache freshness, and loads more. Responses – including entire pages – can be fabricated by the service worker. Jake Archibald explores some common methods and approaches in his Offline Cookbook.
The time to learn about the capabilities of service workers is now. The skillset required for modern technical SEO has a fair degree of overlap with that of a web developer, and today, a deep understanding of the dev tools in all major browsers – including service worker debugging – should be regarded as a prerequisite.
4. Wrapping Up
SEOs need to adapt
Until recently, it’s been too easy to get away with not understanding the consequences and opportunities posed by PWAs and service workers.
Instead of criticizing 404 handling or internal linking of a single page app framework, for example, it would be far better to be able to offer meaningful recommendations which are grounded in an understanding of how they actually work. As Jono Alderson observed in his talk on the Democratization of SEO, contributions to open source projects are more valuable in spreading appreciation and awareness of SEO than repeatedly fixing the same problems on an ad-hoc basis.
One last thing I’d like to mention: PWAs are such a transformative set of technologies that they obviously have consequences which reach far beyond just SEO. Other areas of digital marketing are directly impacted too, and from my standpoint, one of the most interesting is analytics.
If your website is partially or fully functional while offline, have you adapted your analytics setup to account for this? If push notification subscriptions are a KPI for your website, are you tracking this as a goal? Remembering that service workers do not have access to the Window object, tracking these events is not possible with ‘normal’ tracking code. Instead, it’s necessary to configure your service worker to build hits using the Measurement Protocol, queue them if necessary, and send them directly to the Google Analytics servers.
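Here’s a minimal sketch of what that can look like inside a worker, using the v1 Measurement Protocol; the property ID and client ID handling are placeholders:

// Inside sw.js — service workers can't use analytics.js (no Window),
// so we build a hit manually and POST it to the Measurement Protocol
function trackEvent(category, action) {
  const hit = new URLSearchParams({
    v: "1",            // protocol version
    tid: "UA-XXXXX-Y", // your GA property ID (placeholder)
    cid: "anonymous",  // client ID — a real setup would generate and persist a UUID
    t: "event",        // hit type
    ec: category,      // event category
    ea: action,        // event action
  });
  return fetch("https://www.google-analytics.com/collect", {
    method: "POST",
    body: hit.toString(),
  });
}

// e.g. record received push notifications as events
self.addEventListener("push", (event) => {
  event.waitUntil(trackEvent("Push Notifications", "received"));
});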
We know how important page speed is to Google, but why is that, exactly? With increasing benefits to SEO, UX, and customer loyalty that inevitably translate to revenue, there are more reasons than ever to both focus on site speed and become adept at communicating its value to devs and stakeholders. In today’s Whiteboard Friday, Sam Marsden takes us point-by-point through how Google understands speed metrics, the best ways to access and visualize that data, and why it all matters.
Hi, Moz fans, and welcome to another Whiteboard Friday. My name is Sam Marsden, and I work as an SEO at web crawling platform DeepCrawl. Today we’re going to be talking about how Google understands speed and also how we can visualize some of the performance metrics that they provide to benefit things like SEO, to improve user experience, and to ultimately generate more revenue from your site.
Google & speed
Let’s start by taking a look at how Google actually understands speed. We all know that a faster site generally results in a better user experience. But Google hasn’t actually directly been incorporating that into their algorithms until recently. It wasn’t until the mobile speed update, back in July, that Google really started looking at speed. Now it’s likely only a secondary ranking signal now, because relevance is always going to be much more important than how quickly the page actually loads.
But the interesting thing with this update was that Google has actually confirmed some of the details about how they understand speed. We know that it’s a mix of lab and field data. They’re bringing in lab data from Lighthouse, from the Chrome dev tools and mixing that with data from anonymized Chrome users. So this is available in the Chrome User Experience Report, otherwise known as CrUX.
Now this is a publicly available database, and it includes five different metrics. You’ve got first paint, which is when anything loads on the page. You’ve then got first contentful paint, which is when some text or an image loads. Then you’ve got DOM content loaded, which is, as the name suggests, once the DOM is loaded. You’ve also got onload, which is when any additional scripts have loaded. That’s kind of like the full page load. The fifth and final metric is first input delay, which is the time from when a user first interacts with your site to when the browser is actually able to respond to that interaction.
These are the metrics that make up the CrUX database, and you can actually access this CrUX data in a number of different ways.
Where is CrUX data?
1. PageSpeed Insights
The first and easiest way is to go to PageSpeed Insights. Now you just plug in whatever page you’re interested in, and it’s going to return some of the CrUX metrics along with Lighthouse data and a bunch of recommendations about how you can actually improve the performance of your site. That’s really useful, but it just provides a snapshot; it’s not really suited to ongoing monitoring as such.
2. CrUX dashboard
Another way that you can access CrUX data is through the CrUX dashboard, and this provides all of the five different metrics from the CrUX database. What it does is it looks at the percentage of page loads, splitting them out into slow, average, and fast loads. This also trends it from month to month so you can see how you’re tracking, whether you’re getting better or worse over time. So that’s really good. But the problem with this is you can’t actually manipulate the visualization of that data all that much.
3. Accessing the raw data
To do that and get the most out of the CrUX database, you need to query the raw data. Because it’s a freely available database, you can write a SQL query and run it against the CrUX dataset in BigQuery. You can then export the results into Google Sheets, and that can be pulled into Data Studio, where you can create all of these amazing graphs to visualize how the performance of your site is trending over time.
It might sound like a bit of a complicated process, but there are a load of great guides out there. So you’ve got Paul Calvano, who has a number of video tutorials for getting started with this process. There’s also Rick Viscomi, who’s got a CrUX Cookbook, and what this is, is a number of templated SQL queries, where you just need to plug in the domains that you’re interested in and then you can put this straight into BigQuery.
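For a flavor of what those queries look like, here’s a simplified example along the lines of the Cookbook’s; the origin and the month table (there’s one table per month) are placeholders:

-- Distribution of first contentful paint for one origin,
-- from the October 2018 CrUX table (placeholder values)
SELECT
  bin.start AS fcp_ms,
  SUM(bin.density) AS density
FROM
  `chrome-ux-report.all.201810`,
  UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
  origin = 'https://www.example.com'
GROUP BY
  bin.start
ORDER BY
  bin.start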
Also, if you wanted to automate this process, rather than exporting it into Google Sheets, you could pull this into Google Cloud Storage and also update the SQL query so this pulls in on a monthly basis. That’s where you kind of want to get to with that.
Once you’ve got to this stage and you’re able to visualize the data, what should you actually do with it? Well, I’ve got a few different use cases here.
1. Get buy-in
The first is you can get buy-in from management, from clients, whoever you report into, for various optimization work. If you can show that you’re lagging behind competitors, for example, that might be a good basis for getting some optimization initiatives rolling. You can also use the Revenue Impact Calculator, which is a really simple Google tool that allows you to put in various details about your site and then shows you how much more money you could be making if your site were X% faster.
2. Inform devs
Once you’ve got the buy-in, you can use the CrUX visualizations to inform developers. What you want to do here is show exactly the areas that your site is falling down. Where are these problem areas? It might be, for example, that first contentful paint is suffering. You can go to the developers and say, “Hey, look, we need to fix this.” If they come back and say, “Well, our independent tests show that the site is performing fine,” you can point to the fact that it’s from real users. This is how people are actually experiencing your site.
3. Communicate impact
Thirdly and finally, once you’ve got these optimization initiatives going, you can communicate the impact that they’re actually having on performance and on business metrics. You could trend the various performance metrics from month to month and then overlay business metrics such as conversion rates or bounce rates, showing them side-by-side so you can see whether they improve as the performance of the site improves.
Faster site = better UX, better customer loyalty, and growing SEO benefit
These are different ways that you can visualize the CrUX database, and it’s really worthwhile, because if you have a faster site, then it’s going to result in better user experience. It’s going to result in better customer loyalty, because if you’re providing your users with a great experience, then they’re actually more likely to come back to you rather than going to one of your competitors.
There’s also a growing SEO benefit. We don’t know how Google is going to change their algorithms going forward, but I wouldn’t be surprised if speed is coming in more and more as a ranking signal.
This is how Google understands page speed, some ways that you can visualize the data from the CrUX database, and some of the reasons why you would want to do that.
I hope that’s been helpful. It’s been a pleasure doing this. Until the next time, thank you very much.
Thanks to a mutual handshake between Google, Microsoft, Yahoo, and Yandex, we have a library of fields we can use to highlight and more aptly define the information on web pages. By utilizing structured data, we provide search engines with more confidence (i.e. a better understanding of page content), as Alexis Sanders explains in this wonderful podcast. Doing so can have a number of positive effects, including eye-catching SERP displays and improved rankings.
If you’re an SEO, how confident are you in auditing or creating structured data markup using the Schema.org vocabulary? If you just shifted in your seat uncomfortably, then this is the guide for you. In it, I aim to demystify some of the syntax of JSON-LD as well as share useful tips on creating structured data for web pages.
Understanding the syntax of JSON-LD
While there’s a couple of different ways you can mark up on-page content, this guide will focus on the format Google prefers; JSON-LD. Additionally, we won’t get into all of its complexities, but rather, those instances most commonly encountered by and useful to SEOs.
The first thing you’ll notice after the opening <script> tag is an open curly brace. And, just before the closing </script> tag, a closed curly brace.
All of our structured data will live inside these two curly braces. As we build out our markup, we’re likely to see additional curly braces, and that’s where indentation really helps keep things from getting too confusing!
The next thing you’ll notice is quotation marks. Every time we call a Schema type, or a property, or fill in a field, we’ll wrap the information in quotation marks.
Next up are colons (no giggling). Basically, every time we call a type or a property, we then need to use a colon to continue entering information. It’s a field separator.
Commas are used to set the expectation that another value (i.e. more information) is coming.
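Putting those rules together, a simple example looks like this (the logo URL is a placeholder):

{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Go Fish Digital",
  "url": "https://gofishdigital.com/",
  "logo": "https://gofishdigital.com/logo.png"
}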
Notice that after the informational field for the “logo” property is filled, there is no comma. That is because there is no additional information to be stated.
When we’ve called a property that includes two or more entries, we can use an open bracket and a closed bracket as an enclosure.
See how we’ve included Go Fish Digital’s Facebook and Twitter profiles within the “sameAs” property? Since there’s more than one entry, we enclose the two entries within brackets (I call this an array). If we only included the Facebook URL, we wouldn’t use brackets. We’d simply wrap the value (URL) in quotes.
Inner curly braces
Whenever we’ve called a property that has an expected “type,” we’ll use inner curly braces to enclose the information.
In the example above, the “contactPoint” property was called. This particular property has an expected type of “ContactPoint.” Isn’t that nice and confusing? We’ll go over that in more detail later, but for now just notice that after the “contactPoint” property is called, an inner curly brace was opened. On the very next line, you’ll see the ContactPoint type called. The properties within that type were stated (“telephone” and “contactType”), and then the inner curly braces were closed out.
There’s something else in this use case that, if you can understand now, will save you a lot of trouble in the future:
Look how there’s no comma after “customer service.” That’s because there is no more information to share within that set. But there is a comma after the closed inner curly brace, since there is more data to come (specifically, the “sameAs” property).
Creating structured data markup with an online generator
Now that we know a little bit about syntax, let’s start creating structured data markup.
Online generators are great if you’re a beginner or as a way to create baseline markup to build off of (and to save time). My favorite is the Schema markup generator from Merkle, and it’s the one I’ll be using for this portion of the guide.
Next, you’ll need to choose a page and a markup type. For this example, I’ve chosen https://gofishdigital.com/ as our page and Organization as our markup type.
After filling in some information, our tool has created some fantastic baseline markup for the home page:
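It looked something like this (the contact details and profile URLs are illustrative):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Go Fish Digital",
  "url": "https://gofishdigital.com/",
  "logo": "https://gofishdigital.com/logo.png",
  "contactPoint": {
    "@type": "ContactPoint",
    "telephone": "+1-555-555-5555",
    "contactType": "customer service"
  },
  "sameAs": [
    "https://www.facebook.com/gofishdigital/",
    "https://twitter.com/gofishdigital"
  ]
}
</script>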
Hopefully, after our lesson on syntax, you can read most (or all) of this example without a problem!
Creating custom structured data markup with a text editor
Baseline markup will do just fine, but we can go beyond the online generator presets, take full control, and write beautiful custom structured data for our page. On https://schema.org/Organization, you’ll see all the available properties that fall under the Organization markup type. That’s a lot more than the online tools offer, so let’s roll up our sleeves and get into some trouble!
Download a text editor
At this point, we have to put the training wheels away and leave the online tools behind (single tear). We need somewhere we can edit and create custom markup. I’m not going to be gentle about this — get a text editor NOW. It is well worth the money and will serve you far beyond structured data markup. I’ll be using my favorite text editor, Sublime Text 3.
I’ve gone ahead and pasted some baseline Organization markup from the generator into Sublime Text. Here’s what it looks like:
Adding properties: Easy mode
The page at https://schema.org/Organization has all the fields available to us for the Organization type. Our baseline markup doesn’t have email information, so I reviewed the Schema page and scanned its table of properties.
The first column shows that there is a property for email. Score! I’ll add a comma after our closing bracket to set up the expectation for more information, then I’ll add the “email” property:
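Like so (reusing the illustrative values from earlier, with a placeholder address):

"sameAs": [
  "https://www.facebook.com/gofishdigital/",
  "https://twitter.com/gofishdigital"
],
"email": "info@example.com"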
The second column on Schema.org is the “expected type.” This time, it says “text,” which means we can simply type in the email address. Gosh, I love it when it’s easy.
Let’s keep pushing. I want to make sure our phone number is part of this markup, so let’s see if there’s a property for that…
Bingo. And the expected type is simply “text.” I’m going to add a comma after the “email” property and toss in “telephone.” No need to highlight anything in this example; I can tell you’re getting the hang of it.
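The result, again with placeholder values:

"email": "info@example.com",
"telephone": "+1-555-555-5555"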
Adding properties: Hard mode
Next, we’re going to add a property that’s a bit more complicated — the “address” property. Just like “email” and “telephone,” let’s track it on https://schema.org/Organization.
So, I do see “text,” but I also see an expected type of “PostalAddress.” The name of the game with data markup is: if you can be more specific, be more specific. Let’s click on “PostalAddress” and see what’s there.
I see a number of properties that require simple text values. Let’s choose some of these properties and add in our “address” markup!
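Here’s the markup I ended up with, using placeholder address details:

"telephone": "+1-555-555-5555",
"address": {
  "@type": "PostalAddress",
  "streetAddress": "123 Main Street",
  "addressLocality": "Raleigh",
  "addressRegion": "NC",
  "postalCode": "27601",
  "addressCountry": "US"
}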
Here are the steps I took to add this markup:
Placed a comma after the “telephone” property
Called the “address” property
Since the “address” property has an expected type, I opened inner curly braces
Called the “PostalAddress” type
Called the properties within the “PostalAddress” type
Closed out the inner curly braces
Can you spot all that in the example above? If so, then congratulations — you have completed Hard Mode!
Creating a complex array
In our discussion about brackets, I mentioned an array. Arrays can be used when a property (e.g. “sameAs”) has two or more entries.
That’s a great example of a simple array. But there will be times when we have to create complex arrays. For instance, Go Fish Digital has two different locations. How would we create an array for that?
It’s not all that complex if we break it down. After the North Carolina information, you’ll see a closed inner curly brace. I just entered a comma and then added the same type (PostalAddress) and properties for the Virginia location. Since two entries were made for the “address” property, I enclosed the entire thing in brackets.
Creating a node array using @graph
On April 16th, 2019, Joost de Valk from Yoast announced the arrival of Yoast SEO 11.0, which boasted new structured data markup capabilities. You can get an overview of the update in this post and from this video. However, I’d like to dive deeper into a particular technique that Yoast is utilizing to offer search engines fantastically informative, connected markup: creating a node array using @graph (*the crowd gasps).
The code opens with “@graph” and then an open bracket, which calls an array. This is the same technique used in the section above titled “Creating a Complex Array.” With the array now open, you’ll see a series of nodes (or, Schema types):
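Stripped down to a skeleton, the structure looks something like this (only the two “@id” URLs discussed below are taken from Yoast’s markup; the other values, and the properties omitted from each node, are illustrative):

{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://yoast.com/#organization"
    },
    {
      "@type": "WebSite",
      "@id": "https://yoast.com/#website"
    },
    {
      "@type": "WebPage",
      "@id": "https://yoast.com/wordpress-seo/#webpage"
    },
    {
      "@type": "Article",
      "@id": "https://yoast.com/wordpress-seo/#article"
    }
  ]
}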
I’ve separated each (see below) so you can easily see how the array is organized. There are plenty of properties called within each node, but the real magic is with “@id.”
Under the WebSite node, they call “@id” and state the following URL: https://yoast.com/#website. Later, after they’ve established the WebPage node, they say the web page is part of the yoast.com website with the following line:
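That connection is made with the “isPartOf” property, along these lines:

"isPartOf": { "@id": "https://yoast.com/#website" }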
How awesome is that? They established information about the website and a specific web page, and then made a connection between the two.
Yoast does the same thing under the Article node. First, under WebPage, they call “@id” again and state the URL as https://yoast.com/wordpress-seo/#webpage. Then, under Article, they tell search engines that the article (or, blog post) is part of the web page with the following code:
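Again along the same lines, this time referencing the web page’s “@id”:

"isPartOf": { "@id": "https://yoast.com/wordpress-seo/#webpage" }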
When you test your markup in Google’s Structured Data Testing Tool, click the error report and the tool will highlight the field after the error. For example, a missing comma after the value of an email field (e.g. “[email protected]”) causes the tool to highlight “telephone,” the next line down. The logic there is that without the comma, that line actually is the error. It makes logical sense, but can be confusing, so it’s worth pointing out.
Sublime Text’s “hidden” underscore feature
Validating structured data markup can be maddening, and every little trick helps. As your structured data gets more complicated, the number of sections and brackets and curly braces is likely to increase. Sublime Text has a feature you may not have noticed that can help you keep track of everything!
In the above image, I’ve placed my cursor on the first line associated with the “sameAs” property. Look closely and you’ll notice that Sublime Text has underscored the brackets associated with this grouping. If the cursor is placed anywhere inside the grouping, you’ll need those underscores.
I often use this feature to match up my brackets and/or curly braces to be sure I haven’t left any out or added in an extra.
Validating your structured data
Of course, the ultimate goal of all this error checking is to get your code to validate. The troubleshooting tips above will help you develop a bulletproof method of error checking, and you’ll end up with the euphoric feeling that validated markup gives!
Using Google search for unique cases
The lessons and examples in this guide should provide a solid, versatile knowledge base for most SEOs to work with. But you may run into a situation that you’re unsure how to accommodate. In those cases, Google it. I learned a lot about JSON-LD structured data and the Schema vocabulary by studying use cases (some that only loosely fit my situation) and fiddling with the code. You’ll run into a lot of clever and unique nesting techniques that will really get your wheels spinning.
Structured data and the future of search
The rumblings are that structured data is only going to become more important moving forward. It’s one of the ways Google gathers information about the web and the world in general. It’s in your best interest as an SEO to untie the knot of JSON-LD structured data and the Schema vocabulary, and I hope this guide has helped do that.