Google search update aims to show more diverse results from different domain names

Another Google search update has rolled out this week; this one deals with domain diversity in the search results.


Google announced on the Search Liaison Twitter account just now that it has updated its search results to show a more diverse set of search results. That means Google will aim to show no more than two results from the same domain for a particular query in the top results.

More diverse Google results. Searchers, along with SEOs, have complained over the years that sometimes Google shows too many listings for the top search results from the same domain name. So if you do a search for a particular query, you may see 4 or 5 of the top ten results from the same domain name. Google is looking to not show more than two results from the same domain with this search update.

Google said “A new change now launching in Google Search is designed to provide more site diversity in our results.” Google added, “This site diversity change means that you usually won’t see more than two listings from the same site in our top results.”

But not always. Google said it does reserve the right to show more than two results from the same domain name when it thinks it is appropriate. “However, we may still show more than two in cases where our systems determine it’s especially relevant to do so for a particular search,” Google wrote. I suspect this is related to branded queries, so if you are searching for a brand, like Amazon, you likely will see more than just two results listed in the search results.

Sub-domains. Google will generally treat sub-domains as part of the main domain. So if you have a subdomain, it will be considered part of the main domain and count toward the two results. Google said “Site diversity will generally treat subdomains as part of a root domain. IE: listings from subdomains and the root domain will all be considered from the same single site.”
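As a rough illustration of how that grouping works, here is a minimal Python sketch. The result URLs are hypothetical, and the naive root_domain helper ignores public-suffix rules such as .co.uk (a real implementation would use a public-suffix list); it simply counts top listings per root domain, folding subdomains into their root:

```python
# Sketch: group hypothetical SERP listings by root domain, treating
# subdomains as part of the root, the way the diversity change does.
from urllib.parse import urlparse
from collections import Counter

def root_domain(url):
    # Naive: last two host labels. Ignores multi-part suffixes like .co.uk.
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

results = [  # hypothetical top results for one query
    "https://www.example.com/page-a",
    "https://blog.example.com/page-b",
    "https://shop.example.com/page-c",
    "https://other-site.org/page-d",
]

counts = Counter(root_domain(u) for u in results)
print(counts)  # example.com is counted three times, exceeding the two-listing cap
```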

Of course, Google reserves the right to treat some subdomains differently, “However, subdomains are treated as separate sites for diversity purposes when deemed relevant to do so,” Google wrote.

Core results only. This only impacts the core results, not the additional search features such as top stories, video snippets, image carousels or other vertical search features listed among the other web results.

Danny Sullivan from Google added on Twitter, “It’s about the main listings, not various other displays on the search results.”

Yes. This is only about the main web search listings. It does not include things like featured snippets, map listings, etc.

Unrelated to the core update. Google clarified that this search update is unrelated to the June 2019 core update that began rolling out Monday. “Finally, the site diversity launch is separate from the June 2019 Core Update that began this week. These are two different, unconnected releases,” Google said.

But it started rolling out two days ago and is fully live today, Sullivan told us. “It started a little bit about two days ago but went fully live today,” he said.

So technically, your analytics and Search Console data can be impacted by both the June 2019 core update and this domain diversity update. How do you know which one impacted you?

However, Danny Sullivan thinks they are far enough apart that we should be able to distinguish between the two updates:

but yet you announced it so this change will be “noticeable” and thus impact analytics and search console data, right? it would be nice if you would have held off a week to let the June update roll out more before doing this, right? feedback for the team.

We launch things almost every day. Sometimes several in a single day. This is far enough out from the core update release that any stat changes can probably be distinguished.

Not an update. Google is saying this is not really an update and won’t have as much of an impact on your site. Danny Sullivan from Google added, “Personally, I wouldn’t think of it like an update, however. It’s not really about ranking. Things that ranked highly before still should. We just don’t show as many other pages.” Whatever you want to call it, it changed how some URLs are shown in the search results.

It’s not perfect. Yes, you will still find examples of Google showing more than two results from a single domain for a search result set. Google said “It’s not going to be perfect. As with any of our releases, we’ll keep working to improve it,” when they were given an example of a result set that showed too many results:

It’s not going to be perfect. As with any of our releases, we’ll keep working to improve it. You might also try it the way someone in Tustin would do it — “nail salons” or “nail salons near me” or “nail salons tustin.” If you’re in Tustin, you know you’re in CA 🙂

The announcement.

Have you ever done a search and gotten many listings all from the same site in the top results? We’ve heard your feedback about this and wanting more variety. A new change now launching in Google Search is designed to provide more site diversity in our results….

This site diversity change means that you usually won’t see more than two listings from the same site in our top results. However, we may still show more than two in cases where our systems determine it’s especially relevant to do so for a particular search….

Site diversity will generally treat subdomains as part of a root domain. IE: listings from subdomains and the root domain will all be considered from the same single site. However, subdomains are treated as separate sites for diversity purposes when deemed relevant to do so….

Finally, the site diversity launch is separate from the June 2019 Core Update that began this week. These are two different, unconnected releases.

History. Google has updated how domain diversity works in Google search many times over the years. In 2010, it said it “launched a change to our ranking algorithm that will make it much easier for users to find a large number of results from a single site.” In 2012, the pendulum began to swing back to more domain diversity in search results. And again in 2013, Google said it would show fewer results from the same domain name. Google has probably made numerous other changes to domain diversity in search; we just didn’t always have confirmation from Google.

Why we should care. This can impact those who try to get their domains to dominate for specific queries. This is most often seen in the reputation management industry but can also relate to other areas of search. If you have sites with two or more pages that rank for the same query in Google, you will want to track how this Google update impacts those sites.


Published on June 6, 2019

As a content marketer, I believe that the best way to improve domain authority—and therefore search engine ranking—is to earn a large number of links from high-authority publishers. To do this successfully, you have to produce top-quality, relevant and interesting content that journalists will want to write about.

How to Leverage Natural Syndication Networks to Earn More Links

In my last post for SEMrush, I talked about how digital PR specialists can improve their outreach emails, citing an internal analysis of my team’s successful pitches as evidence of what works. I talked through a few major points: subject lines, pitch length, closing, and follow up best practices.

If you read that post, you already know how to write a successful email pitch. In this post, we are going to cover who to pitch. That is, which publishers deliver the most bang for your buck (or, the most ROI on your pitch) when it comes to content marketing outreach.

In 2018, our media relations team earned 41,545 press mentions for our client content marketing campaigns. These press mentions appeared on a variety of top-tier, national, international, and local websites across all verticals. Note: 1,135 of those press mentions were the result of direct outreach to a journalist by one of our media relations specialists.

So, where did the other ~40,000 stories come from? Two words: natural syndication.

What Are Publisher Syndication Networks?

My first encounter with a publisher syndication network occurred over three years ago when I earned my first placement on USA Today; a great placement for someone new to media relations. After the initial excitement of seeing the story I pitched come to life wore off, it was business as usual. I continued pitching the content to other publishers. As I was list-building, however, I came across the same USA Today story—by the same author—published on multiple websites with unique domains. How could this be?

USA Today is owned by Gannett Company, an entity that owns over 100 daily newspapers as well as close to 1,000 weekly newspapers. While the article didn’t syndicate to all of Gannett’s properties, it did get published on about 20 separate online newspapers.

So, a single email pitch earned placements on 20 unique domains with distinctive audiences? That is some high ROI if you ask me.

USA Today isn’t alone in this effect. Other publishers act as hubs, or influencers, publishing articles that are either automatically syndicated (like in USA Today’s case) or organically “picked up” by other journalists looking for reputable stories to feature within their beat.

So, which publishers act as the biggest influencers of content distribution?

After three years of pitching content to nearly every online publisher in the United States, I have developed a deep intuition of which domains have the most syndication potential for our clients’ content. But my team wanted to learn more.

Using SEMrush, Fractl co-founder Kristin Tynski took a look at the link networks of the top 400 most trafficked American publishers online. She then used Gephi, a powerful network visualization tool, to make sense of this enormous web of links. Essentially, the visualization below shows the relationships each unique publisher has with the others.

Publisher Syndication Networks (click the image to interact with the visualization)

Tools like SEMrush and Gephi allow us to more deeply understand how online news publications and influential niche blogs interact with one another. We were able to gain insight into how content is distributed and syndicated through link networks.

There are some immediate relationships we can recognize just from clicking around on the visualization:

  • Nodes clustered around each other are publishers that link to each other often. One good example is a cluster of sites all owned by the same proprietor; the closeness of these nodes is the result of heavy interlinking and story syndication.
  • Some news publishers grouped near other news publishers have similar political leanings. Liberal-leaning publishers Politico, Salon, The Atlantic, and Washington Post are all grouped together, while more conservative publishers Breitbart, The Daily Caller, and BizPac Review are grouped.

Why Understanding Syndication Networks Will Vastly Increase Campaign Success

If you had the choice of sending a pitch and earning a placement or sending a pitch and earning 20 placements, which would you choose?

When you start to understand which news outlets have the largest syndication networks, you are empowered to prioritize those high-syndication publications over publications with lower reach.

After all, not all placements are created equally.

It all comes down to list-building. While both a travel writer at PopSugar and a travel writer at Reuters might enjoy your data-driven travel content, the Reuters writer has a wider influence, plain and simple.

As a result, the content you are pitching will earn significantly more widespread link pickups.

By using the visualization above, you can clearly identify the top publishers for list-building. They are not surprising: CNN, The New York Times, BBC, & Reuters are immediately obvious. The New York Times has earned the most Pulitzer Prizes of any publication—and that should give you an indication of why it is a trusted source for other journalists seeking authoritative story ideas.

If your content gets picked up by any of these sites, it is almost guaranteed that you will earn dozens—if not hundreds—of press mentions from other websites without any additional outreach on your part.

How to Leverage Natural Syndication Networks to Expand Your Reach

Step 1: Create Newsworthy, Relevant, Unique, and Share-worthy Content

There is a reason that earning a placement on the New York Times is such a big deal—it is notoriously difficult to do. If you are trying to organically earn exposure on influential content hubs like the NYT, the Washington Post, or CNN, you can’t start with a drab piece of content.

So what makes newsworthy, relevant, unique, and share-worthy content? According to SEMrush, “quality content relies on precise analytics and trustworthy data. When content is not supported by data, one can never be sure that it will directly hit the audience’s pain points.”

After years of experience, I confidently agree that data-driven content marketing is what produces the most high-quality backlinks we receive for our clients. The right data sources can become the basis of your content strategy.

Once you have your data, learn what makes content worthy of high-authority placements by including these three characteristics of high-quality content.

Step 2: Identify High-Authority Publishers with Large Natural Syndication Networks

Build out a list of dream publishers for your content.

When evaluating whether to select a publisher or a journalist for outreach, there are four main qualifiers you should be thinking about.

1. Topical Relevance

Identify a few choice writers or editors that cover the specific beat that your content is relevant to.

It might seem obvious, but this key qualifier is often neglected by PR practitioners who care more about pitch volume than their reputation. Irrelevant pitches are very high on the list of journalist pet peeves and will land your email in the trash folder 99% of the time.


2. Domain Authority

Domain Authority (DA) is “a search engine ranking score developed by Moz that predicts how well a website will rank on search engine result pages (SERPs). A Domain Authority score ranges from one to 100, with higher scores corresponding to a greater ability to rank.” — Moz

Domain Authority is a third-party metric developed by Moz to help marketers identify the strength of their website. While Google isn’t completely transparent about their ranking algorithm, DA is a scoring system that you could use to compare websites against each other. Domain Authority is a good qualifying metric to use when building an outreach list. Generally, the higher the DA of a website, the wider their audience, and the more likely that they have broad natural syndication networks. 

A good rule of thumb is to select publishers for your outreach list with a Domain Authority of 60 or higher, depending on the content you’re promoting. Moz created a free Google Chrome extension called the MozBar that you can use to check the DA of the news publishers on your list.
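A trivial sketch of that qualifying step, with made-up publishers and DA scores (in practice you would pull the scores from the MozBar or the Moz API):

```python
# Sketch: filter a hypothetical outreach list down to publishers at or
# above the DA-60 rule of thumb described above.
outreach_list = [
    {"publisher": "Big National Daily", "da": 94},
    {"publisher": "Regional Weekly", "da": 42},
    {"publisher": "Popular Niche Blog", "da": 61},
]

DA_THRESHOLD = 60
targets = [p for p in outreach_list if p["da"] >= DA_THRESHOLD]
print([p["publisher"] for p in targets])  # the two publishers at DA 60+
```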

3. Social Engagement

The goal of any good outreach activity should be to get whatever you are pitching in front of as many eyes as possible, within your target audience. For example, if you are representing a fitness brand and promoting jogging-related content, who would be better to pitch than a health and fitness journalist at a lifestyle publication with an active social media presence and a broad reach? And, how might you find this journalist?

Buzzsumo is one tool that every marketer should have in their arsenal — I use it every day to help find the most powerful influencers within any given niche. Here’s how a quick Buzzsumo search can help you identify who to pitch in a matter of minutes.


First, head over to Buzzsumo and select the Influencers tab at the top. Then, type in your search query.

For this example, I wrote “fitness writer”. Buzzsumo allows you to filter results, and for this query, I selected Bloggers, Influencers, Companies, and Journalists (I deselected “Regular people”). I also selected “Active Influencers” to ensure that this person actively engages on social media and “Verified Influencers” to weed out unverified Twitter users. After you have typed in your query and selected your filters, click “Search”.

Here is one of the first results I see from my “fitness writer” query:


When you check out Amanda Loudin’s Twitter profile, it is plain to see that she would be an excellent outreach target for a running campaign. Not only is her featured image a picture of her running, but her Twitter bio clearly shows her authority as a fitness writer for the Washington Post, Outside Online, Runner’s World, and ESPN Woman. She also has 11,000 followers and has tweeted in the last 3 hours. Who better to pitch than her for your jogging campaign? It is that easy.


4. The Potential for a Broad-Reaching Syndication Network

Unfortunately, not every target on your list can have broad-sweeping syndication networks. But you can be sure to at least include a few targets that have the potential to naturally syndicate content across the Internet.

By using the one-of-a-kind interactive visualization our co-founder created using Gephi and SEMrush, you can easily identify top publishers with massive link networks to include on your outreach list.

To figure out which sites enjoy the most links from the widest variety of sites, look to the most central nodes on the visualization. You will immediately see that Reuters, CNN, and the NYTimes are located at the center, with large volumes of links incoming from all over; this means that these sites get linked to most often by other sites, often as sources. If CNN covers something, other journalists might pick up the same story and write it up for their own publications.

You may also notice that the tighter a cluster of nodes is, the more interlinking happens within it. Publishers that appear closer together are often either owned by the same company (like Gannett) or have built-in automatic link syndication relationships. A good example is the Gizmodo Media Group, which owns Deadspin, Gizmodo, Jalopnik, Jezebel, Kotaku, and Lifehacker. The closeness of nodes in this group is the result of heavy interlinking and story syndication.
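The way hubs stand out in the graph can be sketched with a simple inbound-link count over a made-up edge list; in practice you would export real backlink data, but the idea is the same:

```python
# Sketch: find "hub" publishers by counting inbound links in a small,
# hypothetical edge list (linker -> linked-to). Sites with the most
# inbound edges are the likeliest syndication sources.
from collections import Counter

edges = [  # hypothetical link graph
    ("local-news-a.com", "bignewswire.com"),
    ("local-news-b.com", "bignewswire.com"),
    ("niche-blog.com",   "bignewswire.com"),
    ("local-news-a.com", "niche-blog.com"),
]

inbound = Counter(target for _, target in edges)
hubs = inbound.most_common()
print(hubs)  # bignewswire.com leads with 3 inbound links
```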

After you have identified which publishers with large linking networks you would like to target, you can search for individual journalists at those publications using Google search operators. For example, say I want to include CNN on my outreach list for the jogging content. I might search for “site:cnn.com exercise” and filter results from the last year.

On the first page of results, this article comes up: “Three benefits of (and three precautions about) outdoor winter exercise”. The author of the article, Dr. Melina Jampolis, writes a column for CNN and might be a good fit for your data-driven jogging content. A placement with CNN could be the first of many as a result of CNN’s broad natural syndication network.

Step 3: Send a Killer Pitch

If you have great content, and you have built a list of influential journalists and publishers in your topic vertical, the only thing standing between you and a massive, diverse backlink portfolio is your pitch.

Between the subject line, the introduction, the pitch body and the close, that is a lot of pressure on a single email! Learn how to optimize your media pitch in my previous SEMrush post.

Not All Placements Are Considered Equal

Taking advantage of natural publisher syndication networks can mean the difference between generating less than a handful of links or hundreds of press mentions for your content.

I encourage you to test this strategy out with your next content marketing campaign. If you do, let me know how it went in the comments below!

The link graphs of news syndication networks were built using backlink exports from SEMrush. The visualization we created is free for you to use to optimize your own content outreach practices.


Pagination; You’re Doing it Wrong!

There still seems to be a lot of confusion surrounding the issue of pagination — what it is, what its real purpose is, when it should be employed, how to implement it properly, whether it should be combined with a canonical, and more.

Hopefully, this series of articles will provide you with a better understanding of how pagination can help you rank the pages that are important to you. Most importantly, it should help you avoid some of the most common errors that can make your pagination work against you.

In this article we will cover:

  • What is pagination
  • Why do we need pagination
  • Why is pagination important for SEO
  • Common SEO issues resulting from improperly handled pagination
  • How to diagnose common pagination issues
  • How to properly implement pagination attributes

But before we start, it’s important to remember that pagination is only a suggestion to Google that you prefer the sequential pages to be consolidated as contextually related. Using rel=“prev” and rel=“next” attributes tells the search engines that a page has a topical tie with adjacent pages in the series. That may be just two pages, or it may be two thousand – it works the same. When implemented properly, Google will normally heed that request, but conflicting signals can cause the search engine to ignore your pagination, which will often be to the detriment of your rankings.

What is pagination?

Pagination is simply a manner of ordering sequential pages which are contextually connected, to provide continuity to both users and search engines. It is accomplished by placing rel=“prev” and rel=“next” attributes in the head of each page in the series.

There are two common instances in which pagination should be implemented. The first is for paginated posts or articles, where you have a long document which you prefer to break into multiple pages. More often, however, it is utilized in paginating archives, such as those encountered with product listings on an eCommerce site that include various sizes, colors, or models. This image from Google’s Webmaster’s Blog illustrates both:

Article series suitable for rel next and prev

Why do we need pagination?

There are a number of reasons to employ pagination. For one thing, it is a method of telling the search engines which content should be considered to be part of a series or set, in order for them to assign indexing properties to the entire series, rather than to just one page.

By tying similar content together, it also shows the search engine how much content on your website is relevant to a particular topic; this can serve to help your site stand out above competing websites.

Breaking expansive pages up into smaller chunks also makes it easier for users to digest the content on the page. Users can also find pagination useful in facilitating navigation through extensive lists.

Why is pagination important for SEO?

Pagination can affect the SEO efforts on your website in two different ways. The first is for the purposes of ranking and internal flow of link equity. Proper pagination informs the search engine that link equity should be distributed across the entire paginated document, rather than to just one page. Obviously, sending the wrong signal here could seriously affect the distribution of that equity.

There can be other SEO impacts, as well; if the search engine fails to consider the series of pages as a single document, the amount of content that would be considered relevant to the targeted topic could be greatly diminished.

It is also worth remembering that Google is critical of any practices which aren’t user-friendly, such as an awkward site architecture or clumsy navigation. The process of setting up proper pagination will often lead you to the discovery and correction of such issues, which can only help.

Common SEO Issues Resulting From Improperly Handled Pagination

One common indicator of incorrect pagination will be index bloat — this is not as great a problem with small to medium-sized sites; it is most often a problem with large eCommerce sites which include a lot of products with different varieties, such as color or size.

When not properly paginated, every page of a series may be indexed as a separate document, which consumes crawl budgets (on larger sites), as well as making pages compete with one another for SERP placement.

Unimportant pages, such as tag pages, can also create index bloat, again consuming the crawl bots’ time on your site which would be better spent indexing your important pages.

While there is no such thing as a “duplicate content penalty”, duplicate content can still have a detrimental effect on your SEO efforts. If not adequately addressed, it can dilute the focus of your site’s content, thus making it much more difficult to rank for target terms.

As stated earlier, along with topical dilution, a failure to paginate correctly can also cause significant dilution of link equity.

How to Diagnose Common Pagination Issues – Part 1

There are several things to look for when trying to determine if a site has properly implemented the most appropriate pagination. What is advisable in each circumstance will depend upon how the content is structured, both contextually and physically.

Is there a View All version?

The first thing I check on archive pages is whether there is a “View All” version which allows users to view the entire document, in addition to the individual pages of the series. I check that first because that will impact how canonicals should be implemented in that series.

Without a “View All” version, each page of the series should self-canonicalize. But with a “View All” page, every page of the series should canonicalize to the “View All” page, as illustrated in this example from the Google Webmaster Blog.

Pagination article with a view-all page

When Google detects the presence of a view-all page, they normally try to show that page in the SERPs, as well as consolidating indexing properties, like links, to the “View All” page. It is still a good idea to canonicalize to the “View All” page, though, even though Google tries to consolidate to it (if they detect it without a canonical tag). When there is no “View All” page, they will often display the first page of the series when they detect paginated content.

Is rel=prev/next properly implemented?

The only difference in the pagination structure when a “View All” page is provided is that in addition to the rel=“prev” and rel=“next”, there should be a link rel=“canonical” tag pointing to the “View All” page of the series.

The first page of the series should have just the rel=“next” attribute, pointing to the second page.

The last page should have only rel=“prev”, pointed at the previous page. All the rest of the pages in the series should have both, always pointed at the adjacent previous and next pages.

How to Properly Implement Pagination Attributes

Some rules to always adhere to:

  • When breaking an article, post or another document into a series of pages, tie them together with rel=“prev” and rel=“next” to let the search engine see the connection. The first page should have just the rel=“next” attribute, the last page should have only rel=“prev”, and the rest of the pages in the series should have both, always pointed at the adjacent pages.
  • Always self-canonicalize all pages in a series (unless there is a “View All” page, in which case all pages should canonicalize to the “View All” page).
  • Avoid the use of a “View All” page when the page would be too long to be easily used, or if the page would take too long to load.
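To make the rules above concrete, here is a small Python sketch. The `pagination_tags` helper and the example.com URLs are illustrative assumptions, not part of any standard; the function emits the correct head tags for page `i` of an `n`-page series, with an optional "View All" URL that becomes the canonical when present:

```python
# Sketch: generate the <head> pagination tags for page i of n, following
# the rules above. First page gets only rel="next", last page only
# rel="prev", middle pages get both; canonical is the page itself unless
# a "View All" URL is supplied.
def pagination_tags(i, n, base="https://example.com/article", view_all=None):
    tags = []
    if i > 1:
        tags.append(f'<link rel="prev" href="{base}?page={i - 1}" />')
    if i < n:
        tags.append(f'<link rel="next" href="{base}?page={i + 1}" />')
    canonical = view_all if view_all else f"{base}?page={i}"
    tags.append(f'<link rel="canonical" href="{canonical}" />')
    return tags

for line in pagination_tags(1, 5):
    print(line)
```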

Simple Rel Prev/Next Implementation

(The example.com URLs in the examples below are illustrative placeholders for your own URLs.)

Page 1

<link rel="next" href="https://example.com/article?page=2" />

Page 2

<link rel="prev" href="https://example.com/article?page=1" />
<link rel="next" href="https://example.com/article?page=3" />

Page 3

<link rel="prev" href="https://example.com/article?page=2" />
<link rel="next" href="https://example.com/article?page=4" />

Page 4

<link rel="prev" href="https://example.com/article?page=3" />
<link rel="next" href="https://example.com/article?page=5" />

Page 5

<link rel="prev" href="https://example.com/article?page=4" />

Rel prev/next implementation without a View All page

Page 1

<link rel="next" href="https://example.com/article?page=2" />
<link rel="canonical" href="https://example.com/article?page=1" />

Page 2

<link rel="prev" href="https://example.com/article?page=1" />
<link rel="next" href="https://example.com/article?page=3" />
<link rel="canonical" href="https://example.com/article?page=2" />

Page 3

<link rel="prev" href="https://example.com/article?page=2" />
<link rel="next" href="https://example.com/article?page=4" />
<link rel="canonical" href="https://example.com/article?page=3" />

Page 4

<link rel="prev" href="https://example.com/article?page=3" />
<link rel="next" href="https://example.com/article?page=5" />
<link rel="canonical" href="https://example.com/article?page=4" />

Page 5

<link rel="prev" href="https://example.com/article?page=4" />
<link rel="canonical" href="https://example.com/article?page=5" />

Rel prev/next implementation with a View All page

Page 1

<link rel="next" href="https://example.com/article?page=2" />
<link rel="canonical" href="https://example.com/article?view=all" />

Page 2

<link rel="prev" href="https://example.com/article?page=1" />
<link rel="next" href="https://example.com/article?page=3" />
<link rel="canonical" href="https://example.com/article?view=all" />

Page 3

<link rel="prev" href="https://example.com/article?page=2" />
<link rel="next" href="https://example.com/article?page=4" />
<link rel="canonical" href="https://example.com/article?view=all" />

Page 4

<link rel="prev" href="https://example.com/article?page=3" />
<link rel="next" href="https://example.com/article?page=5" />
<link rel="canonical" href="https://example.com/article?view=all" />

Page 5

<link rel="prev" href="https://example.com/article?page=4" />
<link rel="canonical" href="https://example.com/article?view=all" />


It is all too easy to forget to open or close quotation marks or to use a – instead of an = sign. So when you are through setting up your pagination, run it through a checking tool to be certain each page is pointing to the correct previous and next page, as well as where the canonical is pointed. (Max Prin and Alexis Sanders have a number of handy free tools on their site, so you may want to bookmark it.)
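As a rough illustration of what such a check does, here is a minimal, standard-library-only Python sketch. The `PaginationTagParser` class and the sample head markup are hypothetical; it simply extracts the prev/next/canonical hrefs from a page so you can verify each points where you expect:

```python
# Sketch: pull rel="prev"/"next"/"canonical" hrefs out of a page's
# <head> markup so they can be checked against the expected URLs.
from html.parser import HTMLParser

class PaginationTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        # Self-closing <link ... /> tags are routed here by HTMLParser.
        if tag == "link":
            a = dict(attrs)
            if a.get("rel") in ("prev", "next", "canonical"):
                self.tags[a["rel"]] = a.get("href", "")

page_2_head = """
<link rel="prev" href="https://example.com/article?page=1" />
<link rel="next" href="https://example.com/article?page=3" />
<link rel="canonical" href="https://example.com/article?page=2" />
"""

parser = PaginationTagParser()
parser.feed(page_2_head)
print(parser.tags)  # prev, next, and canonical hrefs found on the page
```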

What’s Next?

Parts 2 and 3 of this series of articles will dive further into pagination to cover robots instructions, JavaScript, infinite scroll, and how conflicting canonical and robots instructions can damage your rankings. Stay tuned!