Similar URLs may be flagged as duplicate content by Google

Posted on March 12, 2021

In a recent Google Search Central SEO hangout, Google’s John Mueller revealed that if websites have pages with similar URLs, the search engine could end up flagging them as duplicate content – even if the content is unique on each page.

Mueller explained that the search engine uses multiple methods to determine whether a web page has duplicate content. The more widely known method is crawling and indexing, whereby Google takes a direct look at the content on a specific page. However, because this is a time-consuming process, Mueller explained that in order to conserve resources, Google also uses another method, in which it predicts whether a page may contain duplicate content based on its URL alone.

The URL method works by looking at patterns Google has established in the past from websites with similar URLs that do contain duplicate content.

As an example, let’s say you’re a plumber based in Ellesmere Port. You don’t just cover Ellesmere Port, though – you cover surrounding areas like Chester and Bromborough, and so you have multiple pages on your site explaining your plumbing services for each location, with the same URL apart from the location name.

Now, you know that Google doesn’t like duplicate content, so you’ve made sure that each and every page on your site has been written from scratch and is entirely unique. However, other plumbing businesses and tradesmen in the area haven’t been so thorough – on their websites, they’ve used exactly the same content on each page, changing only the location name. Google has recognised this, and has flagged these other sites as having duplicate pages with URLs that look similar to your own site’s.

As a result, through no fault of your own, when it comes to assessing your website, the search engine looks at your URLs, where only the location is different, and thinks ‘this is exactly like those other sites – this site must have duplicate content too!’

Mueller explained that this is especially common with URLs that contain location names, saying:

“Essentially our systems recognize that what you specify as a city name is something that is not so relevant for the actual URLs. And usually we learn that kind of pattern when a site provides a lot of the same content with alternate names.”

How can you avoid this?

Mueller suggested several ways that website owners can try to prevent their pages from being flagged as duplicates in this way. The first tip – and perhaps the most obvious – is to avoid any overlaps in the content on your site.

If there are instances where you can’t avoid overlaps, he suggests using a canonical tag – a piece of code added to a page that tells search engines which version of the content is the “master” page that should appear in search results, signalling that the other pages containing the same content should not, thus getting around the issue of duplicate content.
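In practice, a canonical tag is a single line of HTML placed in the `<head>` of each near-duplicate page, pointing at the preferred version. A minimal sketch, with hypothetical URLs standing in for a real plumbing site:

```html
<!-- On the Chester and Bromborough service pages, point search engines
     to the preferred "master" page for this content.
     The domain and paths below are illustrative examples only. -->
<head>
  <link rel="canonical" href="https://www.example.com/plumber-ellesmere-port/" />
</head>
```

It’s worth noting that Google treats the canonical tag as a strong hint rather than a binding directive, so it works best alongside genuinely distinct content.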

Of course, the ideal scenario for every website is to have each page indexed by Google, and for all content to be original, with no chance of duplicates. To have quality, unique content for your site written, edited and uploaded for you, reach out to our team at Engage Web.
