Paginated Pages Which Shouldn't Exist
-
Hi
I have paginated pages showing up in a crawl for a page which shouldn't be paginated:
https://www.key.co.uk/en/key/chairs
My crawl shows:
https://www.key.co.uk/en/key/chairs?page=2
https://www.key.co.uk/en/key/chairs?page=3
https://www.key.co.uk/en/key/chairs?page=4
https://www.key.co.uk/en/key/chairs?page=5
https://www.key.co.uk/en/key/chairs?page=6
https://www.key.co.uk/en/key/chairs?page=7
https://www.key.co.uk/en/key/chairs?page=8
https://www.key.co.uk/en/key/chairs?page=9
https://www.key.co.uk/en/key/chairs?page=10
https://www.key.co.uk/en/key/chairs?page=11
https://www.key.co.uk/en/key/chairs?page=12
https://www.key.co.uk/en/key/chairs?page=13
https://www.key.co.uk/en/key/chairs?page=14
https://www.key.co.uk/en/key/chairs?page=15
https://www.key.co.uk/en/key/chairs?page=16
https://www.key.co.uk/en/key/chairs?page=17
Where is this coming from?
Thank you
-
You will also have to get those URLs out of the index once you fix the rel next/prev issue. To do that effectively, they should return a 404 or 410 HTTP status code so Google knows that they no longer exist (even though they never really did in the first place). Otherwise, you get what is known as a "soft 404", where the page doesn't really exist but returns a 200 (OK) status code, which is confusing to Google if you don't want them indexed.
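If it helps when briefing the developers, here's a minimal sketch of the idea, assuming a Flask-style app purely for illustration (the real stack, routes and page-count lookup will differ): any ?page= value beyond the last real page returns a hard 410 instead of a 200.

```python
# Illustrative Flask sketch (assumed framework, not the site's real stack).
# A ?page= value beyond the category's real page count returns 410 (Gone)
# instead of a 200 duplicate of page 1 (the "soft 404" described above).
from flask import Flask, abort, request

app = Flask(__name__)

# Stand-in data: the real site would look this up in its catalogue.
CATEGORY_PAGE_COUNTS = {"chairs": 1, "office-furniture": 3}

@app.route("/en/key/<slug>")
def category_page(slug):
    total_pages = CATEGORY_PAGE_COUNTS.get(slug)
    if total_pages is None:
        abort(404)  # unknown category

    page = request.args.get("page", default=1, type=int)
    if page < 1 or page > total_pages:
        # Hard 410 so URLs like /en/key/chairs?page=10 drop out of the index
        abort(410)

    return f"{slug}: page {page} of {total_pages}"
```

A 410 is a slightly stronger "gone for good" signal than a 404, but either will clear the soft 404s out of the index over time.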
-
Hi Becky
I can see chairs:
https://www.key.co.uk/en/key/chairs
But the paginated versions above are not in there (can you see them?).
All you need to do is remove this directive on pages that don't have a page 2: <link rel="next" href="https://www.key.co.uk/en/key/chairs?page=2">, as there is no page 2 for Chairs.
Regards
Nigel
-
Hi Nigel
Thanks for jumping in. I'm confused, as I have found the pages in my Screaming Frog crawl.
This page https://www.key.co.uk/en/key/chairs shouldn't have any pagination as there are no additional pages, but there is rel="next" in the source code...
Now I'm a bit confused!
Becky
-
Yes, I've just gone through every top-level page too and the pagination is awful, so I'm compiling a list and a case to push it.
It's pretty bad across the site, so I'll push for this to be updated. I find new issues with it all the time.
Thanks for your help!
-
Yes exactly. Even though the pages don't exist to the user, they still technically exist. If I were you, I'd take a very deep look at pagination on your site. If this is happening at scale, then fixing it could be a major improvement to your site. I took a look and it seems to be happening on all your top-level category pages like Chairs, Office Furniture, Shelving & Racking, etc.
These paginated pages are essentially a bunch of duplicate pages of your main category pages, each with a self-referencing canonical (which is the proper way to set up pagination). So Google could be extremely confused about which one to rank. In most cases, Google will rank page 1 because the use of rel="next"/rel="prev" is essentially telling Google that page 1 is the canonical version. However, you're still opening yourself up to the possibility of Google crawling all of these duplicate pages, which is a huge waste of your crawl budget.
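If you want to size the problem before taking it to the developers, a quick check along these lines (Python with the requests package; the category slugs below are examples, not the real list) will show which top-level pages emit rel="next" and whether their ?page=2 URLs still answer 200:

```python
# Rough audit sketch: flags category pages that emit rel="next" and
# paginated URLs that still return 200 (i.e. likely soft-404 duplicates).
import requests

BASE = "https://www.key.co.uk/en/key"
CATEGORIES = ["chairs", "office-furniture", "shelving"]  # example slugs only

for slug in CATEGORIES:
    url = f"{BASE}/{slug}"
    html = requests.get(url, timeout=10).text
    has_next = 'rel="next"' in html

    page2 = requests.get(f"{url}?page=2", timeout=10)
    print(f"{slug}: rel=next on page 1: {has_next}; "
          f"?page=2 status: {page2.status_code}")
```

Any category showing rel=next when it only has one page, or a 200 on ?page=2 where no second page exists, is a candidate for that list.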
Hope that helps!
-
Hi
Thank you both.
We do have issues with our pagination which I've raised with developers, but it's taking forever to sort out. I'll flag this as well.
So even though the content on the paginated pages for Chairs doesn't exist, we still need to remove the tags on these? e.g. https://www.key.co.uk/en/key/chairs?page=10
-
If you view your source code, you'll notice you are actually using rel="next" and rel="prev" on the main category page (https://www.key.co.uk/en/key/chairs). This is why you (and most likely Googlebot as well) are crawling these paginated pages. Even though you don't have links to the paginated pages on the main category page, they still exist and you're giving crawlers the directive (rel next / rel prev) to crawl them.
If you remove rel="next" on the category home page, that should help, but you should really remove rel="next" and rel="prev" on the paginated pages as well. Unless you do that, Google will still find and crawl them, because they're aware these pages exist and they're likely already indexed.
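In the template, the fix is usually just a conditional around those tags. Here's a rough sketch using Jinja2 (an assumed templating engine; the site's real one may differ): only print rel="prev"/rel="next" when the neighbouring page actually exists, so a one-page category like Chairs gets neither tag.

```python
# Sketch: emit pagination tags conditionally (Jinja2 assumed for illustration).
from jinja2 import Template

HEAD = Template("""<link rel="canonical" href="{{ base_url }}{% if page > 1 %}?page={{ page }}{% endif %}">
{%- if page > 1 %}
<link rel="prev" href="{{ base_url }}{% if page > 2 %}?page={{ page - 1 }}{% endif %}">
{%- endif %}
{%- if page < total_pages %}
<link rel="next" href="{{ base_url }}?page={{ page + 1 }}">
{%- endif %}""")

# A one-page category: only the self-referencing canonical is printed,
# with no rel="prev" or rel="next" at all.
print(HEAD.render(base_url="https://www.key.co.uk/en/key/chairs",
                  page=1, total_pages=1))
```

Rendered with page=1 and total_pages=1, that outputs just the canonical tag, which is all the Chairs page should carry; on a genuinely paginated category the prev/next tags come back automatically.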
Here's a great resource on understanding pagination as well as the correct use of rel="next" and rel="prev" from Maile Ohye at Google: https://www.youtube.com/watch?v=njn8uXTWiGg
Hope this helps!
Cheers!
-Tyler
-
Nice website by the way. It looks very professional. And your 49 DA is very impressive.