Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Does omitted results shown by Google always mean that website has duplicate content?
-
A page on my site was appearing in the top 10 Google results for a particular query, but now it only appears after clicking on Google's "omitted results" link.
My website lists businesses in a particular locality, and results for different localities are sometimes the same, because we show results from nearby areas when the locality a user searches for has fewer than 15 businesses.
Will this be considered "duplicate content"? If so, what steps can be taken to resolve the issue?
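The fallback described in the question can be sketched as follows (the 15-listing threshold comes from the question; the function name and data shapes are hypothetical, purely for illustration):

```python
def listings_for(locality, businesses_by_locality, nearby_localities):
    """Return a locality's businesses, padding from nearby localities below 15."""
    results = list(businesses_by_locality.get(locality, []))
    if len(results) < 15:
        for other in nearby_localities.get(locality, []):
            results.extend(businesses_by_locality.get(other, []))
            if len(results) >= 15:
                break
    return results

data = {"alpha": ["a1", "a2"], "beta": ["b1", "b2", "b3"]}
nearby = {"alpha": ["beta"], "beta": ["alpha"]}

# Both localities fall below 15 listings, so each page ends up showing
# the union of the two - near-identical content on two different URLs.
print(listings_for("alpha", data, nearby))
print(listings_for("beta", data, nearby))
```

This is exactly how two locality pages can end up rendering the same business list, which is what raises the duplicate-content question.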
-
A page might go to the supplemental index when:
- Its content is not unique.
- It has no content, or very little content.
- It is a page not primarily meant to carry content, such as a sitemap, contact page, Terms and Conditions, etc.
- It is missing a title or meta description, or has duplicate ones.
-
Hi Prashant,
Yes - any URLs that differ are different URLs in Google's eyes, unless the difference comes after a # symbol.
So if you have www.example.com/key#value12345 and www.example.com/key#valuexyzabc, Google sees these as the same URL, i.e. www.example.com/key. Everything after the # character is ignored.
Any other change - a different query string, for example - means the URL has changed, and if the pages at those URLs serve the same content, it's duplicate content.
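Jane's point about fragments can be sketched with Python's standard `urllib.parse`: stripping everything after # yields the URL Google effectively sees. This is a minimal illustration of the fragment rule, not Google's actual canonicalization pipeline:

```python
from urllib.parse import urldefrag

# Google ignores the fragment, so these two URLs collapse to the same page.
url_a = "http://www.example.com/key#value12345"
url_b = "http://www.example.com/key#valuexyzabc"

seen_a, _ = urldefrag(url_a)  # "http://www.example.com/key"
seen_b, _ = urldefrag(url_b)
print(seen_a == seen_b)  # True: identical once the fragment is dropped

# A query string, by contrast, is part of the URL Google crawls.
url_c = "http://www.example.com/key?v=12345"
print(urldefrag(url_c)[0] == seen_a)  # False: a different URL
```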
I hope this helps.
Cheers,
Jane
-
Thanks Jane,
Will the following URLs be considered two different URLs?
1. www.example.com/key=value1&key2=value2
2. www.example.com/key2=value2&key=value1
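Per Jane's reply, two URL strings that differ anywhere before the # are technically distinct URLs, even if they serve identical content - and reordered query parameters do differ as strings. A hedged sketch of one common mitigation, normalizing parameter order before generating links (this assumes a conventional `?key=value` query string; the parameter names are from the question and otherwise hypothetical):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url: str) -> str:
    """Sort query parameters so parameter order never creates URL variants."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    sorted_query = urlencode(sorted(parse_qsl(query)))
    return urlunsplit((scheme, netloc, path, sorted_query, ""))  # drop fragment too

a = normalize("http://www.example.com/?key=value1&key2=value2")
b = normalize("http://www.example.com/?key2=value2&key=value1")
print(a == b)  # True: both collapse to the same canonical string
```

Emitting only the normalized form site-wide (or pointing a rel="canonical" at it) keeps parameter-order variants from being indexed as separate pages.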
-
Thanks David,
I found that a few of these URLs were not crawled by Googlebot for a month or so. When I checked the last-crawled date using the "cache:" operator, I found that these pages were only recently recrawled, and that is probably why the page is back in the top 10 results (the main index).
One question: when does a URL go into the "supplemental index"?
-
Hi Prashant,
This sounds like removal due to duplication rather than DMCA - when DMCA notices are the reason, the omission is usually noted as such, e.g. http://img.labnol.org/images/2008/07/googlesearchdmcacomplaint.png
Google likely sees these pages as duplicates, or near-duplicates, as David has said.
-
Digital Millennium Copyright Act being used here? No.
OP, it does sound like you have duplicate content issues. See what you can do to make those omitted pages more unique.
-
It's most likely because someone filed a DMCA takedown against that Google search result. Jump into your Google Webmaster Tools account and you should see a notification from Google about it.